
KR20150025365A - Method for Producing 3-Dimensional Contents in Terminal, and the Terminal thereof - Google Patents

Method for Producing 3-Dimensional Contents in Terminal, and the Terminal thereof Download PDF

Info

Publication number
KR20150025365A
Authority
KR
South Korea
Prior art keywords
dimensional
dimensional object
terminal
content
gesture
Prior art date
Application number
KR20130102959A
Other languages
Korean (ko)
Inventor
하현주
임영재
최성우
정태식
Original Assignee
LG Uplus Corp. (주식회사 엘지유플러스)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by LG Uplus Corp. (주식회사 엘지유플러스)
Priority to KR20130102959A priority Critical patent/KR20150025365A/en
Publication of KR20150025365A publication Critical patent/KR20150025365A/en

Links

Images

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/016 — Input arrangements with force or tactile feedback as computer generated output to the user
    • G06F3/017 — Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06F3/048 — Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 — Interaction techniques based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815 — Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • G06F3/0482 — Interaction with lists of selectable items, e.g. menus
    • G06F3/14 — Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F3/16 — Sound input; Sound output

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A method for producing three-dimensional content in a terminal is disclosed. According to an embodiment, a method for producing three-dimensional content in a terminal comprises the steps of: sensing a predetermined gesture for a two-dimensional object displayed on a screen, wherein the gesture is input from a user; providing visual feedback that the two-dimensional object is changed in response to the gesture; generating a three-dimensional object corresponding to the input of the gesture with respect to the two-dimensional object; sensing a predetermined gesture for the three-dimensional object; and adding an effect parameter to the three-dimensional object. Therefore, the method allows a terminal to generate a three-dimensional object from a two-dimensional object, display text on the three-dimensional object, and transform the shape of the three-dimensional object when a gesture for the three-dimensional object is sensed.

Description

TECHNICAL FIELD [0001] The present invention relates to a method for producing three-dimensional content in a terminal, and to a terminal therefor.

The following embodiments relate to a method for producing three-dimensional content in a terminal.

As device manufacturing technology develops and the production and use of applications increase, it has become possible to draw pictures on a terminal. Recently, 3D broadcast content is being produced in large quantities, the production of and demand for 3D TVs are increasing, and 3D has drawn public attention with the development of 3D printers.

Korean Patent Publication No. 10-2010-0119940, entitled "Apparatus and Method for Generating 3D Content in a Portable Terminal," discloses a configuration that transforms the model of a three-dimensional object according to the intensity of a user's touch input.

The embodiments can generate a three-dimensional object from a two-dimensional object in a terminal, display text on the three-dimensional object, and, when a gesture for the three-dimensional object is detected, modify the appearance of the three-dimensional object.

In addition, embodiments may share three-dimensional content with a neighboring terminal, and may make three-dimensional content together with a neighboring terminal.

A method for producing three-dimensional content in a terminal according to one aspect includes: sensing a gesture for a two-dimensional object displayed on a screen and providing visual feedback that the two-dimensional object changes according to the gesture; generating a three-dimensional object corresponding to the input of the gesture for the two-dimensional object; sensing a gesture for the three-dimensional object and adding an effect parameter to the three-dimensional object; and storing the three-dimensional object to which the effect parameter is added in a memory means as three-dimensional content.
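The claimed sequence of steps can be sketched as a minimal pipeline. Every name below, and the dictionary mapping 2D profiles to solids, is an illustrative assumption, not the patent's implementation:

```python
def produce_3d_content(gesture_2d, gesture_3d, store):
    """Sketch of the claimed steps; all names are illustrative."""
    # Steps 1-2: sense a gesture on the 2D object and, while providing visual
    # feedback, generate the corresponding 3D object (solid of revolution).
    revolve = {"rectangle": "cylinder", "semicircle": "sphere",
               "right triangle": "cone"}
    shape, _gesture = gesture_2d
    obj_3d = {"shape": revolve[shape], "effects": []}
    # Step 3: sense a gesture on the 3D object and add an effect parameter.
    obj_3d["effects"].append(gesture_3d)
    # Step 4: store the result as 3D content in the memory means.
    store.append(obj_3d)
    return obj_3d
```

Calling `produce_3d_content(("rectangle", "rotate"), "bouncing", store)` would, under these assumptions, leave a bouncing cylinder in `store`.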

The method of producing three-dimensional content in a terminal according to an exemplary embodiment may further include providing an editing tool menu to the user to add the three-dimensional content to another content.

The method may further include providing an editing tool menu to the user to display the text input from the user in the three-dimensional content.

In the 3D content, a predetermined action may be performed on the 3D content by the gesture, and at least one of visual, auditory, and tactile feedback can be provided.

The method may further include providing the user with an editing tool menu that can load one or more three-dimensional contents stored in the memory means and combine them to generate new three-dimensional content.

The method may further include: transmitting a sharing request message to a neighboring terminal to share, with the neighboring terminal, the 3D object or the 3D object to which the effect parameter is added; and, when there is a response to the sharing request from the neighboring terminal, receiving a changed effect parameter of the 3D object or of the 3D object to which the effect parameter is added.

Further, the effect parameter may be any one of bouncing, rolling, rotating, and random movement of the three-dimensional object.

In addition, the effect parameter may be any one of audio, video, and image data reproduced together with the three-dimensional object.

According to another aspect, there is provided a computer-readable recording medium on which a program for executing the method for producing three-dimensional content in the terminal is recorded.

A terminal according to one aspect includes: a visual feedback providing unit that senses a gesture for a two-dimensional object displayed on a screen and provides visual feedback that the two-dimensional object changes according to the gesture; a three-dimensional object generating unit that generates a three-dimensional object corresponding to the input of the gesture for the two-dimensional object; an effect parameter adding unit that senses a gesture for the three-dimensional object and adds an effect parameter to the three-dimensional object; and a storage unit that stores the three-dimensional object to which the effect parameter is added in a memory means as three-dimensional content.

The terminal according to an exemplary embodiment may provide the user with an editing tool menu for adding the three-dimensional content to other contents.

Also, an editing tool menu may be provided to the user to display the text entered from the user in the three-dimensional content.

In addition, in the 3D content, a predetermined action is performed on the 3D content by the gesture, and at least one of visual, auditory, or tactile feedback may be provided.

Also, it is possible to provide the user with an editing tool menu capable of calling one or more three-dimensional contents stored in the memory means and combining the one or more three-dimensional contents to generate new three-dimensional contents.

The embodiments can generate a three-dimensional object from a two-dimensional object in a terminal, display text on the three-dimensional object, and, when a gesture for the three-dimensional object is detected, modify the appearance of the three-dimensional object.

In addition, embodiments may share three-dimensional content with a neighboring terminal, and may make three-dimensional content together with a neighboring terminal.

FIG. 1 is a flowchart for explaining a method for producing three-dimensional content in a terminal according to an exemplary embodiment of the present invention.
FIG. 2 is a diagram for explaining visual feedback in which a two-dimensional object changes in a method of producing three-dimensional content in a terminal according to an exemplary embodiment.
FIG. 3 is a diagram for explaining text display of three-dimensional contents and rotation of three-dimensional contents in a method for producing three-dimensional contents in a terminal according to an exemplary embodiment.
FIG. 4 is a diagram for explaining a method for producing a new three-dimensional content in a method for producing three-dimensional content in a terminal according to an exemplary embodiment.
FIG. 5 is a block diagram illustrating a configuration of a terminal according to an embodiment.

Hereinafter, embodiments will be described in detail with reference to the accompanying drawings. However, the present invention is not limited to or by these embodiments. The same reference numerals in the drawings denote the same members.

FIG. 1 is a flowchart for explaining a method for producing three-dimensional content in a terminal according to an exemplary embodiment of the present invention.

According to an exemplary embodiment, a method for producing three-dimensional content can sense a gesture for a two-dimensional object displayed on a screen and provide visual feedback that the two-dimensional object changes according to the gesture (110). The two-dimensional object may include a simple figure such as a rectangle, a semicircle, or a right triangle, or a figure drawn by the user. In step 110, the two-dimensional object may be a pre-stored two-dimensional object or a two-dimensional object generated by the user. For example, to create a cylindrical three-dimensional object through a rotation gesture on a two-dimensional object, the user can call up a pre-stored rectangle or directly draw a rectangle. In addition, all or part of an image captured by the user can be used as a two-dimensional object.

A method for producing three-dimensional content according to an exemplary embodiment may generate a three-dimensional object corresponding to the input of the gesture for the two-dimensional object (120). When a rotation gesture is input on a rectangular, semicircular, or right-triangular two-dimensional object, the resulting three-dimensional object may be a cylinder, a sphere, or a cone, respectively. The rotation gesture may be a left-right or up-down rotation, and a three-dimensional object may also be generated through various gestures combining left, right, up, and down.
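The rectangle-to-cylinder, semicircle-to-sphere, and right-triangle-to-cone pairs above are the classic solids of revolution. A hedged sketch of that mapping — the function name, parameter names, and returned volumes are standard geometry used for illustration, not the patent's code:

```python
import math

def revolve(profile, **dims):
    """Map a 2D profile rotated about a vertical axis to its 3D solid."""
    if profile == "rectangle":        # width r, height h -> cylinder
        return "cylinder", math.pi * dims["r"] ** 2 * dims["h"]
    if profile == "semicircle":       # radius r -> sphere
        return "sphere", 4.0 / 3.0 * math.pi * dims["r"] ** 3
    if profile == "right_triangle":   # base r, height h -> cone
        return "cone", math.pi * dims["r"] ** 2 * dims["h"] / 3.0
    raise ValueError(f"unknown profile: {profile}")
```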

A method for producing three-dimensional content according to an exemplary embodiment may sense a gesture for the three-dimensional object and add an effect parameter to the three-dimensional object (130). Here, the effect parameter may be any one of bouncing, rolling, rotating, and random movement of the three-dimensional object. For example, if the three-dimensional object is a "sphere", the bouncing effect parameter may cause the "sphere" to bounce like a ball, and the rolling effect parameter may make the "sphere" roll. When a rotation effect parameter is added to a "top"-shaped three-dimensional object, the "top"-shaped object can keep rotating. If a random movement parameter is added to a three-dimensional object, the object can be made to move only along a path the user specifies. For example, if the user specifies a random movement along a half-moon shape, the three-dimensional object moves only in that half-moon shape until there is another input.
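One plausible way such motion effect parameters could drive an object's pose frame by frame is sketched below; the formulas are illustrative placeholders, not the patent's animation model:

```python
import math

def apply_effect(effect, t):
    """Return an illustrative (x, y, angle) pose at time t for an effect."""
    if effect == "bouncing":
        return 0.0, abs(math.sin(t)), 0.0     # ball-like vertical bounce
    if effect == "rolling":
        return t, 0.0, -t                     # translate while spinning
    if effect == "rotating":
        return 0.0, 0.0, t                    # spin in place, like a top
    if effect == "random":
        return math.cos(t), math.sin(t), 0.0  # user-constrained arc path
    raise ValueError(f"unknown effect: {effect}")
```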

Further, the effect parameter may be any one of audio, video, and image data reproduced together with the three-dimensional object. For example, when the three-dimensional object is a "drum", a drum sound can be added to it. When the three-dimensional object is a "TV", a video file can be added so that the effect parameter makes it appear as if a broadcast is playing on the TV. If the three-dimensional object is a "frame", an image file such as a photograph may be added to the "frame" as the effect parameter.

A method for producing three-dimensional content according to an embodiment may store the three-dimensional object to which the effect parameter is added in a memory means as three-dimensional content (140). The memory means may comprise internal storage embedded in the terminal and may include external memory (e.g., an SD card). In addition, the memory means may comprise web storage (a "web hard").
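A minimal sketch of storing the finished content in a "memory means": here a JSON file stands in equally for internal storage, an SD card, or web storage. The file format and helper names are assumptions for illustration:

```python
import json
import pathlib

def store_content(content, directory):
    """Persist 3D content as a JSON file under the storage directory."""
    path = pathlib.Path(directory) / f"{content['name']}.json"
    path.write_text(json.dumps(content))
    return path

def load_content(path):
    """Read 3D content back from the memory means."""
    return json.loads(pathlib.Path(path).read_text())
```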

According to an exemplary embodiment, a method for producing three-dimensional content can provide the user with an editing tool menu for adding three-dimensional content to other content. Here, the other content may include a 3D photograph or a 3D video. It is also possible to provide the user with an editing tool menu capable of loading one or more three-dimensional contents stored in the memory means and combining them to create new three-dimensional content. Here, the editing tool menu may include "import", "save", "edit", and "transfer".

Through "import", one or more three-dimensional contents stored in the memory means can be brought onto the screen. For example, a 3D photograph stored in the memory means, other content such as a 3D video, or 3D content generated by the user can be called up. When a 3D photograph or 3D video is loaded, the 3D content generated by the user can be added to it: the 3D photograph becomes the background, and the user can place the generated 3D content at a desired position.

Through "save", the generated three-dimensional content can be stored in the memory means.

Through "edit", it is possible to edit the imported three-dimensional content or the three-dimensional content created by the user. "Edit" may include the submenus "decorating" and "text input". Through "decorating", the graphic effect of the 3D content can be changed; for example, the color of the three-dimensional content can be changed, and a pattern (for example, a flower pattern) can be added. Through "text input", text can be entered into the 3D content, and the font, font size, and the like of the input text can be changed.

Through "transfer", three-dimensional content can be transmitted to another terminal. For example, the three-dimensional content can be sent to another terminal through short-range wireless communication (for example, NFC), or attached to an e-mail or message. The 3D content can also be transmitted to another terminal using an instant messaging service.

The editing tool menu listed above is only an example according to an embodiment, and the editing tool menu is not limited to the above.
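The four menu items described above could be wired to handlers roughly as follows. The class and its placeholder behaviors are assumptions for illustration, not the patent's design:

```python
class EditingToolMenu:
    """Hypothetical dispatch for the "import"/"save"/"edit"/"transfer" items."""

    def __init__(self, memory):
        self.memory = memory   # list standing in for the memory means
        self.canvas = []       # 3D contents currently shown on screen

    def do(self, action, item=None):
        if action == "import":      # bring stored 3D content onto the screen
            self.canvas.append(item)
        elif action == "save":      # store generated 3D content
            self.memory.append(item)
        elif action == "edit":      # e.g. "decorating" or "text input"
            self.canvas[-1] = item
        elif action == "transfer":  # hand off to NFC / mail / messaging (stub)
            return ("send", item)
        else:
            raise ValueError(f"unknown action: {action}")
```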

According to an exemplary embodiment of the present invention, a method of generating three-dimensional content can provide a user with an editing tool menu that displays text input from a user in three-dimensional content. The editing tool menu may be included in the above-described editing tool menu.

According to one embodiment, a predetermined action may be performed on the produced 3D content by a gesture, so that any one or more of visual, auditory, and tactile feedback can be provided. For example, if there is text in the three-dimensional content and a rotation gesture is input on the content, visual feedback may be provided in which the three-dimensional content rotates and the text rotates with it. In another example, when a user's gesture (e.g., touching and holding for three seconds) is input on the three-dimensional content "ball", the "ball" may pop with visual feedback such as a firecracker effect, and auditory feedback may be provided in which a firecracker sound or music is played. In addition, if there is a user's gesture on the three-dimensional content "ball", the "ball" may pop and tactile feedback such as vibration can be provided to the user.

According to one embodiment, the produced 3D content may include an indicator guiding the user's gesture so that a predetermined action included in the 3D content is performed. When a gesture according to the indicator is input, the predetermined action is performed on the three-dimensional content to provide at least one of visual, auditory, and tactile feedback. The indicator may be entered by the user. For example, a text indicator reading "Touch the ball" may be displayed on the three-dimensional content "ball", and when the user touches the "ball", the "ball" may pop and music may be played.

According to an exemplary embodiment, a method for producing 3D content may transmit a sharing request message to a neighboring terminal to share, with the neighboring terminal, a 3D object or a 3D object to which an effect parameter is added. The terminal can transmit the sharing request message via a server: the terminal sends the sharing request to the server, and the server forwards it to the neighboring terminal. The terminal may also transmit the sharing request message to the neighboring terminal directly through short-range wireless communication.

A method for producing three-dimensional content according to an exemplary embodiment may receive a changed effect parameter of the three-dimensional object, or of the three-dimensional object to which the effect parameter is added, when there is a response to the sharing request from the neighboring terminal. When the neighboring terminal receives the three-dimensional object or the three-dimensional object to which the effect parameter is added, the user of the neighboring terminal can add a new effect parameter to it. The neighboring terminal can transmit the new effect parameter to the terminal, and the terminal can add it to its own three-dimensional object. For example, when the neighboring terminal adds a rotation effect parameter to the three-dimensional object, the rotation effect parameter is added to the terminal's three-dimensional object, so that the terminal's three-dimensional object rotates.
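The share-and-merge exchange described above might look like this in outline; the message flow and method names are assumptions, not the patent's protocol:

```python
import copy

class Terminal:
    """Two instances stand in for the terminal and the neighboring terminal."""

    def __init__(self, name):
        self.name = name
        self.content = None

    def request_share(self, peer, content):
        self.content = content
        return peer.accept_share(content)       # peer answers the share request

    def accept_share(self, content):
        self.content = copy.deepcopy(content)   # the peer works on its own copy
        return True

    def change_effect(self, peer, effect):
        self.content["effects"].append(effect)  # edit locally...
        peer.receive_effect(effect)             # ...and send the change back

    def receive_effect(self, effect):
        self.content["effects"].append(effect)
```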

According to one embodiment, a terminal can play a game by sharing the produced three-dimensional content with a neighboring terminal. The terminal can transmit a game request message to the neighboring terminal, which can reply with a response message. Once the neighboring terminal has responded and the two terminals are paired, the game can start. For example, the terminal and the neighboring terminal can play a game of putting a ball into the opponent's goal first, where the "ball" may be 3D content produced by the user. Either terminal may add user-generated three-dimensional content related to the game while playing; for example, it may set an obstacle such as a "stone" to interfere with the progress of the "ball". Another example is a game in which the three-dimensional content "room" is shared with other users, who must find another piece of three-dimensional content, a "frame", under a "cushion" located in the "room". Other users can view the "room" from the top, bottom, left, right, front, and back through gesture input. The "cushion" is behind the "sofa" and can only be seen when the "room" is viewed from the right. A user who has found the "cushion" can move it over the "sofa" with a gesture to reveal the "frame", receiving visual feedback of the "cushion" moving over the "sofa" while dragging. A user who has found the "frame" can then input a gesture on the "frame"; a predetermined action is performed on the "frame" by the input gesture, so that visual, auditory, or tactile feedback can be provided indicating that the game has ended.

FIG. 2 is a diagram for explaining visual feedback in which a two-dimensional object changes in a method of producing three-dimensional content in a terminal according to an exemplary embodiment.

Referring to FIG. 2, a rectangular two-dimensional object is displayed on the screen 210 of the terminal. The terminal may sense a predetermined gesture for the two-dimensional object. The predetermined gesture may include a clockwise rotation, a counterclockwise rotation, a tap, and a long tap. Once a gesture for the two-dimensional object is sensed, the terminal can provide visual feedback that the two-dimensional object changes according to the gesture.

A three-dimensional object corresponding to the input of the gesture can be generated while the visual feedback that the two-dimensional object changes is provided. For example, when a clockwise rotation gesture is input on a rectangle, the rectangular two-dimensional object can be transformed through the visual feedback into a cylinder, a three-dimensional object, displayed on the screen 230. In addition, when a long-tap gesture is input on a rectangle, a rectangular parallelepiped three-dimensional object can be generated.

FIG. 3 is a diagram for explaining text display of three-dimensional contents and rotation of three-dimensional contents in a method for producing three-dimensional contents in a terminal according to an exemplary embodiment.

Referring to FIG. 3, three-dimensional content is displayed on a screen 310 of a terminal, with "Congratulations" shown on it. The terminal may provide the user with an editing tool menu that displays text entered by the user in the three-dimensional content. For example, if a gesture is input on the three-dimensional content, a text input window can be displayed, and the user can enter text through it. Through the "text input" menu, the text can be displayed in the three-dimensional content.

On the screen 310, if a user's rotation gesture is input, the three-dimensional content can rotate in the direction of the rotation gesture. For example, if the user inputs a clockwise rotation gesture, the three-dimensional content rotates clockwise. Here, the text displayed on the three-dimensional content moves together with it.

The screen 320 shows the text rotating together with the three-dimensional content. On the screen 310, the text is shown in front; on the screen 320, the three-dimensional content has been rotated clockwise by the rotation gesture, so that only part of the text is visible. When the three-dimensional content rotates a further 180 degrees clockwise, its back side is shown and the text seen on the screen 310 cannot be seen.

FIG. 4 is a diagram for explaining a method for producing a new three-dimensional content in a method for producing three-dimensional content in a terminal according to an exemplary embodiment.

Referring to FIG. 4, a predetermined gesture for a two-dimensional object is input to a screen 410 of the terminal, and a three-dimensional object is generated. An editing tool menu may be displayed at the bottom of the screen 410. Three-dimensional content can be loaded through the editing tool menu.

The screen 420 shows the loaded three-dimensional content displayed below the three-dimensional object generated by the user. The size of the loaded 3D content can be changed through gestures: for example, pinching two fingers together (pinch-in) on the loaded three-dimensional content can shrink it, and spreading them apart (pinch-out) can enlarge it. The loaded three-dimensional content can also be moved through a gesture.
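Pinch-based resizing is commonly computed as the ratio of finger distances between the start and end of the gesture; a sketch under that assumption (the function name is illustrative):

```python
import math

def pinch_scale(start_pts, end_pts):
    """Scale factor from two finger positions at gesture start and end."""
    def dist(pts):
        (x1, y1), (x2, y2) = pts
        return math.hypot(x2 - x1, y2 - y1)
    return dist(end_pts) / dist(start_pts)  # > 1 pinch-out, < 1 pinch-in
```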

The screen 430 shows a state in which the loaded three-dimensional content has been moved. For example, the loaded three-dimensional content can be touched, dragged, and dropped at a desired position.

According to one embodiment, new three-dimensional content may be produced together with a neighboring terminal. For example, suppose the terminal and the neighboring terminal produce a new piece of three-dimensional content, a robot. The terminal produces the head and torso, and the neighboring terminal produces the arms and legs. The terminal can receive the arms and legs from the neighboring terminal to assemble the robot; similarly, the neighboring terminal can receive the head and torso from the terminal.

FIG. 5 is a block diagram illustrating a configuration of a terminal according to an embodiment.

Referring to FIG. 5, a terminal 500 according to an embodiment includes a visual feedback providing unit 510, a three-dimensional object generating unit 520, an effect parameter adding unit 530, and a storage unit 540.

The visual feedback providing unit 510 may sense a predetermined gesture, input from a user, for a two-dimensional object displayed on a screen, and provide visual feedback that the two-dimensional object changes according to the gesture.

The three-dimensional object generation unit 520 can generate a three-dimensional object corresponding to the input of the gesture with respect to the two-dimensional object.

The effect parameter adding unit 530 may sense a predetermined gesture for the three-dimensional object and add an effect parameter to it. Here, the effect parameter may be any one of bouncing, rolling, rotating, and random movement of the three-dimensional object, or any one of audio, video, and image data reproduced together with the three-dimensional object.

The storage unit 540 may store the three-dimensional object to which the effect parameter is added in the memory means as three-dimensional content.
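The four units of FIG. 5 could be composed in code roughly as below; the class name and the internals of each unit are placeholders, not the patent's design:

```python
class Terminal500:
    """Sketch of units 510-540 composed into one terminal object."""

    def __init__(self):
        self.memory = []  # the memory means used by the storage unit 540

    def provide_visual_feedback(self, obj_2d, gesture):  # unit 510
        return {"object": obj_2d, "changing_with": gesture}

    def generate_3d_object(self, obj_2d):                # unit 520
        shape = "cylinder" if obj_2d == "rectangle" else obj_2d
        return {"shape": shape, "effects": []}

    def add_effect_parameter(self, obj_3d, effect):      # unit 530
        obj_3d["effects"].append(effect)
        return obj_3d

    def store(self, obj_3d):                             # unit 540
        self.memory.append(obj_3d)
```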

A terminal according to an exemplary embodiment may provide the user with an editing tool menu for adding three-dimensional content to other content, and with an editing tool menu for displaying text input from the user in the three-dimensional content. Here, visual feedback may be provided in which the displayed text moves together with the three-dimensional content in response to a rotation gesture input from the user.

A terminal according to an exemplary embodiment can provide a user with an editing tool menu that can invoke one or more three-dimensional contents stored in a memory means and combine one or more three-dimensional contents to generate new three-dimensional contents.

The descriptions of FIGS. 1 to 4 may be applied to each block shown in FIG. 5, so a detailed description thereof will be omitted.

The apparatus described above may be implemented as a hardware component, a software component, and/or a combination of hardware and software components. For example, the apparatus and components described in the embodiments may be implemented using one or more general-purpose or special-purpose computers, such as a processor, a controller, an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a field programmable gate array (FPGA), a programmable logic unit (PLU), a microprocessor, or any other device capable of executing and responding to instructions. The processing device may execute an operating system (OS) and one or more software applications running on the operating system. The processing device may also access, store, manipulate, process, and generate data in response to execution of the software. For ease of understanding, the processing device is sometimes described as a single entity, but those skilled in the art will recognize that it may include a plurality of processing elements and/or a plurality of types of processing elements. For example, the processing device may include a plurality of processors, or one processor and one controller. Other processing configurations, such as parallel processors, are also possible.

The software may include a computer program, code, instructions, or a combination of one or more of these, and may configure the processing device to operate as desired or command the processing device independently or collectively. The software and/or data may be embodied permanently or temporarily in any type of machine, component, physical device, virtual equipment, computer storage medium or device, or in a transmitted signal wave, so as to be interpreted by the processing device or to provide instructions or data to the processing device. The software may be distributed over networked computer systems and stored or executed in a distributed manner. The software and data may be stored on one or more computer-readable recording media.

The method according to an embodiment may be implemented in the form of program instructions that can be executed through various computer means and recorded on a computer-readable medium. The computer-readable medium may include program instructions, data files, data structures, and the like, alone or in combination. The program instructions recorded on the medium may be those specially designed and configured for the embodiments, or those known and available to those skilled in the art of computer software. Examples of computer-readable media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory. Examples of program instructions include machine language code such as that produced by a compiler, as well as high-level language code that can be executed by a computer using an interpreter or the like. The hardware devices described above may be configured to operate as one or more software modules to perform the operations of the embodiments, and vice versa.

While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, it will be understood by those skilled in the art that various changes may be made without departing from the spirit and scope of the invention. For example, suitable results may be achieved even if the described techniques are performed in a different order, and/or if components of the described systems, structures, devices, or circuits are combined in a different manner or replaced or supplemented by other components or their equivalents.

Therefore, other implementations, other embodiments, and equivalents to the claims are also within the scope of the following claims.

500: terminal

Claims (14)

A method for producing three-dimensional content in a terminal, the method comprising:
sensing a gesture on a two-dimensional object displayed on a screen and providing visual feedback in which the two-dimensional object changes according to the gesture;
generating a three-dimensional object corresponding to the gesture input on the two-dimensional object;
sensing a gesture on the three-dimensional object and adding an effect parameter to the three-dimensional object; and
storing, in a memory means, the three-dimensional object to which the effect parameter is added as three-dimensional content.
The method according to claim 1, further comprising:
providing a user with an editing tool menu for adding the three-dimensional content to other content.
The method according to claim 1, further comprising:
providing a user with an editing tool menu for displaying text input from the user in the three-dimensional content.
The method according to claim 1,
wherein the three-dimensional content performs a predetermined action according to the gesture so that at least one of visual, auditory, and tactile feedback is provided.
The method according to claim 1, further comprising:
providing a user with an editing tool menu that can invoke one or more pieces of three-dimensional content stored in the memory means and combine the one or more pieces of three-dimensional content to generate new three-dimensional content.
The method according to claim 1, further comprising:
transmitting a sharing request message to a peripheral terminal to share the three-dimensional object, or the three-dimensional object to which the effect parameter is added, with the peripheral terminal; and
receiving a changed effect parameter of the three-dimensional object, or of the three-dimensional object to which the effect parameter is added, when there is a response to the sharing request from the peripheral terminal.
The method according to claim 1,
wherein the effect parameter is any one of bouncing, rolling, rotating, and random movement of the three-dimensional object.
The method according to claim 1,
wherein the effect parameter is any one of audio, video, and image data reproduced together with the three-dimensional object.
A computer-readable recording medium having recorded thereon a program for executing the method according to any one of claims 1 to 8.
A terminal comprising:
a visual feedback providing unit for sensing a gesture on a two-dimensional object displayed on a screen and providing visual feedback in which the two-dimensional object changes according to the gesture;
a three-dimensional object generating unit for generating a three-dimensional object corresponding to the gesture input on the two-dimensional object;
an effect parameter adding unit for sensing a gesture on the three-dimensional object and adding an effect parameter to the three-dimensional object; and
a storage unit for storing, in a memory means, the three-dimensional object to which the effect parameter is added as three-dimensional content.
The terminal according to claim 10,
wherein the terminal provides a user with an editing tool menu for adding the three-dimensional content to other content.
The terminal according to claim 10,
wherein the terminal provides a user with an editing tool menu for displaying text input from the user in the three-dimensional content.
The terminal according to claim 10,
wherein the three-dimensional content performs a predetermined action according to the gesture so that at least one of visual, auditory, and tactile feedback is provided.
The terminal according to claim 10,
wherein the terminal provides a user with an editing tool menu that can invoke one or more pieces of three-dimensional content stored in the memory means and combine the one or more pieces of three-dimensional content to generate new three-dimensional content.
KR20130102959A 2013-08-29 2013-08-29 Method for Producing 3-Dimensional Contents in Terminal, and the Terminal thereof KR20150025365A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR20130102959A KR20150025365A (en) 2013-08-29 2013-08-29 Method for Producing 3-Dimensional Contents in Terminal, and the Terminal thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR20130102959A KR20150025365A (en) 2013-08-29 2013-08-29 Method for Producing 3-Dimensional Contents in Terminal, and the Terminal thereof

Publications (1)

Publication Number Publication Date
KR20150025365A true KR20150025365A (en) 2015-03-10

Family

ID=53021614

Family Applications (1)

Application Number Title Priority Date Filing Date
KR20130102959A KR20150025365A (en) 2013-08-29 2013-08-29 Method for Producing 3-Dimensional Contents in Terminal, and the Terminal thereof

Country Status (1)

Country Link
KR (1) KR20150025365A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101715651B1 (en) * 2016-01-28 2017-03-14 모젼스랩(주) System for providing interaction contents users location-based using virtual cloud
WO2020141788A1 (en) * 2019-01-04 2020-07-09 Samsung Electronics Co., Ltd. Home appliance and control method thereof
KR20200085184A (en) * 2019-01-04 2020-07-14 삼성전자주식회사 Home appliance and control method thereof
US11262962B2 (en) 2019-01-04 2022-03-01 Samsung Electronics Co., Ltd. Home appliance and control method thereof
US11599320B2 (en) 2019-01-04 2023-03-07 Samsung Electronics Co., Ltd. Home appliance and control method thereof

Similar Documents

Publication Publication Date Title
CN107852573B (en) Mixed reality social interactions
US11282264B2 (en) Virtual reality content display method and apparatus
US10725297B2 (en) Method and system for implementing a virtual representation of a physical environment using a virtual reality environment
US10726625B2 (en) Method and system for improving the transmission and processing of data regarding a multi-user virtual environment
US11412159B2 (en) Method and apparatus for generating three-dimensional particle effect, and electronic device
KR102052567B1 (en) Virtual 3D Video Generation and Management System and Method
AU2014201003A1 (en) Display apparatus and display method for displaying a polyhedral graphical user interface
CN110321048A (en) The processing of three-dimensional panorama scene information, exchange method and device
WO2018086532A1 (en) Display control method and apparatus for surveillance video
KR102582407B1 (en) Methods, systems, and media for rendering immersive video content with foveated meshes
JP2014021570A (en) Moving image generation device
US20160239095A1 (en) Dynamic 2D and 3D gestural interfaces for audio video players capable of uninterrupted continuity of fruition of audio video feeds
KR102257132B1 (en) Methods, systems and media for creating and rendering immersive video content
Papaefthymiou et al. Mobile Virtual Reality featuring a six degrees of freedom interaction paradigm in a virtual museum application
KR20150025365A (en) Method for Producing 3-Dimensional Contents in Terminal, and the Terminal thereof
JP2017084215A (en) Information processing system, control method thereof, and program
JP2015170232A (en) Information processing device, method of controlling the same, and program
US11511190B2 (en) Merge computer simulation sky box with game world
JP6357412B2 (en) Information processing apparatus, information processing system, information processing method, and program
WO2022235527A1 (en) Create and remaster computer simulation skyboxes
JP2020074108A (en) Information processing system, control method thereof, and program
US20240149159A1 (en) Capturing computer game output mid-render for 2d to 3d conversion, accessibility, and other effects
US20230215101A1 (en) Method, device, and non-transitory computer-readable recording medium to create 3d object for virtual space
KR102533209B1 (en) Method and system for creating dynamic extended reality content
JP6638326B2 (en) Information processing system, control method thereof, and program

Legal Events

Date Code Title Description
WITN Withdrawal due to no request for examination