US20230152884A1 - Animation production system - Google Patents
Animation production system
- Publication number
- US20230152884A1 (Application No. US 18/156,866)
- Authority
- US
- United States
- Prior art keywords
- user
- character
- track
- controller
- recording
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/012—Head tracking input arrangements
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/25—Output arrangements for video game devices
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/55—Controlling game characters or game objects based on the game progress
- A63F13/56—Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/60—Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/16—Constructional details or arrangements
- G06F1/1613—Constructional details or arrangements for portable computers
- G06F1/163—Wearable computers, e.g. on a belt
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/0304—Detection arrangements using opto-electronic means
- G06F3/0325—Detection arrangements using opto-electronic means using a plurality of light emitters or reflectors or a plurality of detectors forming a reference frame from which to derive the orientation of the object, e.g. by triangulation or on the basis of reference deformation in the picked up image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/033—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
- G06F3/0346—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/165—Management of the audio stream, e.g. setting of volume, audio stream path
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/20—3D [Three Dimensional] animation
- G06T13/40—3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/80—2D [Two Dimensional] animation, e.g. using sprites
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/60—Methods for processing data by generating or executing the game program
- A63F2300/66—Methods for processing data by generating or executing the game program for rendering three dimensional images
- A63F2300/6607—Methods for processing data by generating or executing the game program for rendering three dimensional images for animating game characters, e.g. skeleton kinematics
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Computer Hardware Design (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- General Health & Medical Sciences (AREA)
- Processing Or Creating Images (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The principal invention for solving the above-described problem is an animation production method that provides a virtual space in which a given object is placed, the method comprising: detecting an operation of a user equipped with a head mounted display; controlling a movement of an object based on the detected operation of the user; shooting the movement of the object; storing action data relating to the movement of the shot object in a first track; and storing audio from the user in a second track.
Description
- The present invention relates to an animation production system.
- Virtual cameras are arranged in a virtual space (see Patent Document 1).
- [PTL 1] Japanese Patent Application Publication No. 2017-146651
- However, no attempt has been made to capture animations in the virtual space.
- The present invention has been made in view of such a background, and is intended to provide a technology capable of capturing animations in a virtual space.
- The principal invention for solving the above-described problem is an animation production method that provides a virtual space in which a given object is placed, the method comprising: detecting an operation of a user equipped with a head mounted display; controlling a movement of an object based on the detected operation of the user; shooting the movement of the object; storing action data relating to the movement of the shot object in a first track; and storing audio from the user in a second track.
- Other problems disclosed in the present application and methods for solving them will become apparent from the description of the embodiments of the invention and the drawings.
- According to the present invention, animations can be captured in a virtual space.
- FIG. 1 is a diagram illustrating an example of a virtual space displayed on a head mount display (HMD) worn by a user in the animation production system of the present embodiment;
- FIG. 2 is a diagram illustrating an example of the overall configuration of an animation production system 300 according to an embodiment of the present invention;
- FIG. 3 shows a schematic view of the appearance of a head mount display (hereinafter referred to as an HMD) 110 according to the present embodiment;
- FIG. 4 shows a schematic view of the appearance of the controller 210 according to the present embodiment;
- FIG. 5 shows a functional configuration diagram of the HMD 110 according to the present embodiment;
- FIG. 6 shows a functional configuration diagram of the controller 210 according to the present embodiment;
- FIG. 7 shows a functional configuration diagram of an image producing device 310 according to the present embodiment;
- FIG. 8 is a flow chart illustrating an example of a track generation process according to an embodiment of the present invention;
- FIG. 9(a) is a diagram illustrating track generation according to an embodiment of the present invention; and
- FIG. 9(b) is a diagram illustrating track generation according to an embodiment of the present invention.
- The contents of embodiments of the present invention will be described below. An animation production method according to an embodiment of the present invention has the following configuration.
- An animation production method that provides a virtual space in which a given object is placed, the method comprising:
- detecting an operation of a user equipped with a head mounted display;
- controlling a movement of an object based on the detected operation of the user;
- shooting the movement of the object;
- storing action data relating to the movement of the shot object in a first track; and
- storing audio from the user in a second track.
- The method may further comprise sharing the first track or the second track among a plurality of user terminals.
- A specific example of an animation production system according to an embodiment of the present invention will be described below with reference to the drawings. It should be noted that the present invention is not limited to these examples; the present invention is indicated by the appended claims and is intended to include all modifications within the meaning and scope of equivalency of the claims. In the following description, the same elements are denoted by the same reference numerals in the drawings, and overlapping descriptions are omitted.
- FIG. 1 is a diagram illustrating an example of a virtual space displayed on a head mount display (HMD) worn by a user in the animation production system of the present embodiment. In the animation production system of the present embodiment, a character 4 and a camera 3 are disposed in the virtual space 1, and the character 4 is shot using the camera 3. A cameraman 2 is also disposed in the virtual space 1, and the camera 3 is virtually operated by the cameraman 2. In the animation production system of the present embodiment, as shown in FIG. 1, the user makes an animation by placing the character 4 and the camera 3 while viewing the virtual space 1 from a bird's-eye view in a TPV (Third Person View), by shooting the character 4 in an FPV (First Person View) as the cameraman 2, and by performing the character 4 in an FPV. A plurality of characters (in the example shown in FIG. 1, the character 4 and a character 5) can be disposed in the virtual space 1, and the user can give the performance while possessing the character 4 and the character 5 in turn. That is, in the animation production system of the present embodiment, one user can play a number of roles. In addition, since the user can virtually operate the camera 3 as the cameraman 2, natural camera work can be realized and the expression of the movie to be shot can be enriched.
- FIG. 2 is a diagram illustrating an example of the overall configuration of an animation production system 300 according to an embodiment of the present invention. The animation production system 300 may comprise, for example, a user terminal 410A comprising an HMD 110, a controller 210, and an image generator 310 that functions as a host computer. An infrared camera (not shown) or the like can also be added to the animation production system 300 for detecting the position, orientation, and slope of the HMD 110 or the controller 210. These devices may be connected to each other by wired or wireless means. For example, each device may be equipped with a USB port to establish communication by cable connection, or communication may be established by a wired or wireless connection such as HDMI, wired LAN, infrared, Bluetooth™, or WiFi™. The image generating device 310 may be a PC, a game machine, a portable communication terminal, or any other device having a calculation processing function. The user terminal 410A may also be connected to other user terminals 410B and 410C via a network or the like. The other user terminals 410B and 410C may include an HMD and/or a controller, as the user terminal 410A does, and may be at least a computer, a game machine, a portable communication terminal, or any other device having a calculation processing function.
- FIG. 3 shows a schematic view of the appearance of a head mount display (hereinafter referred to as an HMD) 110 according to the present embodiment, and FIG. 5 shows a functional configuration diagram of the HMD 110 according to the present embodiment. The HMD 110 is mounted on the user's head and includes a display panel 120 placed in front of the user's left and right eyes. Although both optically transmissive and non-transmissive displays are contemplated as the display panel, this embodiment illustrates a non-transmissive display panel, which can provide greater immersion. The display panel 120 displays a left-eye image and a right-eye image, which can present the user with a three-dimensional image by exploiting the parallax between both eyes. As long as left-eye and right-eye images can be displayed, either separate left-eye and right-eye displays or a single integrated display for both eyes may be provided.
- The housing portion 130 of the HMD 110 includes a sensor 140. The sensor 140 may comprise, for example, a magnetic sensor, an acceleration sensor, or a gyro sensor, or a combination thereof, to detect movements such as the orientation or tilt of the user's head. Taking the vertical direction of the user's head as the Y-axis, the axis connecting the center of the display panel 120 with the user (the user's anteroposterior direction) as the Z-axis, and the axis corresponding to the user's left-right direction as the X-axis, the sensor 140 can detect the rotation angle around the X-axis (the so-called pitch angle), the rotation angle around the Y-axis (the so-called yaw angle), and the rotation angle around the Z-axis (the so-called roll angle).
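- As a concrete illustration of the three rotation angles described above, the following Python sketch tracks a head pose by naively integrating gyro readings. It is an editor's illustration only and not part of the disclosure; the HeadPose class, the degree units, and the absence of drift correction or sensor fusion are simplifying assumptions.

```python
from dataclasses import dataclass

@dataclass
class HeadPose:
    """Head orientation in degrees, using the axis convention described above:
    X = left-right axis (pitch), Y = vertical axis (yaw), Z = front-back axis (roll)."""
    pitch: float = 0.0  # rotation about X
    yaw: float = 0.0    # rotation about Y
    roll: float = 0.0   # rotation about Z

def integrate_gyro(pose, angular_velocity_dps, dt):
    """Naively integrate gyro angular velocity (deg/s around X, Y, Z) over dt seconds.
    A real HMD would fuse accelerometer/magnetometer data to correct drift."""
    wx, wy, wz = angular_velocity_dps
    return HeadPose(
        pitch=pose.pitch + wx * dt,
        yaw=pose.yaw + wy * dt,
        roll=pose.roll + wz * dt,
    )

# Example: the user tilts the head forward at 30 deg/s for 0.5 s.
pose = HeadPose()
for _ in range(50):                      # 50 samples at 10 ms each
    pose = integrate_gyro(pose, (30.0, 0.0, 0.0), 0.01)
print(pose)                              # roughly HeadPose(pitch=15.0, yaw=0.0, roll=0.0)
```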
- In place of or in addition to the sensor 140, the housing portion 130 of the HMD 110 may also include a plurality of light sources 150 (e.g., infrared LEDs or visible-light LEDs). A camera (e.g., an infrared camera or a visible-light camera) installed outside the HMD 110 (e.g., in the room) can detect the position, orientation, and tilt of the HMD 110 in a particular space by detecting these light sources. Alternatively, for the same purpose, a camera for detecting light sources may be installed in the housing portion 130 of the HMD 110.
- The housing portion 130 of the HMD 110 may also include an eye tracking sensor. The eye tracking sensor is used to detect the gaze directions and the gaze point of the user's left and right eyes. There are various types of eye tracking sensors. For example, weak infrared light is shone onto the left and right eyes, the position of the light reflected on each cornea is used as a reference point, the direction of each eye's gaze is detected from the position of the pupil relative to the reflection, and the intersection of the left-eye and right-eye gaze directions is used as the focus point.
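- The focus point described above (the intersection of the left-eye and right-eye gaze directions) can be estimated numerically. The sketch below is illustrative only and is not taken from the disclosure; it assumes the common closest-point-of-approach formulation between two gaze rays and a simple fallback when the rays are nearly parallel.

```python
import numpy as np

def gaze_focus_point(left_origin, left_dir, right_origin, right_dir):
    """Estimate the focus (convergence) point of the two gaze rays as the midpoint
    of the closest points between them. Inputs are 3D positions/directions; the
    directions need not be normalized. Returns a 3D numpy array."""
    p1, d1 = np.asarray(left_origin, float), np.asarray(left_dir, float)
    p2, d2 = np.asarray(right_origin, float), np.asarray(right_dir, float)
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-9:                  # near-parallel gaze: no clear convergence
        return p1 + d1                     # fall back to a point along the left ray
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    closest_left = p1 + s * d1
    closest_right = p2 + t * d2
    return (closest_left + closest_right) / 2.0

# Example: eyes 6 cm apart, both converging on a point about 1 m ahead.
print(gaze_focus_point([-0.03, 0, 0], [0.03, 0, 1.0], [0.03, 0, 0], [-0.03, 0, 1.0]))
```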
- FIG. 4 shows a schematic view of the appearance of the controller 210 according to the present embodiment, and FIG. 6 shows a functional configuration diagram of the controller 210 according to the present embodiment. The controller 210 allows the user to make predetermined inputs in the virtual space. The controller 210 may be configured as a set of a left-hand controller 220 and a right-hand controller 230. The left-hand controller 220 and the right-hand controller 230 may each have an operation trigger button 240, infrared LEDs 250, a sensor 260, a joystick 270, and a menu button 280.
- The operation trigger buttons 240 are positioned as 240a and 240b where the middle finger and index finger are expected to pull the trigger when the grip 235 of the controller 210 is gripped. A frame 245 formed in a ring shape extending downward from both sides of the controller 210 is provided with a plurality of infrared LEDs 250, and a camera (not shown) provided outside the controller can detect the position, orientation, and slope of the controller 210 in a particular space by detecting the positions of these infrared LEDs.
- The controller 210 may also incorporate a sensor 260 to detect movements such as the orientation or tilt of the controller 210. The sensor 260 may comprise, for example, a magnetic sensor, an acceleration sensor, or a gyro sensor, or a combination thereof. Additionally, the top surface of the controller 210 may include a joystick 270 and a menu button 280. The joystick 270 is expected to be moved in a 360-degree direction around a reference point and to be operated with the thumb while the grip 235 of the controller 210 is gripped. The menu button 280 is likewise assumed to be operated with the thumb. In addition, the controller 210 may include a vibrator (not shown) for providing vibration to the hand of the user operating the controller 210. The controller 210 also includes an input/output unit and a communication unit for outputting information such as the position, orientation, and slope of the controller 210 and the state of the buttons and joystick, and for receiving information from the host computer.
- Based on whether the user grips the controller 210 and manipulates the various buttons and joysticks, and on the information detected by the infrared LEDs and sensors, the system can determine the motion and attitude of the user's hand, and can display and operate a pseudo hand of the user in the virtual space.
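- For illustration, the kind of per-controller information mentioned above (position, orientation, slope, buttons, joystick) might be packaged and sent to the host computer as sketched below. The field names, units, and JSON encoding are assumptions for this example, not the actual protocol of the controller 210.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ControllerState:
    """Snapshot of one hand controller, as it might be sent to the host computer."""
    hand: str                    # "left" or "right"
    position: tuple              # (x, y, z) in meters, from the external tracking camera
    orientation: tuple           # (pitch, yaw, roll) in degrees, from the built-in sensor
    trigger: float               # 0.0 (released) .. 1.0 (fully pulled)
    joystick: tuple              # (x, y), each in -1.0 .. 1.0
    menu_pressed: bool

def encode_for_host(state):
    """Serialize a controller snapshot for transmission over the wired/wireless link."""
    return json.dumps(asdict(state)).encode("utf-8")

sample = ControllerState(hand="right", position=(0.2, 1.1, -0.3),
                         orientation=(5.0, -12.0, 0.0), trigger=0.8,
                         joystick=(0.0, 0.5), menu_pressed=False)
print(encode_for_host(sample))
```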
- FIG. 7 shows a functional configuration diagram of the image producing device 310 according to this embodiment. The image producing device 310 may be a device such as a PC, a game machine, or a portable communication terminal having functions for storing the user input information and the sensor information on the movement of the user's head and the movement or operation of the controller transmitted from the HMD 110 or the controller 210, performing predetermined computational processing, and generating an image. The image producing device 310 may include an input/output unit 320 for establishing a wired connection with a peripheral device such as the HMD 110 or the controller 210, and a communication unit 330 for establishing a wireless connection such as infrared, Bluetooth, or WiFi (registered trademark). The information received from the HMD 110 and/or the controller 210 regarding the movement of the user's head or the movement or operation of the controller is detected in the control unit 340, through the input/output unit 320 and/or the communication unit 330, as input content including the user's position, line of sight, attitude, speech, and operation. The control unit 340 executes a control program stored in the storage unit 350 according to the user's input content, and performs processing such as controlling the character and generating an image. The control unit 340 may be composed of a CPU, but by further providing a GPU specialized for image processing, information processing and image processing can be distributed and overall processing efficiency can be improved. The image generating device 310 may also communicate with other computing devices so that those devices share the information processing and image processing.
- The control unit 340 includes a user input detecting unit 410 that detects the information received from the HMD 110 and/or the controller 210 regarding the movement of the user's head, the speech of the user, and the operation of the controller; a character control unit 420 that executes a control program stored in the control program storage unit for a character stored in advance in the character data storage unit 510 of the storage unit 350; and an image producing unit 430 that generates an image based on the character control. Here, control of the character's movement is realized by converting information such as the direction, inclination, or hand movement of the user detected through the HMD 110 or the controller 210 into movements of the parts of a bone structure created in accordance with the movements and constraints of the joints of the human body, and by applying the bone structure movements to the previously stored character data associated with that bone structure. Further, the control unit 340 includes a recording and playback executing unit 440 for recording and playing back the image-generated character on a track, and an editing executing unit 450 for editing each track and generating the final content. Further, the control unit 340 includes a recording and reproduction executing unit 460 for recording audio based on the user's speech.
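- A minimal sketch of the bone-structure idea described above follows. The bone names, the crude joint limits, and the dictionary-based skeleton are assumptions introduced for illustration; the disclosure does not specify a particular rig.

```python
# Detected device poses are converted into rotations of named bones in a humanoid rig,
# and those bone values are then applied to the stored character data.

def clamp(value, lo, hi):
    return max(lo, min(hi, value))

def map_input_to_bones(head_pose, right_hand_raised):
    """head_pose: dict with 'pitch'/'yaw'/'roll' in degrees (from the HMD 110).
    right_hand_raised: bool (e.g., trigger pressed while the controller is lifted).
    Returns a dict of bone name -> rotation in degrees, respecting crude joint limits."""
    return {
        "neck": {
            "pitch": clamp(head_pose["pitch"], -45.0, 45.0),   # joints cannot bend freely
            "yaw": clamp(head_pose["yaw"], -80.0, 80.0),
            "roll": clamp(head_pose["roll"], -40.0, 40.0),
        },
        "right_shoulder": {"pitch": -160.0 if right_hand_raised else 0.0,
                           "yaw": 0.0, "roll": 0.0},
    }

def apply_to_character(character_skeleton, bones):
    """Write the bone rotations into the character's stored skeleton dictionary."""
    for name, rotation in bones.items():
        character_skeleton.setdefault(name, {}).update(rotation)
    return character_skeleton

skeleton = {}
print(apply_to_character(skeleton, map_input_to_bones(
    {"pitch": 20.0, "yaw": -100.0, "roll": 5.0}, right_hand_raised=True)))
```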
- The storage unit 350 includes a character data storage unit 510 for storing not only image data of a character but also character-related information such as the character's attributes, and a control program storage unit 520 that stores a program for controlling the movement or expression of a character in the virtual space. The storage unit 350 further includes a track storage unit 530 for storing action data, composed of parameters for controlling the movement of a character in the moving image generated by the image producing unit 430, and motion data relating to the user's voice and/or lip-sync.
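- For illustration, the action data, audio, and lip-sync motion data held by a track storage unit might be modeled as sketched below. The field layout is an assumption; the disclosure only states that such data are stored in tracks.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ActionKeyframe:
    """One sample of character motion: a timestamp plus bone-parameter values."""
    time: float                       # seconds from the start of the track
    bone_params: Dict[str, float]     # e.g. {"neck.pitch": 20.0, ...}

@dataclass
class Track:
    """A generic track: action data (kind='action'), recorded audio (kind='audio'),
    or lip-sync motion data (kind='lipsync')."""
    kind: str
    keyframes: List[ActionKeyframe] = field(default_factory=list)   # action / lipsync
    audio_samples: List[float] = field(default_factory=list)        # audio (mono PCM)
    sample_rate: int = 48000

first_track = Track(kind="action")
first_track.keyframes.append(ActionKeyframe(0.0, {"neck.pitch": 0.0}))
first_track.keyframes.append(ActionKeyframe(0.1, {"neck.pitch": 12.5}))

second_track = Track(kind="audio", audio_samples=[0.0, 0.02, -0.01])
print(len(first_track.keyframes), len(second_track.audio_samples))
```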
- FIG. 8 is a flowchart illustrating an example of a track generation process according to an embodiment of the present invention.
- First, the recording and reproduction executing unit 440 of the control unit 340 of the image producing device 310 starts recording for storing, in the first track of the track storage unit 530, action data of the moving image related to the movement of the first character in the virtual space (S101). Here, the position of the camera that shoots the character and the viewpoint of the camera (e.g., FPV, TPV, etc.) can be set. For example, in the virtual space 1 illustrated in FIG. 1, the position where the cameraman 2 is disposed and the angle of the camera 3 can be set with respect to the character 4 corresponding to the first character. The recording start operation may be indicated by a remote controller such as the controller 210, or may be indicated by another terminal. The operation may be performed by a user who wears the HMD 110 and manipulates the controller 210 to play the character, or by a user other than the user who performs the character. In addition, the recording process may be started automatically based on detecting an operation by the user who performs the character, as described below.
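- The camera placement and viewpoint setting mentioned above might look like the following sketch. The concrete offsets and viewpoint labels are assumptions made for the example; the disclosure only states that the camera position and a viewpoint such as FPV or TPV can be set before recording starts.

```python
from dataclasses import dataclass

@dataclass
class CameraSetup:
    """Camera placement chosen before recording starts (S101)."""
    position: tuple          # (x, y, z) of the camera in the virtual space
    look_at: tuple           # point the camera aims at (e.g., the character's position)
    viewpoint: str           # "FPV" (first person) or "TPV" (third person / bird's-eye)

def default_tpv_setup(character_position):
    """Place the camera behind and above the character for a bird's-eye TPV shot."""
    cx, cy, cz = character_position
    return CameraSetup(position=(cx, cy + 2.0, cz - 3.0),
                       look_at=character_position, viewpoint="TPV")

print(default_tpv_setup((0.0, 0.0, 0.0)))
```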
- Subsequently, the user input detecting unit 410 of the control unit 340 detects the information received from the HMD 110 and/or the controller 210 regarding the movement of the user's head, the speech of the user, and the movement or operation of the controller (S102). For example, when the user wearing the HMD 110 tilts the head, the sensor 140 provided in the HMD 110 detects the tilt and transmits information about it to the image generating device 310. The image generating device 310 receives the information about the user's movement through the communication unit 330, and the user input detecting unit 410 detects the movement of the user's head based on the received information. Likewise, when the user performs a predetermined movement or operation with the controller 210, such as lifting it or pressing a button, the sensor 260 provided in the controller detects that movement and/or operation, and the controller 210 transmits information about it to the image generating device 310. The image producing device 310 receives the information related to the user's controller movement and operation through the communication unit 330, and the user input detecting unit 410 detects the user's controller movement and operation based on the received information.
- Subsequently, the character control unit 420 of the control unit 340 controls the movement of the first character in the virtual space based on the detected movement of the user (S103). For example, based on detecting that the user tilts the head, the character control unit 420 controls the first character to tilt its head. Also, based on detecting that the user lifts the controller and presses a predetermined button on it, the character control unit 420 controls the first character to perform a corresponding action while extending its arm upward. In this manner, the character control unit 420 controls the first character to perform the corresponding movement each time the user input detecting unit 410 detects an operation by the user transmitted from the HMD 110 or the controller 210. Parameters related to the movement and/or operation detected by the user input detecting unit 410 are stored in the first track of the track storage unit 530. Alternatively, the character may be controlled to perform a predetermined performance action without user input, and the action data relating to that predetermined performance may be stored in the first track, as shown in FIG. 9(a); or both the action data based on the user's movements and the action data relating to the predetermined performance may be stored.
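- As a rough illustration of the S102 and S103 loop described above (detect input, drive the character, store parameters in the first track), consider the following self-contained sketch. The callback functions stand in for the user input detecting unit 410 and the character control unit 420 and are assumptions made for the example.

```python
import time

def record_first_track(detect_user_input, drive_character, duration_s=2.0, fps=30):
    """Every frame: read the detected user input, drive the character, and append the
    resulting pose parameters (with a timestamp) to the first track."""
    track = []                                     # list of (time, pose-parameter dict)
    frame_dt = 1.0 / fps
    start = time.monotonic()
    while (now := time.monotonic() - start) < duration_s:
        user_input = detect_user_input()           # e.g. head tilt, controller buttons
        pose_params = drive_character(user_input)  # bone parameters for this frame
        track.append((now, pose_params))
        time.sleep(frame_dt)
    return track

# Stub input/character functions so the sketch runs on its own.
fake_input = lambda: {"head_pitch": 10.0}
fake_character = lambda inp: {"neck.pitch": inp["head_pitch"]}
print(len(record_first_track(fake_input, fake_character, duration_s=0.1)))
```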
- Subsequently, the recording and reproduction executing unit 440 confirms whether an instruction to end the recording has been received from the user (S104), and when the instruction to end the recording is received, it completes the recording of the first track related to the first character (S105). The recording and reproduction executing unit 440 continues the recording process unless an instruction to end the recording is received from the user. Here, the recording and reproduction executing unit 440 may also automatically complete the recording when the operation by the user acting as the character is no longer detected.
- Subsequently, the recording executing unit 460 starts recording audio for storage in the second track of the track storage unit 530 (S106). Here, the audio to be recorded is assumed to have various uses, such as tentatively recording the lines of the character, or recording instructions on camera work and on the performance in accordance with the character movement realized by the stored action data. For example, in the virtual space 1 illustrated in FIG. 1, a camera viewpoint can be set so that the camera 3 takes a bird's-eye (TPV) view of the character 4 corresponding to the first character, and the second track can be recorded while the first track is played back and the movement of the character 4 is checked, as shown in FIG. 9(b). The recording start operation may be indicated by a remote controller such as the controller 210, or may be indicated by another terminal. The operation may be performed by the user who performs the character or by a user other than that user. The recording process may also be started automatically based on detecting speech by the user performing the character.
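- As an illustration of recording the second track while the first track is played back (the FIG. 9(b) situation described above), consider the following sketch. The callback and the per-frame audio capture are assumptions invented for the example.

```python
def record_second_track(first_track, capture_audio_frame, frames_per_second=30):
    """Step through the already-recorded first track (action data) frame by frame and,
    at each step, capture one frame of audio for the second track, so the speaker can
    watch the performance while recording."""
    second_track = []
    for frame_index, (timestamp, pose_params) in enumerate(first_track):
        # In a real system the character would be posed from pose_params here.
        audio_frame = capture_audio_frame(frame_index / frames_per_second)
        second_track.append((timestamp, audio_frame))
    return second_track

# Tiny fake data so the sketch runs: three action keyframes and silent audio.
fake_first_track = [(0.00, {"neck.pitch": 0.0}),
                    (0.03, {"neck.pitch": 5.0}),
                    (0.07, {"neck.pitch": 9.0})]
silence = lambda t: [0.0] * 1600          # pretend this is 1/30 s of 48 kHz audio
print(len(record_second_track(fake_first_track, silence)))   # 3
```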
- Subsequently, the recording and reproduction executing unit 460 of the control unit 340 detects, through the user input detecting unit 410, the voice information pertaining to the speech of a user received from the HMD 110 or from a microphone (not shown), and records the voice to the second track (S107). Here, the user may be the same user who performs the first character, or may be a different user. The voice input by the user can also be received through a sound collecting means, such as a microphone, connected to the input/output unit 320 of the image producing device 310.
- Subsequently, the recording executing unit 460 confirms whether an instruction to terminate the recording has been received from the user (S108), and when the instruction to terminate the recording is received, it completes the recording of the second track (S109). The recording executing unit 460 continues the recording process unless an instruction to terminate the recording is received from the user. Here, the recording executing unit 460 may also automatically complete the recording when the operation by the user acting as the character is no longer detected. It is also possible to execute the recording termination process at a predetermined time by setting a timer rather than accepting an instruction from the user.
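- The next paragraph notes that lip-sync motion data derived from the user's voice can be stored instead of, or in addition to, the voice itself. As a minimal illustration of how such data might be derived, the sketch below maps the loudness of the recorded audio to a single mouth-open parameter per frame; the windowing, gain, and single-parameter model are assumptions, not the disclosed method.

```python
import math

def lipsync_from_audio(samples, sample_rate=48000, window_s=1/30):
    """Derive a crude 'mouth open' parameter (0..1) per animation frame from the
    loudness (RMS) of the recorded voice."""
    window = max(1, int(sample_rate * window_s))
    frames = []
    for start in range(0, len(samples), window):
        chunk = samples[start:start + window]
        rms = math.sqrt(sum(s * s for s in chunk) / len(chunk))
        frames.append(min(1.0, rms * 4.0))     # arbitrary gain, clamped to [0, 1]
    return frames                              # one mouth-open value per video frame

# Example: 0.1 s of a 220 Hz tone followed by 0.1 s of silence.
tone = [0.25 * math.sin(2 * math.pi * 220 * t / 48000) for t in range(4800)]
values = lipsync_from_audio(tone + [0.0] * 4800)
print(values[0], values[-1])                   # loud at the start, 0.0 at the end
```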
- Although the procedure of the track generation process has been described with reference to FIG. 8, in the processing of S107 to S109, motion data consisting of parameters for the corresponding lip-sync (the lip movement of the character) may be generated based on the user's voice and stored in the second track instead of, or in addition to, the user's voice. When the lip-sync motion data is stored together with the audio recording, it is possible, in S108 and S109, to stop storing the motion data when the voice of the user is no longer detected.
- It is also possible to skip the first-track recording of S101 to S105 and execute only the recording process of S106 and later. The same applies to storing motion data related to lip-sync.
- The user may also transmit the first and second tracks to other user terminals, such as those of voice actors (e.g., the user terminals 410B or 410C of FIG. 2), and the other users may then record the character's audio while referring to the temporarily recorded character audio. In addition, when the other user is an editor, the camera angle or the behavior of the character in the first track may be edited while referring to the instruction content, such as the performance instructions recorded in the shared second track.
- In addition, the animation production system according to the present embodiment can be used to produce animation by a so-called agile method. In other words, it is possible to complete a scene by iterating in short cycles, checking the result, and re-shooting or re-recording as needed. This makes it possible to produce animation quickly and flexibly while confirming the direction of the production. Alternatively, only the portion of a scene that can currently be expressed may be put into a track, and the animation can be finished by stacking further tracks later. For example, the creation of a storyboard can be omitted: instead of a storyboard, a director, performer, or scenario writer can manipulate the character 4 within the virtual space 1 to record a track, and the performer can act while checking the recorded track. This also makes it easy to find and correct a contradictory scene. In addition, since audio and video for a track can be recorded asynchronously for the various cuts, scheduling management is facilitated.
- Although the present embodiment has been described above, the above-described embodiment is intended to facilitate the understanding of the present invention and is not intended as a limiting interpretation of the present invention. The present invention may be modified and improved without departing from the spirit thereof, and the present invention also includes its equivalents.
- For example, although a character has been used as the example in describing the track generation and editing methods in this embodiment, the methods disclosed in this embodiment may be applied not only to a character but also to any object that involves movement (a vehicle, a structure, an article, etc.).
- For example, although the image producing device 310 has been described in this embodiment as separate from the HMD 110, the HMD 110 may include all or part of the configuration and functions provided by the image producing device 310.
image producing device 310 has been described in this embodiment as separate from theHMD 110, theHMD 110 may include all or part of the configuration and functions provided by theimage producing device 310. - 1 virtual space
- 2 cameraman
- 3 cameras
- 4 characters
- 5 characters
- 110 HMD
- 210 controller
- 310 Image Generator
- 410 User Terminals
Claims (4)
1. (canceled)
2. An animation production method that provides a virtual space in which a given character is placed, the method comprising:
a first user terminal executing steps of:
detecting an operation of a first user equipped with a head mounted display;
controlling a movement of a character based on the detected operation of the first user;
shooting the movement of the character;
storing action data relating to the movement of the shot character in a first track;
recording an instruction of action of the character based on a voice of the first user in a second track; and
transmitting the first and second tracks to a second user terminal, and the second user terminal executing steps of:
editing the movement of the character while replaying the instruction in the second track.
3. The method according to claim 2, the method further comprising sharing the first track or the second track among a plurality of user terminals.
4. The method according to claim 2, wherein the recording the instruction of the action of the character further includes recording an instruction of camerawork, and the editing the movement of the character further includes editing an angle of a camera while replaying the instruction of the camerawork.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US18/156,866 US20230152884A1 (en) | 2020-07-29 | 2023-01-19 | Animation production system |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2020-128297 | 2020-07-29 | ||
JP2020128297A JP2022025464A (en) | 2020-07-29 | 2020-07-29 | Animation creation system |
US17/007,998 US11586278B2 (en) | 2020-07-29 | 2020-08-31 | Animation production system |
US18/156,866 US20230152884A1 (en) | 2020-07-29 | 2023-01-19 | Animation production system |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/007,998 Continuation US11586278B2 (en) | 2020-07-29 | 2020-08-31 | Animation production system |
Publications (1)
Publication Number | Publication Date |
---|---|
US20230152884A1 true US20230152884A1 (en) | 2023-05-18 |
Family
ID=80004255
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/007,998 Active US11586278B2 (en) | 2020-07-29 | 2020-08-31 | Animation production system |
US18/156,866 Abandoned US20230152884A1 (en) | 2020-07-29 | 2023-01-19 | Animation production system |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/007,998 Active US11586278B2 (en) | 2020-07-29 | 2020-08-31 | Animation production system |
Country Status (2)
Country | Link |
---|---|
US (2) | US11586278B2 (en) |
JP (1) | JP2022025464A (en) |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4811381B2 (en) * | 2007-10-10 | 2011-11-09 | ソニー株式会社 | REPRODUCTION DEVICE, REPRODUCTION METHOD, AND PROGRAM |
US8964052B1 (en) * | 2010-07-19 | 2015-02-24 | Lucasfilm Entertainment Company, Ltd. | Controlling a virtual camera |
KR102349716B1 (en) * | 2015-06-11 | 2022-01-11 | 삼성전자주식회사 | Method for sharing images and electronic device performing thereof |
JP2017146651A (en) | 2016-02-15 | 2017-08-24 | 株式会社コロプラ | Image processing method and image processing program |
US10551993B1 (en) * | 2016-05-15 | 2020-02-04 | Google Llc | Virtual reality content development environment |
KR102635437B1 (en) * | 2017-02-28 | 2024-02-13 | 삼성전자주식회사 | Method for contents sharing and electronic device supporting the same |
JP2018187298A (en) * | 2017-05-11 | 2018-11-29 | グリー株式会社 | Game processing program, game processing method, and game processor |
KR101918853B1 (en) * | 2017-06-28 | 2018-11-15 | 민코넷주식회사 | System for Generating Game Replay Video |
US10497182B2 (en) * | 2017-10-03 | 2019-12-03 | Blueprint Reality Inc. | Mixed reality cinematography using remote activity stations |
JP2022025465A (en) * | 2020-07-29 | 2022-02-10 | 株式会社AniCast RM | Animation creation system |
- 2020-07-29 JP JP2020128297A patent/JP2022025464A/en active Pending
- 2020-08-31 US US17/007,998 patent/US11586278B2/en active Active
- 2023-01-19 US US18/156,866 patent/US20230152884A1/en not_active Abandoned
Also Published As
Publication number | Publication date |
---|---|
US20220035446A1 (en) | 2022-02-03 |
JP2022025464A (en) | 2022-02-10 |
US11586278B2 (en) | 2023-02-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20220351442A1 (en) | Animation production system | |
JP6964302B2 (en) | Animation production method | |
US20220301249A1 (en) | Animation production system for objects in a virtual space | |
US20230121976A1 (en) | Animation production system | |
US11321898B2 (en) | Animation production system | |
US11586278B2 (en) | Animation production system | |
JP2022153478A (en) | Animation creation system | |
US20220351450A1 (en) | Animation production system | |
JP2022153479A (en) | Animation creation system | |
US20220351438A1 (en) | Animation production system | |
US20220351443A1 (en) | Animation production system | |
US20220351437A1 (en) | Animation production system | |
US20220351445A1 (en) | Animation production system | |
US20220351440A1 (en) | Animation production system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO PAY ISSUE FEE |