US20230412897A1 - Video distribution system for live distributing video containing animation of character object generated based on motion of actors
- Publication number: US20230412897A1
- Authority: US (United States)
- Prior art keywords: video, actor, display, character object, user
- Legal status: Pending
Classifications
- H04N21/41407 - Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance, embedded in a portable device, e.g. video client on a mobile phone, PDA, laptop
- H04N21/8146 - Monomedia components thereof involving graphical data, e.g. 3D object, 2D graphics
- G06T13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
- G06T7/20 - Analysis of motion
- G11B27/036 - Insert-editing
- H04N21/2187 - Live feed
- H04N21/23418 - Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics
- H04N21/266 - Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system, merging a VOD unicast channel into a multicast channel
- H04N21/854 - Content authoring
Definitions
- the present disclosure relates to a video distribution system for live distribution of a video containing animation of a character object generated based on motions of an actor.
- Video distribution systems that produce an animation of a character object based on actor's motions and live distribute a video containing the animation of the character object have been known. Such a video distribution system is disclosed, for example, in Japanese Patent Application Publication No. 2015-184689 (“the '689 Publication”).
- During a live distribution of a video, unexpected situations, such as a failure of a device used to generate the animation, may occur.
- In some cases, the live distribution of the video is carried out only by an actor and a camera operator. If the actor or the camera operator has to handle such an unexpected situation during the live distribution, his or her motions while dealing with the failure may be reflected in the motions of the character object, or inappropriate camerawork may occur. Consequently, the quality of the live distributed video may be adversely affected.
- According to one aspect, a video distribution system includes a distribution server configured to live distribute, to a first client device used by a first user, a video that contains an animation of a character object generated based on an actor's motions; a first display device disposed at a position viewable by the actor and displaying the video; and a supporter computer that, based on a first operation input, causes first additional information to be displayed in the video displayed on the first display device while keeping the first additional information undisplayed on the first client device (a sketch of this routing is given below, after this summary of aspects).
- the supporter computer may cause second additional information to be displayed in the video displayed on the first display device and in the video displayed on the first client device based on a second operation input.
- the supporter computer may cause a modified character object in which at least a part of the character object is modified to be generated based on a third operation input and cause the modified character object to be included in the video.
- the character object may have a face portion that is generated so as to move based on face motion data representing face motions of the actor, and the distribution server device may display third additional information in the face portion based on the third operation input.
- the video distribution system may further include a storage storing a decorative object displayed in association with the character object, and the supporter computer may display a blind object for hiding at least a part of the character object when the decorative object is displayed in the video based on a fourth operation input.
- the distribution server device may be configured to live distribute the video to a second client device used by a second user in addition to the first client device, where the first client device is a device of a first type and the second client device is a device of a second type different from the first type, and the supporter computer may include a second display device that displays a display image of the video displayed on the first client device and a display image of the video displayed on the second client device.
- the supporter computer may prohibit distribution of the video to the first user based on a fifth operation input.
- Another aspect of the invention relates to a method of distributing a video performed by executing computer-readable instructions by one or more computer processors.
- the method includes live distributing a video that contains an animation of a character object generated based on actor's motions to a first client device used by a first user, displaying the video on a first display device disposed at a position viewable by the actor, and displaying first additional information in the video displayed on the first display device based on a first operation input while causing the first additional information to be undisplayed on the first client device.
- Yet another aspect of the invention relates to a computer-readable tangible non-transitory storage medium including a program executed by one or more computer processors.
- the computer program causes the one or more computer processors to perform live distributing a video that contains an animation of a character object generated based on actor's motions to a first client device used by a first user, displaying the video on a first display device disposed at a position viewable by the actor, and displaying first additional information in the video displayed on the first display device based on a first operation input while causing the first additional information to be undisplayed on the first client device.
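- The following is a minimal, hypothetical sketch (not the patent's implementation) of the routing behaviour summarized above: additional information of the "first" kind reaches only the studio display viewable by the actor, while information of the "second" kind also reaches viewing clients. All names (Overlay, Sink, route_overlay) are assumptions made for illustration.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Overlay:
    text: str
    kind: str  # "first" = actor-only, "second" = shown to actor and viewers


@dataclass
class Sink:
    name: str
    overlays: List[Overlay] = field(default_factory=list)

    def show(self, overlay: Overlay) -> None:
        self.overlays.append(overlay)


def route_overlay(overlay: Overlay, studio_display: Sink, client_devices: List[Sink]) -> None:
    """First additional information goes only to the studio display viewable by the actor;
    second additional information also goes to every viewing client."""
    studio_display.show(overlay)
    if overlay.kind == "second":
        for client in client_devices:
            client.show(overlay)


display_39 = Sink("studio display")
client_10a = Sink("client 10a")
route_overlay(Overlay("Camera tracking lost - please hold your pose", "first"), display_39, [client_10a])
assert client_10a.overlays == []  # first additional information never reaches viewers
```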
- FIG. 1 is a block diagram illustrating a video distribution system according to one embodiment.
- FIG. 2 schematically illustrates an installation of a studio where a video to be distributed in the video distribution system of FIG. 1 is produced.
- FIG. 3 illustrates a possession list stored in the video distribution system of FIG. 1 .
- FIG. 4 illustrates a candidate list stored in the video distribution system of FIG. 1 .
- FIG. 5 illustrates an example of a video displayed on the client device 10 a in one embodiment.
- An animation of a character object is included in FIG. 5 .
- FIG. 6 illustrates an example of a video displayed on the client device 10 a in one embodiment.
- a normal object is included in FIG. 6 .
- FIG. 7 illustrates an example of a video displayed on the client device 10 a in one embodiment.
- a decorative object is included in FIG. 7 .
- FIG. 8 schematically illustrates an example of a decorative object selection screen for selecting a desired decorative object from among the decorative objects included in the candidate list.
- FIG. 9 illustrates an example of an image displayed on a display 44 of a supporter computer 40 in one embodiment.
- FIG. 10 illustrates an example of an image displayed on a display 39 of a supporter computer 40 and an example of first additional information in one embodiment.
- FIG. 11 a illustrates an example of a video displayed on the client device 10 a in one embodiment, and FIG. 11 a includes second additional information.
- FIG. 11 b illustrates an example of the video displayed on the client device 10 a in one embodiment, and FIG. 11 b includes the second additional information.
- FIG. 12 a illustrates an example of a video displayed on the client device 10 a in one embodiment, and FIG. 12 a includes third additional information.
- FIG. 12 b illustrates an example of the video displayed on the client device 10 a in one embodiment, and FIG. 12 b includes third additional information.
- FIG. 13 illustrates an example of a video displayed on the client device 10 a in one embodiment, and FIG. 13 includes a blind object.
- FIG. 14 is a flow chart showing a flow of a video distribution process in one embodiment.
- FIG. 15 is a flowchart of a process of displaying the first additional information in one embodiment.
- FIG. 16 is a flowchart of a process of displaying the second additional information in one embodiment.
- FIG. 17 is a flowchart of a process of displaying the third additional information in one embodiment.
- FIG. 18 is a flowchart of a process of displaying a blind object in one embodiment.
- FIG. 19 is a flow chart showing a flow of a process for prohibiting video distribution to a banned user in one embodiment.
- FIG. 1 is a block diagram illustrating a video distribution system 1 according to one embodiment
- FIG. 2 schematically illustrates an installation of a studio where a video to be distributed in the video distribution system 1 is produced
- FIGS. 3 to 4 are for describing information stored in the video distribution system 1 .
- the video distribution system 1 includes client devices 10 a to 10 c , a server device 20 , a studio unit 30 , and a storage 60 .
- the client devices 10 a to 10 c , the server device 20 , and the storage 60 are communicably interconnected over a network 50 .
- the server device 20 is configured to distribute a video including an animation of a character, as described later.
- the character included in the video may be motion controlled in a virtual space.
- the video may be distributed from the server device 20 to each of the client devices 10 a to 10 c .
- a first viewing user who is a user of the client device 10 a , a second viewing user who is a user of the client device 10 b , and a third viewing user who is a user of the client device 10 c are able to view the distributed video with their client device.
- the video distribution system 1 may include fewer than three client devices, or may include more than three client devices. In this specification, when it is not necessary to distinguish the client devices 10 a to 10 c and other client devices from each other, they may be collectively referred to as “client devices.”
- the client devices 10 a to 10 c are information processing devices such as smartphones.
- the client devices 10 a to 10 c each may be a mobile phone, a tablet, a personal computer, an electronic book reader, a wearable computer, a game console, or any other information processing devices that are capable of playing videos.
- Each of the client devices 10 a to 10 c may include a computer processor, a memory unit, a communication I/F, a display, a sensor unit including various sensors such as a gyro sensor, a sound collecting device such as a microphone, and a storage for storing various information.
- Viewing users are able to input a message regarding the distributed video via an input interface of the client devices 10 a to 10 c to post the message to the server device 20 .
- the message posted from each viewing user may be displayed such that it is superimposed on the video. In this way, interaction between viewing users is achieved.
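- As an illustration only, a posted viewer message could be carried as a small payload like the one below before being superimposed on the video; the field names are assumptions, not a documented API.

```python
viewer_message = {
    "user_id": "user_1",
    "video_id": "live_0001",
    "text": "Great performance!",
    "posted_at": 123.4,  # seconds from the start of the distribution
}
# The server could keep the most recent messages per video and draw them over each
# distributed frame so that all viewers see them superimposed on the video.
```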
- the server device 20 includes a computer processor 21 , a communication I/F 22 , and a storage 23 .
- the computer processor 21 is a computing device which loads various programs realizing an operating system and various functions from the storage 23 or other storage into a memory unit and executes instructions included in the loaded programs.
- the computer processor 21 is, for example, a CPU, an MPU, a DSP, a GPU, any other computing device, or a combination thereof.
- the computer processor 21 may be realized by means of an integrated circuit such as ASIC, PLD, FPGA, MCU, or the like. Although the computer processor 21 is illustrated as a single component in FIG. 1 , the computer processor 21 may be a collection of a plurality of physically separate computer processors.
- a program or instructions included in the program that are described as being executed by the computer processor 21 may be executed by a single computer processor or distributed and executed by a plurality of computer processors. Further, a program or instructions included in the program executed by the computer processor 21 may be executed by a plurality of virtual computer processors.
- the communication I/F 22 may be implemented as hardware, firmware, or communication software such as a TCP/IP driver or a PPP driver, or a combination thereof.
- the server device 20 is able to transmit and receive data to and from other devices via the communication I/F 22 .
- the storage 23 is a storage device accessed by the computer processor 21 .
- the storage 23 is, for example, a magnetic disk, an optical disk, a semiconductor memory, or various other storage devices capable of storing data.
- Various programs may be stored in the storage 23 . At least some of the programs and various data that may be stored in the storage 23 may be stored in a storage (for example, a storage 60 ) that is physically separated from the server device 20 .
- Most of the components of the studio unit 30 are disposed, for example, in a studio room R shown in FIG. 2 .
- an actor A 1 and an actor A 2 give performances in the studio room R.
- the studio unit 30 is configured to detect motions and expressions of the actor A 1 and the actor A 2 , and to output the detection result information to the server device 20 .
- Both the actor A 1 and the actor A 2 are objects whose motions and expressions are captured by a sensor group provided in the studio unit 30 , which will be described later.
- the actor A 1 and the actor A 2 are, for example, humans, animals, or moving objects that give performances.
- the actor A 1 and the actor A 2 may be, for example, autonomous robots.
- the number of actors in the studio room R may be one or three or more.
- the studio unit 30 includes six motion sensors 31 a to 31 f attached to the actor A 1 , a controller 33 a held by the left hand of the actor A 1 , a controller 33 b held by the right hand of the actor A 1 , and a camera 37 a attached to the head of the actor A 1 via an attachment 37 b .
- the studio unit 30 also includes six motion sensors 32 a to 32 f attached to the actor A 2 , a controller 34 a held by the left hand of the actor A 2 , a controller 34 b held by the right hand of the actor A 2 , and a camera 38 a attached to the head of the actor A 2 via an attachment 38 b .
- a microphone for collecting audio data may be provided to each of the attachment 37 b and the attachment 38 b .
- the microphone can collect speeches of the actor A 1 and the actor A 2 as voice data.
- the microphones may be wearable microphones attached to the actor A 1 and the actor A 2 via the attachment 37 b and the attachment 38 b .
- the microphones may be installed on the floor, wall or ceiling of the studio room R.
- the studio unit 30 includes a base station 35 a , a base station 35 b , a tracking sensor 36 a , a tracking sensor 36 b , and a display 39 .
- a supporter computer 40 is installed in a room next to the studio room R, and these two rooms are separated from each other by a glass window.
- the server device 20 may be installed in the same room as the room in which the supporter computer 40 is installed.
- the motion sensors 31 a to 31 f and the motion sensors 32 a to 32 f cooperate with the base station 35 a and the base station 35 b to detect their position and orientation.
- the base station 35 a and the base station 35 b are multi-axis laser emitters.
- the base station 35 a emits flashing light for synchronization and then emits a laser beam about, for example, a vertical axis for scanning.
- the base station 35 b emits a laser beam about, for example, a horizontal axis for scanning.
- Each of the motion sensors 31 a to 31 f and the motion sensors 32 a to 32 f may be provided with a plurality of optical sensors for detecting incidence of the flashing lights and the laser beams from the base station 35 a and the base station 35 b , respectively.
- the motion sensors 31 a to 31 f and the motion sensors 32 a to 32 f each may detect a time difference between an incident timing of the flashing light and an incident timing of the laser beam, the time at which each optical sensor receives the light or the beam, the incident angle of the laser light detected by each optical sensor, and any other information as necessary.
- the motion sensors 31 a to 31 f and the motion sensors 32 a to 32 f may be, for example, Vive Trackers provided by HTC CORPORATION.
- the base station 35 a and the base station 35 b may be, for example, base stations provided by HTC CORPORATION. Three or more base stations may be provided. The position of the base station may be changed as appropriate. For example, in addition to or instead of the base station disposed at the upper corner of the space to be detected by the tracking sensor, a pair of the base stations may be disposed at an upper position and a lower position close to the floor.
- Detection information (hereinafter may also be referred to as “tracking information”) obtained as a result of detection performed by each of the motion sensors 31 a to 31 f and the motion sensors 32 a to 32 f is transmitted to the server device 20 .
- the detection result information may be wirelessly transmitted to the server device 20 from each of the motion sensors 31 a to 31 f and the motion sensors 32 a to 32 f . Since the base station 35 a and the base station 35 b emit flashing light and a laser light for scanning at regular intervals, the tracking information of each motion sensor is updated at each interval.
- the position and the orientation of each of the motion sensors 31 a to 31 f and the motion sensors 32 a to 32 f may be calculated based on the tracking information detected by the motion sensors 31 a to 31 f and the motion sensors 32 a to 32 f .
- the position and the orientation of each of the motion sensors 31 a to 31 f and the motion sensors 32 a to 32 f may be calculated based on, for example, the tracking information in the server device 20 .
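- Purely as an illustration, the tracking information each motion sensor reports at every scan interval could be represented by a record such as the following; the field names and units are assumptions, not the patent's data format.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple


@dataclass
class TrackingSample:
    sensor_id: str                                   # e.g. "31a" for a sensor worn by actor A1
    timestamp: float                                 # seconds since the start of the session
    position: Tuple[float, float, float]             # studio-space coordinates in metres
    orientation: Tuple[float, float, float, float]   # unit quaternion (w, x, y, z)


def latest_pose_per_sensor(samples: List[TrackingSample]) -> Dict[str, TrackingSample]:
    """Keep only the newest sample received from each motion sensor."""
    latest: Dict[str, TrackingSample] = {}
    for sample in sorted(samples, key=lambda s: s.timestamp):
        latest[sample.sensor_id] = sample
    return latest
```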
- the six motion sensors 31 a to 31 f are mounted on the actor A 1 .
- the motion sensors 31 a , 31 b , 31 c , 31 d , 31 e , and 31 f are attached to the left wrist, the right wrist, the left instep, the right instep, the hip, and top of the head of the actor A 1 , respectively.
- the motion sensors 31 a to 31 f may each be attached to the actor A 1 via an attachment.
- the six motion sensors 32 a to 32 f are mounted on the actor A 2 .
- the motion sensors 32 a to 32 f may be attached to the actor A 2 at the same positions as the motion sensors 31 a to 31 f .
- the motion sensors 31 a to 31 f and the motion sensors 32 a to 32 f shown in FIG. 2 are merely an example.
- the motion sensors 31 a to 31 f may be attached to various parts of the body of the actor A 1
- the motion sensors 32 a to 32 f may be attached to various parts of the body of the actor A 2 .
- the number of motion sensors attached to the actor A 1 and the actor A 2 may be less than or more than six.
- body movements of the actor A 1 and the actor A 2 are detected by detecting the position and the orientation of the motion sensors 31 a to 31 f and the motion sensors 32 a to 32 f attached to the body parts of the actor A 1 and the actor A 2 .
- a plurality of infrared LEDs are mounted on each of the motion sensors attached to the actor A 1 and the actor A 2 , and light from the infrared LEDs is sensed by infrared cameras provided on the floor and/or wall of the studio room R to detect the position and the orientation of each of the motion sensors.
- Visible light LEDs may be used instead of the infrared LEDs, and in this case light from the visible light LEDs may be sensed by visible light cameras to detect the position and the orientation of each of the motion sensors.
- a light emitting unit (for example, the infrared LED or the visible light LED) and a light receiving unit (for example, the infrared camera or the visible light camera) are thus used to detect the position and the orientation of each motion sensor.
- a plurality of reflective markers may be used instead of the motion sensors 31 a - 31 f and the motion sensors 32 a - 32 f .
- the reflective markers may be attached to the actor A 1 and the actor A 2 using an adhesive tape or the like.
- the position and orientation of each reflective marker can be estimated by capturing images of the actor A 1 and the actor A 2 to which the reflective markers are attached to generate captured image data and performing image processing on the captured image data.
- the controller 33 a and the controller 33 b supply, to the server device 20 , control signals that correspond to operation of the actor A 1 .
- the controller 34 a and the controller 34 b supply, to the server device 20 , control signals that correspond to operation of the actor A 2 .
- the tracking sensor 36 a and the tracking sensor 36 b generate tracking information for determining configuration information of a virtual camera used for constructing a virtual space included in the video.
- the tracking information of the tracking sensor 36 a and the tracking sensor 36 b is calculated as the position in its three-dimensional orthogonal coordinate system and the angle around each axis.
- the position and orientation of the tracking sensor 36 a may be changed according to operation of the operator.
- the tracking sensor 36 a transmits the tracking information indicating the position and the orientation of the tracking sensor 36 a to the server device 20 .
- the position and the orientation of the tracking sensor 36 b may be set according to operation of the operator.
- the tracking sensor 36 b transmits the tracking information indicating the position and the orientation of the tracking sensor 36 b to the server device 20 .
- the tracking sensors 36 a , 36 b each may be supported by a gimbal or stabilizer.
- the gimbal may be configured to be graspable by an actor or a supporter.
- the camera 37 a is attached to the head of the actor A 1 as described above.
- the camera 37 a is disposed so as to capture an image of the face of the actor A 1 .
- the camera 37 a continuously captures images of the face of the actor A 1 to obtain imaging data of the face of the actor A 1 .
- the camera 38 a is attached to the head of the actor A 2 .
- the camera 38 a is disposed so as to capture an image of the face of the actor A 2 and continuously capture images of the face of the actor A 2 to obtain captured image data of the face of the actor A 2 .
- the camera 37 a transmits the captured image data of the face of the actor A 1 to the server device 20
- the camera 38 a transmits the captured image data of the face of the actor A 2 to the server device 20
- the camera 37 a and the camera 38 a may be 3D cameras capable of detecting the depth of a face of a person.
- the display 39 is configured to display information received from the supporter computer 40 .
- the information transmitted from the supporter computer 40 to the display 39 may include, for example, text information, image information, and various other information.
- the display 39 is disposed at a position where the actor A 1 and the actor A 2 are able to see the display 39 .
- the display 39 is an example of a first display device.
- the supporter computer 40 is installed in a room next to the studio room R. Since the room in which the supporter computer 40 is installed and the studio room R are separated by the glass window, an operator of the supporter computer 40 (sometimes referred to as a “supporter” in this specification) is able to see the actor A 1 and the actor A 2 .
- supporters B 1 and B 2 are present in the room as the operators of the supporter computer 40 .
- the supporter B 1 and the supporter B 2 may be collectively referred to as the “supporter” when it is not necessary to distinguish them from each other.
- the supporter computer 40 may be configured to be capable of changing the setting(s) of the component(s) of the studio unit 30 according to the operation by the supporter B 1 and the supporter B 2 .
- the supporter computer 40 can change, for example, the setting of the scanning interval performed by the base station 35 a and the base station 35 b , the position or orientation of the tracking sensor 36 a and the tracking sensor 36 b , and various settings of other devices.
- At least one of the supporter B 1 and the supporter B 2 is able to input a message to the supporter computer 40 , and the input message is displayed on the display 39 .
- the components and functions of the studio unit 30 shown in FIG. 2 are merely examples.
- the studio unit 30 applicable to the invention may include various constituent elements that are not shown.
- the studio unit 30 may include a projector.
- the projector is able to project a video distributed to the client device 10 a or another client device on the screen S.
- the storage 23 stores model data 23 a , object data 23 b , a possession list 23 c , a candidate list 23 d , and any other information required for generation and distribution of a video to be distributed.
- the model data 23 a is model data for generating animation of a character.
- the model data 23 a may be three-dimensional model data for generating three-dimensional animation, or may be two-dimensional model data for generating two-dimensional animation.
- the model data 23 a includes, for example, rig data (also referred to as “skeleton data”) indicating a skeleton of a character, and surface data indicating the shape or texture of a surface of the character.
- the model data 23 a may include two or more different pieces of model data. Pieces of model data may each have different rig data, or may have the same rig data. The pieces of model data may have surface data different from each other or may have the same surface data.
- in order to generate a character object corresponding to the actor A 1 and a character object corresponding to the actor A 2 , the model data 23 a includes at least two types of model data different from each other.
- the model data for the character object corresponding to the actor A 1 and the model data for the character object corresponding to the actor A 2 may have, for example, the same rig data but different surface data from each other.
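- A toy sketch of the model data split described above, with rig (skeleton) data shared between the two characters and per-character surface data, might look as follows; the class and field names are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Bone:
    name: str
    parent: Optional[str]


@dataclass
class RigData:
    bones: List[Bone]


@dataclass
class SurfaceData:
    mesh_uri: str
    texture_uri: str


@dataclass
class CharacterModel:
    rig: RigData
    surface: SurfaceData


# Two character objects sharing the same rig data but using different surface data.
shared_rig = RigData(bones=[Bone("hips", None), Bone("spine", "hips"), Bone("head", "spine")])
model_for_actor_a1 = CharacterModel(rig=shared_rig, surface=SurfaceData("a1.mesh", "a1.png"))
model_for_actor_a2 = CharacterModel(rig=shared_rig, surface=SurfaceData("a2.mesh", "a2.png"))
```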
- the object data 23 b includes asset data used for constructing a virtual space in the video.
- the object data 23 b includes data for rendering a background of the virtual space in the video, data for rendering various objects displayed in the video, and data for rendering any other objects displayed in the video.
- the object data 23 b may include object position information indicating the position of an object in the virtual space.
- the object data 23 b may include a gift object displayed in the video in response to a display request from viewing users of the client devices 10 a to 10 c .
- the gift object may include an effect object, a normal object, and a decorative object. Viewing users are able to purchase a desired gift object.
- an upper limit may be set for the number of objects that a viewing user is allowed to purchase or the amount of money that the viewing user is allowed to spend for objects.
- the effect object is an object that affects the impression of the entire viewing screen of the distributed video, and is, for example, an object representing confetti.
- the object representing confetti may be displayed on the entire viewing screen, which can change the impression of the entire viewing screen before and after its display.
- the effect object may be displayed so as to overlap with the character object, but it is different from the decorative object in that it is not displayed in association with a specific portion of the character object.
- the normal object is an object functioning as a digital gift from a viewing user to an actor (for example, the actor A 1 or the actor A 2 ), for example, an object resembling a stuffed toy or a bouquet.
- the normal object is displayed on the display screen of the video such that it does not contact the character object.
- the normal object is displayed on the display screen of the video such that it does not overlap with the character object.
- the normal object may be displayed in the virtual space such that it overlaps with an object other than the character object.
- the normal object may be displayed so as to overlap with the character object, but it is different from the decorative object in that it is not displayed in association with a specific portion of the character object.
- when the normal object is displayed such that it overlaps with the character object, the normal object may hide portions of the character object other than the head including the face of the character object, but it does not hide the head of the character object.
- the decorative object is an object displayed on the display screen in association with a specific part of the character object.
- the decorative object displayed on the display screen in association with a specific part of the character object is displayed adjacent to the specific part of the character object on the display screen.
- the decorative object displayed on the display screen in association with a specific part of the character object is displayed such that it partially or entirely covers the specific part of the character object on the display screen.
- the decorative object is an object that can be attached to a character object, for example, an accessory (such as a headband, a necklace, an earring, etc.), clothes (such as a T-shirt), a costume, and any other object which can be attached to the character object.
- the object data 23 b corresponding to the decorative object may include attachment position information indicating which part of the character object the decorative object is associated with.
- the attachment position information of a decorative object may indicate to which part of the character object the decorative object is attached. For example, when the decorative object is a headband, the attachment position information of the decorative object may indicate that the decorative object is attached to the “head” of the character object. When the decorative object is a T-shirt, the attachment position information of the decorative object may indicate that the decorative object is attached to the “torso” of the character object.
- a duration of time of displaying the gift objects may be set for each gift object depending on its type.
- the duration of displaying the decorative object may be set longer than the duration of displaying the effect object and the duration of displaying the normal object.
- the duration of displaying the decorative object may be set to 60 seconds, while the duration of displaying the effect object may be set to five seconds and the duration of displaying the normal object may be set to ten seconds.
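- Those example durations could be held in a simple per-type configuration such as the following; the dictionary layout is an assumption for illustration only.

```python
# Duration each type of gift object stays on screen, following the example values above.
GIFT_DISPLAY_DURATION_SECONDS = {
    "effect": 5,       # e.g. confetti covering the whole viewing screen
    "normal": 10,      # e.g. a stuffed toy or a bouquet
    "decorative": 60,  # e.g. a headband attached to a specific part of the character object
}
```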
- the possession list 23 c is a list showing gift objects possessed by viewing users of a video.
- An example of the possession list 23 c is shown in FIG. 3 .
- an object ID for identifying a gift object possessed by a viewing user is stored in association with account information of the viewing user (for example, user ID of the viewing user).
- the viewing users include, for example, the first to third viewing users of the client devices 10 a to 10 c.
- the candidate list 23 d is a list of decorative objects for which a display request has been made from a viewing user. As will be described later, a viewing user who holds a decorative object(s) is able to make a request to display his/her possessed decorative objects.
- object IDs for identifying decorative objects are stored in association with the account information of the viewing user who has made a request to display the decorative objects.
- the candidate list 23 d may be created for each distributor.
- the candidate list 23 d may be stored, for example, in association with distributor identification information that identify a distributor(s) (the actor A 1 , the actor A 2 , the supporter B 1 , and/or the supporter B 2 ).
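- Hypothetical in-memory shapes for the possession list 23 c and the candidate list 23 d are sketched below; the keys and structure are assumptions, since the description only requires object IDs to be stored in association with user IDs (and, for the candidate list, distributor identification information).

```python
from collections import defaultdict
from typing import Dict, List, Tuple

# possession list 23c: viewing user ID -> object IDs of the gift objects the user holds
possession_list: Dict[str, List[str]] = defaultdict(list)

# candidate list 23d: distributor ID -> (requesting user ID, decorative object ID) pairs
candidate_list: Dict[str, List[Tuple[str, str]]] = defaultdict(list)

possession_list["user_1"].append("headband_01")
candidate_list["distributor_a1"].append(("user_1", "headband_01"))
```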
- the computer processor 21 functions as a body motion data generation unit 21 a , a face motion data generation unit 21 b , an animation generation unit 21 c , a video generation unit 21 d , a video distribution unit 21 e , a display request processing unit 21 f , a decorative object selection unit 21 g , and an object purchase processing unit 21 h by executing computer-readable instructions included in a distributed program.
- these functions may be realized by a computer processor other than the computer processor 21 of the video distribution system 1 .
- at least some of the functions realized by the computer processor 21 may be realized by a computer processor mounted on the supporter computer 40 .
- the body motion data generation unit 21 a generates first body motion data, which is a digital representation of the position and the orientation of each part of the body of the actor A 1 , based on the tracking information obtained through detection performed by the corresponding motion sensors 31 a to 31 f , and generates second body motion data, which is a digital representation of the position and the orientation of each part of the body of the actor A 2 , based on the tracking information obtained through detection performed by the corresponding motion sensors 32 a to 32 f .
- the first body motion data and the second body motion data may be collectively referred to simply as “body motion data.”
- the body motion data is serially generated with time as needed. For example, the body motion data may be generated at predetermined sampling time intervals.
- the body motion data can represent body movements of the actor A 1 and the actor A 2 in time series as digital data.
- the motion sensors 31 a to 31 f and the motion sensors 32 a to 32 f are attached to the left and right limbs, the waist, and the head of the actor A 1 and the actor A 2 , respectively.
- Based on the tracking information obtained through detection performed by the motion sensors 31 a to 31 f and the motion sensors 32 a to 32 f it is possible to digitally represent the position and orientation of the substantially whole body of the actor A 1 and the actor A 2 in time series.
- the body motion data can define, for example, the position and rotation angle of bones corresponding to the rig data included in the model data 23 a.
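- A simplified sketch of this periodic generation step is shown below: at each sampling interval the latest pose of every motion sensor is mapped onto the bone it is attached to. The sensor-to-bone table and function are assumptions, not the patent's algorithm.

```python
from typing import Dict

# Hypothetical mapping from the six motion sensors worn by actor A1 to rig bones.
SENSOR_TO_BONE = {
    "31a": "left_wrist", "31b": "right_wrist",
    "31c": "left_foot", "31d": "right_foot",
    "31e": "hips", "31f": "head",
}


def generate_body_motion_frame(latest_samples: Dict[str, "TrackingSample"]) -> Dict[str, dict]:
    """Build one frame of body motion data keyed by bone name from the latest sensor samples."""
    frame: Dict[str, dict] = {}
    for sensor_id, bone in SENSOR_TO_BONE.items():
        sample = latest_samples.get(sensor_id)
        if sample is None:
            continue  # sensor temporarily missing; the caller can keep the previous pose
        frame[bone] = {"position": sample.position, "rotation": sample.orientation}
    return frame
```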
- the face motion data generation unit 21 b generates first face motion data, which is a digital representation of motions of the face of the actor A 1 , based on captured image data of the camera 37 a , and generates second face motion data, which is a digital representation of motions of the face of the actor A 2 , based on captured image data of the camera 38 a .
- first face motion data and the second face motion data may be collectively referred to simply as “face motion data.”
- the face motion data is serially generated with time as needed.
- the face motion data may be generated at predetermined sampling time intervals.
- the face motion data can digitally represent facial motions (changes in facial expression) of the actor A 1 and the actor A 2 in time series.
- the animation generation unit 21 c is configured to apply the body motion data generated by the body motion data generation unit 21 a and the face motion data generated by the face motion data generation unit 21 b to predetermined model data included in the model data 23 a in order to generate an animation of a character object that moves in a virtual space and whose facial expression changes. More specifically, the animation generation unit 21 c may generate an animation of a character object moving in synchronization with the motions of the body and facial expression of the actor A 1 based on the first body motion data and the first face motion data related to the actor A 1 , and generate an animation of a character object moving in synchronization with the motions of the body and facial expression of the actor A 2 based on the second body motion data and the second face motion data related to the actor A 2 .
- a character object generated based on the motion and expression of the actor A 1 may be referred to as a “first character object”, and a character object generated based on the motion and expression of the actor A 2 may be referred to as a “second character object.”
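- Conceptually, one frame of such an animation could be produced as sketched below, with the body motion frame driving the rig and the face motion frame driving the facial expression; every helper here is a placeholder stand-in for a real animation engine.

```python
from typing import Dict, Tuple


def apply_pose_to_rig(rig, body_frame: Dict[str, dict]) -> Dict[str, dict]:
    # A real engine would solve the full skeleton; here each tracked bone just takes
    # the pose supplied in the body motion frame (untracked bones stay empty).
    return {bone.name: body_frame.get(bone.name, {}) for bone in rig.bones}


def apply_face_motion(face_frame: Dict[str, float]) -> Dict[str, float]:
    # Clamp hypothetical blend-shape weights (e.g. "mouth_open", "eye_blink") to [0, 1].
    return {name: max(0.0, min(1.0, weight)) for name, weight in face_frame.items()}


def generate_character_animation(model, body_frame: Dict[str, dict],
                                 face_frame: Dict[str, float]) -> Tuple[dict, dict]:
    """One animation frame: posed skeleton plus facial expression weights."""
    return apply_pose_to_rig(model.rig, body_frame), apply_face_motion(face_frame)
```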
- the video generation unit 21 d constructs a virtual space using the object data 23 b , and generates a video that includes the virtual space, the animation of the first character object corresponding to the actor A 1 , and the animation of the second character object corresponding to the actor A 2 .
- the first character object is disposed in the virtual space so as to correspond to the position of the actor A 1 with respect to the tracking sensor 36 a
- the second character object is disposed in the virtual space so as to correspond to the position of the actor A 2 with respect to the tracking sensor 36 a .
- the video generation unit 21 d constructs a virtual space based on tracking information of the tracking sensor 36 a .
- the video generation unit 21 d determines configuration information (the position in the virtual space, a gaze position, a gazing direction, and the angle of view) of the virtual camera based on the tracking information of the tracking sensor 36 a .
- the video generation unit 21 d determines a rendering area in the entire virtual space based on the configuration information of the virtual camera and generates moving image information for displaying the rendering area in the virtual space.
- the video generation unit 21 d may be configured to determine the position and the orientation of the first character object and the second character object in the virtual space, and the configuration information of the virtual camera based on tracking information of the tracking sensor 36 b instead of or in addition to the tracking information of the tracking sensor 36 a.
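- As an illustrative assumption, the virtual camera configuration information could be derived from a tracking sensor pose roughly as follows; the dataclass, the fixed gaze distance, and the default angle of view are not taken from the patent.

```python
from dataclasses import dataclass
from typing import Tuple


@dataclass
class VirtualCameraConfig:
    position: Tuple[float, float, float]
    gaze_direction: Tuple[float, float, float]
    gaze_position: Tuple[float, float, float]
    angle_of_view_deg: float


def camera_config_from_tracking(position: Tuple[float, float, float],
                                forward: Tuple[float, float, float],
                                angle_of_view_deg: float = 60.0,
                                gaze_distance: float = 2.0) -> VirtualCameraConfig:
    """Place the virtual camera at the tracking sensor pose and gaze a fixed distance ahead."""
    gaze_position = tuple(p + gaze_distance * f for p, f in zip(position, forward))
    return VirtualCameraConfig(position=position, gaze_direction=forward,
                               gaze_position=gaze_position, angle_of_view_deg=angle_of_view_deg)
```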
- the video generation unit 21 d is able to include voices of the actor A 1 and the actor A 2 collected by the microphone in the studio unit 30 with the generated moving image.
- the video generation unit 21 d generates an animation of the first character object moving in synchronization with the movement of the body and facial expression of the actor A 1 , and an animation of the second character moving in synchronization with the movement of the body and facial expression of the actor A 2 .
- the video generation unit 21 d then includes the voices of the actor A 1 and the actor A 2 with the animations respectively to generate a video for distribution.
- the video distribution unit 21 e distributes the video generated by the video generation unit 21 d .
- the video is distributed to the client devices 10 a to 10 c and other client devices over the network 50 .
- the video distribution unit 21 e refers to the list of users who have requested distribution of the video (a distribution destination list), and distributes the video to the client devices of the users included in the list.
- the video is distributed to users other than the user to whom distribution of the video is prohibited.
- the received video is reproduced in the client devices 10 a to 10 c.
- the video may be distributed to a client device (not shown) installed in the studio room R, and projected from the client device onto the screen S via a short focus projector.
- the video may also be distributed to the supporter computer 40 . In this way, the supporter B 1 and the supporter B 2 can check the viewing screen of the distributed video.
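- A simple sketch of the distribution-destination filtering described above (including the exclusion of a user to whom distribution is prohibited) might look like this; the function and variable names are assumptions.

```python
from typing import Iterable, List, Set


def resolve_distribution_targets(requested_users: Iterable[str],
                                 banned_users: Set[str]) -> List[str]:
    """Return the users on the distribution destination list who should actually receive the stream."""
    return [user for user in requested_users if user not in banned_users]


targets = resolve_distribution_targets(["user_1", "user_2", "user_3"], banned_users={"user_2"})
# targets == ["user_1", "user_3"]
```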
- An example of the screen on which the video distributed from the server device 20 to the client device 10 a and reproduced by the client device 10 a is displayed is illustrated in FIG. 5 .
- a display image 70 of the video distributed from the server device 20 is displayed on the display of the client device 10 a .
- the display image 70 displayed on the client device 10 a includes a character object 71 A corresponding to the actor A 1 , a character object 71 B corresponding to the actor A 2 , and a table object 72 a representing a table in a virtual space.
- the table object 72 a is not a gift object, but is one of the objects used for constructing the virtual space included in the object data 23 b .
- the character object 71 A is generated by applying the first body motion data and the first face motion data of the actor A 1 to the model data for the actor A 1 included in the model data 23 a .
- the character object 71 A is motion controlled based on the first body motion data and the first face motion data.
- the character object 71 B is generated by applying the second body motion data and the second face motion data of the actor A 2 to the model data for the actor A 2 included in the model data 23 a .
- the character object 71 B is motion controlled based on the second body motion data and the second face motion data.
- the character object 71 A is controlled to move in the screen in synchronization with the motions of the body and facial expression of the actor A 1
- the character object 71 B is controlled to move in the screen in synchronization with the motions of the body and facial expression of the actor A 2 .
- the video from the server device 20 may be distributed to the supporter computer 40 .
- the video distributed to the supporter computer 40 is displayed on the supporter computer 40 in the same manner as FIG. 5 .
- the supporter B 1 and the supporter B 2 are able to change the configurations of the components of the studio unit 30 while viewing the video reproduced by the supporter computer 40 .
- they can cause an instruction signal to change the orientation of the tracking sensor 36 a to be sent from the supporter computer 40 to the tracking sensor 36 a .
- the tracking sensor 36 a is able to change its orientation in accordance with the instruction signal.
- the tracking sensor 36 a may be rotatably attached to a stand via a pivoting mechanism that includes an actuator disposed around the axis of the stand.
- the actuator of the pivoting mechanism may be driven based on the instruction signal, and the tracking sensor 36 a may be turned by an angle according to the instruction signal.
- the supporter B 1 and the supporter B 2 may cause the supporter computer 40 to transmit an instruction for using the tracking information of the tracking sensor 36 b to the tracking sensor 36 a and the tracking sensor 36 b , instead of the tracking information from the tracking sensor 36 a .
- the tracking sensor 36 a and the tracking sensor 36 b may be configured and installed so as to be movable by the actor or supporter. As a result, the actor or supporter can hold and move the tracking sensor 36 a and the tracking sensor 36 b.
- the supporter B 1 and the supporter B 2 may input a message indicating the instruction(s) into the supporter computer 40 , and the message may be output to the display 39 .
- the supporter B 1 and the supporter B 2 can instruct the actor A 1 or the actor A 2 to change his/her standing position through the message displayed on the display 39 .
- the display request processing unit 21 f receives a display request to display a gift object from a client device of a viewing user, and performs processing according to the display request.
- Each viewing user is able to transmit a display request to display a gift object to the server device 20 by operating his/her client device.
- the first viewing user can transmit a display request to display a gift object to the server device 20 by operating the client device 10 a .
- the display request to display a gift object may include the user ID of the viewing user and the identification information (object ID) that identifies the object for which the display request is made.
- the gift object may include the effect object, the normal object, and the decorative object.
- the effect object and the normal object are examples of the first object.
- a display request for requesting display of the effect object or the normal object is an example of a second display request.
- when the display request processing unit 21 f receives a display request to display a specific effect object from a viewing user, the display request processing unit 21 f performs a process, in response to the display request, to display the effect object for which the display request is made in the display image 70 of the video. For example, when a display request to display an effect object simulating confetti is made, the display request processing unit 21 f displays in the display image 70 an effect object 73 simulating confetti based on the display request as shown in FIG. 6 .
- when the display request processing unit 21 f receives a display request to display a specific normal object from a viewing user, the display request processing unit 21 f performs a process, in response to the display request, to display the normal object for which the display request is made in the display image 70 of the video. For example, when a display request to display a normal object simulating a stuffed bear is made, the display request processing unit 21 f displays a normal object 74 simulating a stuffed bear in the display image 70 based on the display request as shown in FIG. 6 .
- the display request for the normal object 74 may include a display position specifying parameter for specifying the display position of the normal object 74 in the virtual space.
- the display request processing unit 21 f displays the normal object 74 at the position specified by the display position specifying parameter in the virtual space.
- the display position specifying parameter may specify the upper position of the table object 72 a representing a table as the display position of the normal object 74 .
- a viewing user is able to specify the position where the normal object is to be displayed by using the display position specifying parameter while watching the layouts of the character object 71 A, the character object 71 B, the gift object, and other objects included in the video 70 .
- the normal object 74 may be displayed such that it moves within the display image 70 of the video.
- the normal object 74 may be displayed such that it falls from the top to the bottom of the screen.
- the normal object 74 may be displayed in the display image 70 during the fall, which is from when the object starts to fall and to when the object has fallen to the floor of the virtual space of the video 70 , and may disappear from the display image 70 after it has fallen to the floor.
- a viewing user can view the falling normal object 74 from the start of the fall to the end of the fall.
- the moving direction of the normal object 74 in the screen can be specified as desired.
- the normal object 74 may be displayed in the display image 70 so as to move from the left to the right, the right to the left, the upper left to the lower left, or any other direction of the video 70 .
- the normal object 74 may move on various paths.
- the normal object 74 can move on a linear path, a circular path, an elliptical path, a spiral path, or any other paths.
- the viewing user may include, in the display request to display the normal object, a moving direction parameter that specifies the moving direction of the normal object 74 and/or a path parameter that specifies the path on which the normal object 74 moves, in addition to or in place of the display position specifying parameter.
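- For illustration only, a display request for a normal object carrying the optional display position, moving direction, and path parameters described above might be serialized like this; the JSON field names are assumptions, not a documented API.

```python
import json

normal_object_display_request = {
    "user_id": "user_1",
    "object_id": "stuffed_bear_74",
    "display_position": {"anchor": "table_72a_top"},  # optional display position specifying parameter
    "moving_direction": "top_to_bottom",              # optional moving direction parameter
    "path": "linear",                                 # optional path parameter: linear, circular, spiral, ...
}
payload = json.dumps(normal_object_display_request)
```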
- among the normal objects, those whose size in the virtual space is smaller than a reference size may be displayed such that a part or all of the object(s) is overlapped with the character object 71 A and/or the character object 71 B.
- those whose size in the virtual space is larger than the reference size may be displayed at a position where the object is not overlapped with the character object.
- the object is displayed behind the overlapping character object.
- when the display request processing unit 21 f receives a display request to display a specific decorative object from a viewing user, the display request processing unit 21 f adds the decorative object for which the display request is made to the candidate list 23 d based on the display request.
- the display request to display the decorative object is an example of a first display request.
- the display request processing unit 21 f may store identification information (object ID) identifying the specific decorative object for which the display request has been made from the viewing user in the candidate list 23 d in association with the user ID of the viewing user (see FIG. 4 ).
- the user ID of the viewing user who made the display request and the decorative object ID of the decoration object for which the display request is made by the viewing user are associated with each other and stored in the candidate list 23 d.
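- The candidate-list handling for such a request could be as simple as the sketch below: the decorative object is not displayed immediately but only queued for later selection. Names and structure are assumptions, reusing the hypothetical candidate list shape shown earlier.

```python
from typing import Dict, List, Tuple


def handle_decorative_display_request(candidate_list: Dict[str, List[Tuple[str, str]]],
                                      distributor_id: str, user_id: str, object_id: str) -> None:
    """Queue (requesting user, decorative object) on the distributor's candidate list."""
    candidate_list.setdefault(distributor_id, []).append((user_id, object_id))


queue: Dict[str, List[Tuple[str, str]]] = {}
handle_decorative_display_request(queue, "distributor_a1", "user_1", "headband_01")
# queue == {"distributor_a1": [("user_1", "headband_01")]}
```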
- in response to one or more of the decorative objects included in the candidate list 23 d being selected, the decorative object selection unit 21 g performs a process to display the selected decorative object in the display image 70 of the video.
- a decorative object selected from the candidate list 23 d may be referred to as a “selected decorative object.”
- the selection of the decoration object from the candidate list 23 d is made, for example, by the supporter B 1 and/or the supporter B 2 who operate the supporter computer 40 .
- the supporter computer 40 displays a decorative object selection screen.
- FIG. 8 shows an example of a decorative object selection screen 80 in one embodiment.
- the decorative object selection screen 80 is displayed, for example, on the display of the supporter computer 40 .
- the decorative object selection screen 80 shows, for example, each of the plurality of decoration objects included in the candidate list 23 d in a tabular form.
- the decorative object selection screen 80 in one embodiment includes a first column 81 showing the type of the decoration object, a second column 82 showing the image of the decoration object, and a third column 83 showing the body part of a character object associated with the decoration object. Further, on the decorative object selection screen 80 , selection buttons 84 a to 84 c for selecting each decoration object are displayed. Thus, the decorative object selection screen 80 displays decorative objects that can be selected as the selected decorative object.
- the supporters B 1 and B 2 are able to select one or more of the decorative objects shown on the decoration object selection screen 80 .
- the supporter B 1 and the supporter B 2 are able to select a headband by selecting the selection button 84 a .
- the display request processing unit 21 f displays the selected decorative object 75 that simulates the selected headband on the display screen 70 of the video, as shown in FIG. 7 .
- the selected decorative object 75 is displayed on the display image 70 in association with a specific body part of a character object.
- the selected decorative object 75 may be displayed such that it contacts with the specific body part of the character object.
- since the selected decorative object 75 simulating the headband is associated with the head of the character object, it is attached to the head of the character object 71 A as shown in FIG. 7 .
- the decorative object may be displayed on the display screen 70 such that it moves along with the motion of the specific part of the character object. For example, when the head of the character object 71 A with the headband moves, the selected decorative object 75 simulating the headband moves in accordance with the motion of the head of the character object 71 A as if the headband is attached to the head of the character object 71 A.
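- One plausible way to realize such behavior, offered here only as an assumption-level sketch and not as the disclosed implementation of the animation generation unit 21 c , is to re-evaluate the decorative object's placement every frame from the transform of the body part it is associated with. The example below uses a simplified two-dimensional translation and rotation.

```python
import math

def place_accessory(part_pos, part_angle, local_offset):
    """Compute the world position of an accessory attached to a body part.

    part_pos:     (x, y) position of the body part (e.g. the head) this frame
    part_angle:   rotation of the body part in radians
    local_offset: offset of the accessory relative to the part (e.g. top of the head)
    """
    ox, oy = local_offset
    cos_a, sin_a = math.cos(part_angle), math.sin(part_angle)
    # Rotate the local offset by the part's rotation, then translate by its position,
    # so the accessory follows the part as if physically attached to it.
    return (part_pos[0] + ox * cos_a - oy * sin_a,
            part_pos[1] + ox * sin_a + oy * cos_a)

# Example: a headband drawn slightly above the head, recomputed as the head moves.
head_position, head_rotation = (120.0, 80.0), math.radians(10)
headband_position = place_accessory(head_position, head_rotation, (0.0, -12.0))
```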
- the selected decorative object 75 may be displayed on the display screen 70 in association with the character object 71 B instead of the character object 71 A.
- the selected decorative object 75 may be displayed on the display screen 70 in association with the character object 71 A and the character object 71 B.
- the decorative object selection screen 80 may be configured to exclude information identifying a user who holds the decorative object or a user who has made a display request to display the decorative object. By configuring the decorative object selection screen 80 in this manner, it is possible to prevent a selector from giving preference to a particular user when selecting a decorative object.
- the decorative object selection screen 80 may display, for each decorative object, information regarding a user who holds the decorative object or a user who made a display request for the decorative object.
- Such information displayed for each decorative object may include, for example, the number of times that the user who made this display request of the decorative object has made display requests of the decorative object so far and the number of times that the decorative object has been actually selected (for example, information indicating that the display request to display the decorative object has been made five times and the decorative object has been selected two times among the five times), the number of times that the user views the video of the character object 71 A and/or the character object 71 B, the number of times that the user views a video (regardless of whether the character object 71 A and/or the character object 71 B appears in the video or not), the amount of money which the user spent for the gift object, the number of times that the user purchases the object, the points possessed by the user that can be used in the video distribution system 1 , the level of the user in the video distribution system 1 , and any other information regarding the user.
- a constraint(s) may be imposed on the display of decorative objects to eliminate overlapping.
- if a decorative object associated with a specific body part of the character object is already selected, selection of other decorative objects associated with the same body part may be prohibited.
- for example, when a decorative object associated with the "head", such as the headband, has already been selected, other decorative objects associated with the "head", for example a decorative object simulating a "hat", cannot be selected. In this case, a selection button for selecting the decorative object simulating the hat is made unselectable on the decorative object selection screen 80 .
- the decorative object selection screen 80 may be displayed on another device instead of or in addition to the supporter computer 40 .
- the decorative object selection screen 80 may be displayed on the display 39 and/or the screen S in the studio room R.
- the actor A 1 and the actor A 2 are able to select a desired decorative object based on the decorative object selection screen 80 displayed on the display 39 or the screen S. Selection of the decorative object by the actor A 1 and the actor A 2 may be made, for example, by operating the controller 33 a , the controller 33 b , the controller 34 a , or the controller 34 b.
- the object purchase processing unit 21 h transmits, to a client device of the viewing user (for example, the client device 10 a ), purchase information of each of the plurality of gift objects that can be purchased in relation to the video.
- the purchase information of each gift object may include the type of the gift object (the effect object, the normal object, or the decorative object), the image of the gift object, the price of the gift object, and any other information necessary to purchase the gift object.
- the viewing user is able to select a gift object to purchase, taking into account the gift object purchase information displayed on the client device 10 a .
- the selection of the gift objects which the viewing user purchases may be performed by operating the client device 10 a .
- when a gift object to be purchased is selected, a purchase request for the gift object is transmitted to the server device 20 .
- the object purchase processing unit 21 h performs a payment process based on the purchase request.
- the purchased gift object is held by the viewing user.
- the object ID of the purchased gift object is stored in the possession list 23 c in association with the user ID of the viewing user who purchased the object.
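- The purchase flow described above can be summarized as: receive a purchase request, run the payment process, and, only on success, record the purchased object in the possession list keyed by the purchaser's user ID. The sketch below is illustrative only; the payment function is a stand-in assumption, not an API of the disclosed system.

```python
from collections import defaultdict

# possession list 23c (sketch): user ID -> object IDs held by that user
possession_list = defaultdict(list)

def charge(user_id: str, price: int) -> bool:
    """Stand-in for the real payment process; assumed to return True on success."""
    return True

def handle_purchase_request(user_id: str, object_id: str, price: int) -> bool:
    """Perform the payment process and, if it succeeds, record the purchased gift object."""
    if not charge(user_id, price):
        return False
    possession_list[user_id].append(object_id)
    return True

handle_purchase_request("viewer-001", "gift-confetti", 300)
```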
- Gift objects that can be purchased may be different for each video.
- the gift objects may be made purchasable in two or more different videos. That is, the purchasable gift objects may include a gift object unique to each video and a common gift object that can be purchased in the videos.
- the effect object that simulates confetti may be the common gift object that can be purchased in the two or more different videos.
- when a user purchases an effect object while viewing a video, the purchased effect object may be displayed automatically in the video that the user is viewing in response to completion of the payment process to purchase the effect object.
- when a user purchases a normal object while viewing a video, the purchased normal object may be automatically displayed in the video that the user is viewing in response to completion of the payment process to purchase the normal object.
- a notification of the completion of the payment process may be sent to the client device 10 a , and a confirmation screen may be displayed to confirm whether the viewing user wants to make a display request to display the purchased effect object on the client device 10 a .
- the display request to display the purchased effect object may be sent from the client device of the viewing user to the display request processing unit 21 f , and the display request processing unit 21 f may perform the process to display the purchased effect object in the display image 70 of the video.
- a confirmation screen may be displayed on the client device 10 a to confirm whether the viewing user wants to make a display request to display the purchased normal object, in the same manner as above.
- the supporter computer 40 includes a computer processor 41 , a communication I/F 42 , a storage 43 , a display 44 , and an input interface 45 .
- the computer processor 41 may be any computing device such as a CPU.
- the communication I/F 42 may be a driver, software, or a combination thereof for communicating with other devices.
- the storage 43 may be a storage device capable of storing data such as a magnetic disk.
- the display 44 may be a liquid crystal display, an organic EL display, an inorganic EL display or any other display device capable of displaying images.
- the input interface 45 may be any input interface that receives input from the supporter such as a mouse and a keyboard.
- the display 44 is an example of a second display device.
- FIG. 9 illustrates an example of an image displayed on the display 44 of the supporter computer 40 .
- a display image 46 displayed on the display 44 includes a first display area 47 a for displaying a display screen of a video in a first type client device (for example, a personal computer), a second display area 47 b for displaying a display screen of the video in a second type client device (for example, a smart phone), and a plurality of operation icons 48 .
- by monitoring the video displayed in the first display area 47 a , the supporter is able to check whether the video distributed to the first type client device is normally displayed. Similarly, by monitoring the video displayed in the second display area 47 b , the supporter is able to check whether the video distributed to the second type client device is normally displayed.
- the operation icons 48 include a first operation icon 48 a , a second operation icon 48 b , a third operation icon 48 c , a fourth operation icon 48 d , and a fifth operation icon 48 e .
- the supporter is able to select a desired operation icon via the input interface 45 .
- when the first operation icon 48 a , the second operation icon 48 b , the third operation icon 48 c , the fourth operation icon 48 d , and the fifth operation icon 48 e are selected, the supporter computer 40 receives a first operation input, a second operation input, a third operation input, a fourth operation input, and a fifth operation input, respectively.
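- Conceptually, each operation icon simply maps to one of the five operation inputs handled by the supporter computer 40 . The trivial dispatch sketch below is only an assumed illustration of that mapping; the icon identifiers are hypothetical.

```python
# Assumed mapping from operation icon IDs to the operation inputs they generate.
ICON_TO_OPERATION = {
    "icon_48a": "first_operation_input",   # show the message 101 on the display 39 only
    "icon_48b": "second_operation_input",  # show the interruption image 111 everywhere
    "icon_48c": "third_operation_input",   # switch to the modified character object
    "icon_48d": "fourth_operation_input",  # show the blind object
    "icon_48e": "fifth_operation_input",   # manage banned users
}

def on_icon_selected(icon_id: str) -> str:
    """Translate a selected icon into the operation input the supporter computer receives."""
    return ICON_TO_OPERATION[icon_id]
```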
- the computer processor 41 functions as an additional information display unit 41 a and a distribution management unit 41 b by executing computer-readable instructions included in a predetermined program. At least some of the functions that can be realized by the computer processor 41 may be realized by a computer processor other than the computer processor 41 of the video distribution system 1 . At least some of the functions realized by the computer processor 41 may be realized by, for example, the computer processor 21 .
- the additional information display unit 41 a is configured to add various additional information to the video in accordance with various operation inputs from the supporter via the input interface 45 .
- when the supporter computer 40 receives the first operation input made by selecting the first operation icon 48 a , the additional information display unit 41 a displays a message 101 in the video displayed on the display 39 based on the first operation input, while keeping the message 101 undisplayed on the client device.
- the message 101 is an example of first additional information.
- FIG. 10 shows an example of a display image of the video displayed on the display 39 .
- the display image 100 shown in FIG. 10 includes the message 101 .
- the display image 100 is similar to the display image 70 (see FIG. 5 ) of the video distributed and displayed on the client device of the viewing user at the same time except that the display image 100 includes the message 101 . That is, the display image 100 displayed on the display 39 is configured by adding the message 101 to the video being distributed.
- the message 101 is, for example, a text message input by a supporter.
- the message 101 may include image information instead of text format information or in addition to the text format information.
- the message 101 may include an instruction on an actor's performance, an instruction on an actor's statement, an instruction on an actor's position, and various other instructions or messages related to the live distributed video.
- a window that allows the supporter to input a message is displayed on the display 44 of the supporter computer 40 .
- the supporter is able to input a message in a message input area of the window by operating the input interface 45 .
- the inputted message is transmitted from the supporter computer 40 to the display 39 .
- the display 39 and the client devices 10 a to 10 c display the video distributed by the video distribution unit 21 e .
- on the display 39 , the message received from the supporter computer 40 is displayed as the message 101 in a predetermined area of the video distributed by the video distribution unit 21 e , as shown in FIG. 10 .
- on the client devices 10 a to 10 c , the message 101 or information corresponding thereto is not displayed. According to this embodiment, it is possible to communicate instructions and messages regarding the live distributed video to the actor or other distribution staff member who can see the display 39 without affecting the video viewed by viewing users of the client devices.
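- In other words, the first additional information is composited only into the frames shown on the display 39 , while the frames sent to the client devices are left untouched. The sketch below is an assumption-level illustration of that routing decision; it assumes a frame object with copy and draw_text helpers, which are not part of the original disclosure.

```python
def compose_frame(base_frame, message, target):
    """Return the frame to show on a given target.

    base_frame: a frame of the live-distributed video
    message:    the supporter's message 101, or None if no message is active
    target:     "studio_display" (the display 39) or "client_device"
    """
    if target == "studio_display" and message is not None:
        frame = base_frame.copy()
        frame.draw_text(message, area="message_area")  # assumed drawing helper
        return frame
    # Client devices always receive the unmodified video frame.
    return base_frame
```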
- the additional information display unit 41 a is configured to display an interruption image in the video displayed on the display 39 and the video displayed on the client devices based on the second operation input.
- the interruption image is an example of the second additional information.
- the second operation input may be input to the server device 20 or the supporter computer 40 in accordance with operation of the controller 33 a or 33 b by the actor A 1 or operation of the controller 34 a or 34 b by the actor A 2 .
- FIG. 11 a shows an example of a display image 110 a displayed on the client device 10 a when the second operation input is received
- FIG. 11 b shows an example of a display image 110 b displayed on the display 39 when the second operation input is received.
- Both the display image 110 a and the display image 110 b include an interruption image 111 .
- the interruption image 111 is disposed in the top layer of the live distributed video.
- the interruption image 111 is displayed on the screen of the client device and the display 39 on which the video is being played.
- the interruption image 111 is an image that is displayed in an emergency, instead of the video containing the normal virtual space and the character object, when an unexpected situation occurs during a live distribution of the video.
- the supporter is able to select the second operation icon 48 b on the supporter computer 40 , for example, when trouble occurs in motion control of the character object due to a failure of equipment used in the studio room R.
- the additional information display unit 41 a performs processing for displaying the interruption image 111 on each client device and the display 39 in response to the selection of the second operation icon 48 b .
- the additional information display unit 41 a may cause the interruption image 111 to be superimposed on the live-distributed video.
- the additional information display unit 41 a may transmit a control signal for displaying the interruption image 111 to the client device.
- the client device that received the control signal may perform a process to superimpose the interruption image 111 on the video being played or a process to display the interruption image 111 instead of the video being played.
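- The client-side handling can therefore take either of two forms: superimpose the interruption image on the video being played, or show the interruption image instead of the video. The schematic sketch below uses assumed player methods and message fields; it is not the disclosed protocol.

```python
def handle_control_signal(player, signal):
    """React to a control signal received from the server during playback.

    player: the client-side video player (methods here are assumed for illustration)
    signal: a dict such as {"type": "show_interruption", "mode": "overlay"}
    """
    if signal.get("type") != "show_interruption":
        return
    if signal.get("mode") == "overlay":
        # Keep playing the video and draw the interruption image in the top layer.
        player.add_overlay("interruption_image_111")
    else:
        # Stop showing the video and display only the interruption image.
        player.show_still("interruption_image_111")
```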
- the additional information display unit 41 a may cause the message 101 to be displayed on the display 39 such that the message 101 is superimposed on the interruption image 111 .
- the interruption image 111 that is displayed in an emergency is distributed to the client devices and the display 39 instead of continuing to distribute the video.
- the distributor can handle the situation while the interruption image 111 is displayed.
- when the third operation input is received, the additional information display unit 41 a is configured to generate a modified character object and to perform processing for distribution of a video that contains the modified character object.
- the third operation input may be input to the server device 20 or the supporter computer 40 in accordance with operation of the controller 33 a or 33 b by the actor A 1 or operation of the controller 34 a or 34 b by the actor A 2 .
- FIG. 12 a shows another example of a display image 120 a displayed on the client device 10 a
- FIG. 12 b shows an example of a display image 120 b displayed on the client device 10 a
- the display image 120 a of FIG. 12 a includes the character object 71 B
- the display image 120 b of FIG. 12 b includes a modified character object 171 B which will be described later.
- facial expression of the character object 71 B is controlled so as to change in synchronization with the change of the facial expression of the actor A 2 based on the face motion data of the actor A 2 .
- all or part of the portion of the character object that changes based on the face motion data may be referred to as a "face portion."
- motion control is performed such that the eyes of the character object 71 B move in synchronization with the eye motion of the actor A 2 , so the face portion 121 is set at a position that includes both eyes of the character object 71 B.
- the face portion 121 may be set to include the entire face of the character object 71 B.
- the image of the eyes of the character object 71 B displayed in the face portion 121 is generated by applying the face motion data to the model data.
- in contrast, the image of the eyes displayed in the face portion 121 in the display image 120 b of FIG. 12 b is a prepared image 122 b , which is prepared in advance, before the start of the video distribution, to be fitted in the face portion 121 .
- the prepared image 122 b may be stored in the storage 23 .
- the prepared image 122 b is an example of third additional information.
- the additional information display unit 41 a is configured to display the prepared image 122 b composited in the face portion 121 of the character object 71 B when the third operation input is received. For example, when the third operation input is received, the additional information display unit 41 a transmits a modification instruction to the animation generation unit 21 c , and the animation generation unit 21 c composites the prepared image 122 b to be fitted in the face portion 121 of the character object 71 B in accordance with the modification instruction from the additional information display unit 41 a instead of using the image generated based on the face motion data in order to generate an animation of the modified character object 171 B.
- a character object into which the prepared image 122 b , rather than the image generated based on the face motion data, is fitted may be referred to as a modified character object.
- the modified character object is an object in which a part of the character object generated by the animation generation unit 21 c is modified.
- the animation of the modified character object 171 B is generated by compositing the prepared image 122 b to be displayed in the face portion 121 of the character object 71 B generated by the animation generation unit 21 c .
- the additional information display unit 41 a may transmit a control signal for displaying the prepared image 122 b to the client device.
- the client device may perform a process to composite the prepared image 122 b to be displayed in the character object in the video being played.
- the process of applying the face motion data to the model data to change the facial expression of a character object imposes a high processing load on the processor. For this reason, the facial expression of the character object may fail to follow the facial expression of the actor in a timely manner. Since the character's voice and the character's motions other than the facial expression can follow the voice and body motions of the actor in a timely manner, a facial expression that fails to follow the actor's facial expression in a timely manner may give the viewing users a feeling of strangeness.
- the modified character object that incorporates the prepared image 122 b to be fitted therein is generated, and the video containing the modified character object is distributed. In this way, it is possible to prevent deterioration of the quality of the video caused by the fact that the facial expression of the character object fails to follow the facial expression of the actor.
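- One plausible way to realize this switch, given only as an assumption-level sketch and not as the disclosed implementation, is to select, per frame, between the face image rendered from the face motion data and the prepared image 122 b .

```python
def choose_face_image(face_motion_data, use_prepared, render_from_motion, prepared_image):
    """Pick the image to composite into the face portion 121 for the current frame.

    face_motion_data:   latest face motion data, or None if it is unavailable
    use_prepared:       True once the third operation input has been received
    render_from_motion: callable that applies face motion data to the model data
    prepared_image:     the prepared image 122b stored before distribution started
    """
    if use_prepared or face_motion_data is None:
        # Skip the costly per-frame facial rendering and any visible lag
        # by fitting the prepared image into the face portion instead.
        return prepared_image
    return render_from_motion(face_motion_data)
```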
- the storage 23 may store a plurality of different types of images as candidates of the prepared image 122 b to be composited.
- the candidates of the prepared image 122 b are displayed on the display 44 of the supporter computer 40 , and an image selected by the supporter from among the candidates may be used as the prepared image 122 b to be composited.
- when the fourth operation input is received, the additional information display unit 41 a displays a blind object that is used to hide at least a part of the character object that wears the decorative object.
- the fourth operation input may be input to the server device 20 or the supporter computer 40 in accordance with operation of the controller 33 a or 33 b by the actor A 1 or operation of the controller 34 a or 34 b by the actor A 2 .
- the additional information display unit 41 a may transmit a control signal for displaying the blind object to the client device.
- the client device that received the control signal may display the blind object such that the blind object is superimposed on the video being played.
- the additional information display unit 41 a may display the blind object such that the blind object hides a part of the character object 71 B that is associated with the selected decorative object.
- a blind object 131 is displayed such that at least the torso of the character object 71 B associated with the T-shirt, which is the selected decorative object, is hidden by the blind object 131 .
- a process for prohibiting the distribution of a video to a banned user will be described.
- the distribution management unit 41 b performs a process for prohibiting distribution of the video to a specific viewing user or a specific client device.
- when the fifth operation icon 48 e is selected during distribution of a video, a list of users who are viewing the video is displayed.
- the distribution management unit 41 b performs a process for ceasing the distribution of the video to a user(s) selected by the supporter or the actor from the list.
- the distribution management unit 41 b may flag the selected user (distribution banned user) to identify the user in the distribution destination list of the video.
- the video distribution unit 21 e may distribute the video only to those users in the distribution destination list of the video who are not flagged as banned users. In this way, the distribution of the video to the banned user is stopped.
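- Put differently, the ban flag only marks the user in the distribution destination list, and the video distribution unit then skips flagged entries when it sends out the video. The minimal sketch below assumes simple dictionary-shaped entries; the actual data structures are not disclosed.

```python
distribution_destinations = [
    {"user_id": "viewer-001", "banned": False},
    {"user_id": "viewer-002", "banned": True},   # flagged by the distribution management unit
    {"user_id": "viewer-003", "banned": False},
]

def ban_user(destinations, user_id):
    """Flag a user selected by the supporter or the actor as a distribution banned user."""
    for entry in destinations:
        if entry["user_id"] == user_id:
            entry["banned"] = True

def active_recipients(destinations):
    """Users the video distribution unit keeps sending the video to."""
    return [entry["user_id"] for entry in destinations if not entry["banned"]]

ban_user(distribution_destinations, "viewer-003")
print(active_recipients(distribution_destinations))  # ['viewer-001']
```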
- the distribution management unit 41 b may be configured to make a user(s) selected by the supporter or the actor from the list of the users who are viewing the live distributed video inaccessible to some of the functions that are normally available to the users.
- the distribution management unit 41 b may be configured to prohibit a user who has posted an inappropriate message from posting a new message. The user who is prohibited from posting a message is allowed to continue viewing the video even after the prohibition, but is no longer allowed to post a message on the video.
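- Restricting only some functions can be modeled as per-user permission flags that are checked independently of the distribution itself. The sketch below is an assumed illustration of such a check and is not taken from the disclosure.

```python
# Assumed per-user permissions: viewing and posting are controlled independently.
user_permissions = {
    "viewer-001": {"can_view": True, "can_post": True},
    "viewer-002": {"can_view": True, "can_post": False},  # posting prohibited, viewing allowed
}

def accept_message(user_id: str, message: str) -> bool:
    """Accept a posted message only if the user is still allowed to post."""
    perms = user_permissions.get(user_id, {"can_view": False, "can_post": False})
    return bool(perms["can_post"]) and bool(message.strip())

accept_message("viewer-002", "hello")  # False: this user may keep watching but not post
```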
- FIG. 14 is a flow chart showing a flow of a video distribution process in one embodiment
- FIG. 15 is a flowchart of a process of displaying the first additional information in one embodiment
- FIG. 16 is a flowchart of a process of displaying the second additional information in one embodiment
- FIG. 17 is a flowchart of a process of displaying the third additional information in one embodiment
- FIG. 18 is a flowchart of a process of displaying the blind object in one embodiment
- FIG. 19 is a flow chart showing a flow of a process for prohibiting the video distribution to a banned user in one embodiment.
- in step S 11 , body motion data, which is a digital representation of the body motions of the actor A 1 and the actor A 2 , and face motion data, which is a digital representation of the facial motions (expressions) of the actor A 1 and the actor A 2 , are generated.
- generation of the body motion data is performed, for example, by the body motion data generation unit 21 a described above
- generation of the face motion data is performed, for example, by the face motion data generation unit 21 b described above.
- in step S 12 , the body motion data and the face motion data of the actor A 1 are applied to the model data for the actor A 1 to generate an animation of the first character object that moves in synchronization with the motions of the body and facial expression of the actor A 1 .
- the body motion data and the face motion data of the actor A 2 are applied to the model data for the actor A 2 to generate animation of the second character object that moves in synchronization with the motions of the body and facial expression of the actor A 2 .
- the generation of the animation is performed, for example, by the above-described animation generation unit 21 c.
- in step S 13 , a video including the animation of the first character object corresponding to the actor A 1 and the animation of the second character object corresponding to the actor A 2 is generated.
- the voices of the actor A 1 and the actor A 2 may be included in the video.
- the animation of the first character object and the animation of the second character object may be provided in the virtual space. Generation of the video is performed, for example, by the above-described video generation unit 21 d.
- next, in step S 14 , the video generated in step S 13 is distributed.
- the video is distributed to the client devices 10 a to 10 c and other client devices over the network 50 .
- the video may be distributed to the supporter computer 40 and/or may be projected on the screen S in the studio room R.
- the video is distributed continuously over a predetermined distribution period.
- the distribution period of the video may be set to, for example, 30 seconds, 1 minute, 5 minutes, 10 minutes, 30 minutes, 60 minutes, 120 minutes, and any other length of time.
- in step S 15 , it is determined whether an end condition for ending the distribution of the video is satisfied.
- the end condition is, for example, that the distribution ending time has come, that the supporter computer 40 has issued an instruction to end the distribution, or any other conditions. If the end condition is not satisfied, the steps S 11 to S 14 of the process are repeatedly executed, and distribution of the video including the animation synchronized with the motions of the actor A 1 and the actor A 2 is continued. When it is determined that the end condition is satisfied for the video, the distribution process of the video is ended.
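- The repetition of steps S 11 to S 14 until the end condition of step S 15 holds can be pictured as a simple loop. The sketch below only mirrors the flow of FIG. 14 ; every helper name is an assumed stand-in for the corresponding unit, not a disclosed API.

```python
import time

def run_live_distribution(capture_motion, generate_animation, compose_video,
                          distribute, end_condition, frame_interval=1 / 30):
    """Repeat steps S11 to S14 of FIG. 14 until the end condition of step S15 is met.

    capture_motion     -> step S11 (body motion data and face motion data)
    generate_animation -> step S12 (character animations for the actors)
    compose_video      -> step S13 (video containing the animations and voices)
    distribute         -> step S14 (send the video toward the client devices)
    end_condition      -> step S15 (end time reached, stop instruction, ...)
    """
    while not end_condition():
        motion = capture_motion()                 # S11
        animations = generate_animation(motion)   # S12
        video_frame = compose_video(animations)   # S13
        distribute(video_frame)                   # S14
        time.sleep(frame_interval)                # pace the loop roughly to the frame rate
```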
- in step S 21 , it is determined whether the first operation input has been made during the video live-distribution. For example, when the first operation icon 48 a is selected on the supporter computer 40 , it is determined that the first operation input has been made.
- when it is determined that the first operation input has been made, the display process of the first additional information proceeds to step S 22 .
- in step S 22 , the message 101 , which is an example of the first additional information, is shown on the video displayed on the display 39 , and the display process to display the first additional information ends. The message 101 is not displayed on the client device at this time.
- in step S 31 , it is determined whether the second operation input has been made during the video live-distribution. For example, when the second operation icon 48 b is selected on the supporter computer 40 , it is determined that the second operation input has been made.
- when it is determined that the second operation input has been made, the display process of the second additional information proceeds to step S 32 .
- in step S 32 , the interruption image 111 , which is an example of the second additional information, is displayed on the client device(s) and the display 39 , and the display process to display the second additional information ends.
- in step S 41 , it is determined whether the third operation input has been made during the video live-distribution. For example, when the third operation icon 48 c is selected on the supporter computer 40 , it is determined that the third operation input has been made.
- when it is determined that the third operation input has been made, the display process to display the third additional information proceeds to step S 42 .
- in step S 42 , the prepared image 122 b , which is an example of the third additional information, is displayed in the face portion of the character object, and the display process to display the third additional information ends.
- in step S 51 , it is determined whether selection of a decorative object that is to be attached to the character object appearing in the video has been made during the video live-distribution.
- the decorative object attached to the character object may be selected from among the candidates in the candidate list 23 d .
- the process to select the decorative object is performed, for example, by the above-mentioned decorative object selection unit 21 g.
- when it is determined in step S 51 that the decorative object has been selected, the process proceeds to step S 52 .
- in step S 52 , it is determined whether the fourth operation input has been made. For example, when the fourth operation icon 48 d is selected on the supporter computer 40 , it is determined that the fourth operation input has been made. When the fourth operation input has been made, the display process to display the blind object proceeds to step S 53 . In another embodiment, when the decorative object is selected from the candidate list 23 d , it may be determined that the fourth operation input has been made. In this case, step S 52 is omitted, and the display process to display the blind object proceeds from step S 51 to step S 53 .
- in step S 53 , the blind object 131 is displayed such that it hides at least a part of the character object.
- for example, when the decorative object selected in step S 51 is a T-shirt, the blind object 131 is added to the video so as to hide the torso of the character object, which is the body part associated with the T-shirt.
- in step S 61 , it is determined whether the fifth operation input has been made during a video live-distribution. For example, when the fifth operation icon 48 e is selected on the supporter computer 40 , it is determined that the fifth operation input has been made.
- in step S 62 , a user (banned user) to whom the video is prohibited from being distributed is designated. For example, a list of users who are watching the video is displayed on the display 44 of the supporter computer 40 , and a user selected by the supporter from this list is designated as the banned user.
- the process for designating the banned user is performed by, for example, the distribution management unit 41 b.
- in step S 63 , the video is distributed only to users who are not designated as the banned users among the users included in the distribution destination list of the video.
- the video distribution process is performed, for example, by the above-described video distribution unit 21 e.
- with the video distribution system 1 in the above embodiment, it is easy to handle an unexpected situation that occurs during a live distribution. For example, by displaying the message 101 on the display 39 without displaying the message 101 on the client device(s), it is possible to communicate instructions and messages regarding the live distributed video to the actor or other distribution staff member who can see the display 39 without affecting the video viewed by viewing users.
- the interruption image 111 displayed in an emergency is distributed to the client devices and the display 39 instead of continuing to distribute the video.
- the modified character object that incorporates the prepared image 122 b to be fitted therein is generated, and the video containing the modified character object is distributed. In this way, it is possible to prevent deterioration of the quality of the video caused by the fact that the facial expression of the character object fails to follow the facial expression of the actor.
- Embodiments of the invention are not limited to the above-described embodiments, and various modifications can be made without departing from the scope of the invention.
- shooting and production of the video to be distributed may be performed outside the studio room R.
- video shooting to generate a video to be distributed may be performed at an actor's or supporter's home.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Databases & Information Systems (AREA)
- Computer Graphics (AREA)
- Computer Security & Cryptography (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Engineering & Computer Science (AREA)
- Processing Or Creating Images (AREA)
Abstract
A video distribution system in one embodiment comprises one or more processors. In one aspect, the one or more processors are configured to live-stream a video containing an animation of a character object generated based on motions of an actor to a first client device used by a first user; and cause an interruption image to be displayed in an upper layer of the video based on a first operation input from the actor.
Description
- This application is a Continuation application of U.S. Ser. No. 17/405,599 (filed on Aug. 18, 2021) which is a Continuation application of U.S. Ser. No. 16/407,733 (filed on May 9, 2019) and claims the benefit of priority from Japanese Patent Application Serial No. 2018-90907 (filed on May 9, 2018) and Japanese Patent Application Serial No. 2018-224331 (filed on Nov. 30, 2018), the contents of which are hereby incorporated by reference in their entirety.
- The present disclosure relates to a video distribution system for live distribution of a video containing animation of a character object generated based on motions of an actor.
- Video distribution systems that produce an animation of a character object based on actor's motions and live distribute a video containing the animation of the character object have been known. Such a video distribution system is disclosed, for example, in Japanese Patent Application Publication No. 2015-184689 (“the '689 Publication”).
- During a live distribution of a video, unexpected situations such as a failure of a device that is used to generate animation may occur. In the above-mentioned conventional video distribution system, the live distribution of the video is carried out only by an actor and an operator of a camera. If the actor or the camera operator has to handle such an unexpected situation during the live distribution, his/her motions to work on the failure or the like may be reflected in motions of a character object, or inappropriate camerawork may occur. Consequently, the quality of the live distributed video may be adversely affected.
- It is an object of the present disclosure to provide a technical improvement which solves or alleviates at least part of the drawbacks of the prior art mentioned above. More specifically, one object of the disclosure is to provide a video distribution system that makes it easier to handle an unexpected situation that occurs during a live distribution.
- A video distribution system according to one aspect includes a distribution server configured to live distribute a video that contains an animation of a character object generated based on actor's motions to a first client device used by a first user, a first display device disposed at a position viewable by the actor and displaying the video, and a supporter computer causing first additional information to be displayed in the video displayed on the first display device based on a first operation input while causing the first additional information to be undisplayed on the first client device.
- In the video distribution system, the supporter computer may cause second additional information to be displayed in the video displayed on the first display device and in the video displayed on the first client device based on a second operation input.
- In the video distribution system, the supporter computer may cause a modified character object in which at least a part of the character object is modified to be generated based on a third operation input and cause the modified character object to be included in the video.
- In the video distribution system, the character object may have a face portion that is generated so as to move based on face motion data representing face motions of the actor, and the distribution server device may display third additional information in the face portion based on the third operation input.
- The video distribution system may further include a storage storing a decorative object displayed in association with the character object, and the supporter computer may display a blind object for hiding at least a part of the character object when the decorative object is displayed in the video based on a fourth operation input.
- In the video distribution system, the distribution server device may be configured to live distribute the video to a second client device used by a second user in addition to the first client device, the first client device is a first type device, the second client device is a second type device different from the first type device, and the supporter computer may include a second display device that displays a display image of the video displayed on the first client device and a display image of the video displayed on the second client device.
- In the video distribution system, the supporter computer may prohibit distribution of the video to the first user based on a fifth operation input.
- Another aspect of the invention relates to a method of distributing a video performed by executing computer readable instructions by one or more computer processors. The method includes live distributing a video that contains an animation of a character object generated based on actor's motions to a first client device used by a first user, displaying the video on a first display device disposed at a position viewable by the actor, and displaying first additional information in the video displayed on the first display device based on a first operation input while causing the first additional information to be undisplayed on the first client device.
- Yet another aspect of the invention relates to a computer-readable tangible non-transitory storage medium including a program executed by one or more computer processors. The computer program causes the one or more computer processors to perform live distributing a video that contains an animation of a character object generated based on actor's motions to a first client device used by a first user, displaying the video on a first display device disposed at a position viewable by the actor, and displaying first additional information in the video displayed on the first display device based on a first operation input while causing the first additional information to be undisplayed on the first client device.
- According to the aspects mentioned above, it is possible to provide a video distribution system that makes it easier to handle an unexpected situation that occurs during a live distribution.
- FIG. 1 is a block diagram illustrating a video distribution system according to one embodiment.
- FIG. 2 schematically illustrates an installation of a studio where production of a video that is distributed in the video distribution system of FIG. 1 is performed.
- FIG. 3 illustrates a possession list stored in the video distribution system of FIG. 1 .
- FIG. 4 illustrates a candidate list stored in the video distribution system of FIG. 1 .
- FIG. 5 illustrates an example of a video displayed on the client device 10 a in one embodiment. An animation of a character object is included in FIG. 5 .
- FIG. 6 illustrates an example of a video displayed on the client device 10 a in one embodiment. A normal object is included in FIG. 6 .
- FIG. 7 illustrates an example of a video displayed on the client device 10 a in one embodiment. A decorative object is included in FIG. 7 .
- FIG. 8 schematically illustrates an example of a decorative object selection screen for selecting a desired decorative object from among the decorative objects included in the candidate list.
- FIG. 9 illustrates an example of an image displayed on a display 44 of a supporter computer 40 in one embodiment.
- FIG. 10 illustrates an example of an image displayed on a display 39 of a supporter computer 40 and an example of first additional information in one embodiment.
- FIG. 11 a illustrates an example of a video displayed on the client device 10 a in one embodiment, and FIG. 11 a includes second additional information.
- FIG. 11 b illustrates an example of the video displayed on the client device 10 a in one embodiment, and FIG. 11 b includes the second additional information.
- FIG. 12 a illustrates an example of a video displayed on the client device 10 a in one embodiment, and FIG. 12 a includes third additional information.
- FIG. 12 b illustrates an example of the video displayed on the client device 10 a in one embodiment, and FIG. 12 b includes third additional information.
- FIG. 13 illustrates an example of a video displayed on the client device 10 a in one embodiment, and FIG. 13 includes a blind object.
- FIG. 14 is a flow chart showing a flow of a video distribution process in one embodiment.
- FIG. 15 is a flowchart of a process of displaying the first additional information in one embodiment.
- FIG. 16 is a flowchart of a process of displaying the second additional information in one embodiment.
- FIG. 17 is a flowchart of a process of displaying the third additional information in one embodiment.
- FIG. 18 is a flowchart of a process of displaying a blind object in one embodiment.
- FIG. 19 is a flow chart showing a flow of a process for prohibiting video distribution to a banned user in one embodiment.
- Various embodiments of the disclosure will be described hereinafter with reference to the accompanying drawings. Throughout the drawings, the same or similar elements are denoted by the same reference numerals.
- With reference to
FIGS. 1 to 4 , a video distribution system according to an embodiment will be described.FIG. 1 is a block diagram illustrating avideo distribution system 1 according to one embodiment,FIG. 2 schematically illustrates an installation of a studio where a video to be distributed in thevideo distribution system 1 is produced, andFIGS. 3 to 4 are for describing information stored in thevideo distribution system 1. - The
video distribution system 1 includesclient devices 10 a to 10 c, aserver device 20, astudio unit 30, and astorage 60. Theclient devices 10 a to 10 c, theserver device 20, and thestorage 60 are communicably interconnected over anetwork 50. Theserver device 20 is configured to distribute a video including an animation of a character, as described later. The character included in the video may be motion controlled in a virtual space. - The video may be distributed from the
server device 20 to each of theclient devices 10 a to 10 c. A first viewing user who is a user of theclient device 10 a, a second viewing user who is a user of the client device 10 b, and a third viewing user who is a user of theclient device 10 c are able to view the distributed video with their client device. Thevideo distribution system 1 may include less than three client devices, or may include more than three client devices. In this specification, when it is not necessary to distinguish theclient devices 10 a to 10 c and other client devices from each other, they may be collectively referred to as “client devices.” - The
client devices 10 a to 10 c are information processing devices such as smartphones. In addition to the smartphone, theclient devices 10 a to 10 c each may be a mobile phone, a tablet, a personal computer, an electronic book reader, a wearable computer, a game console, or any other information processing devices that are capable of playing videos. Each of theclient devices 10 a to 10 c may include a computer processor, a memory unit, a communication I/F, a display, a sensor unit including various sensors such as a gyro sensor, a sound collecting device such as a microphone, and a storage for storing various information. - Viewing users are able to input a message regarding the distributed video via an input interface of the
client devices 10 a to 10 c to post the message to theserver device 20. The message posted from each viewing user may be displayed such that it is superimposed on the video. In this way, interaction between viewing users is achieved. - In the illustrated embodiment, the
server device 20 includes acomputer processor 21, a communication I/F 22, and astorage 23. - The
computer processor 21 is a computing device which loads various programs realizing an operating system and various functions from thestorage 23 or other storage into a memory unit and executes instructions included in the loaded programs. Thecomputer processor 21 is, for example, a CPU, an MPU, a DSP, a GPU, any other computing device, or a combination thereof. Thecomputer processor 21 may be realized by means of an integrated circuit such as ASIC, PLD, FPGA, MCU, or the like. Although thecomputer processor 21 is illustrated as a single component inFIG. 1 , thecomputer processor 21 may be a collection of a plurality of physically separate computer processors. In this specification, a program or instructions included in the program that are described as being executed by thecomputer processor 21 may be executed by a single computer processor or distributed and executed by a plurality of computer processors. Further, a program or instructions included in the program executed by thecomputer processor 21 may be executed by a plurality of virtual computer processors. - The communication I/
F 22 may be implemented as hardware, firmware, or communication software such as a TCP/IP driver or a PPP driver, or a combination thereof. Theserver device 20 is able to transmit and receive data to and from other devices via the communication I/F 22. - The
storage 23 is an storage device accessed by thecomputer processor 21. Thestorage 23 is, for example, a magnetic disk, an optical disk, a semiconductor memory, or various other storage device capable of storing data. Various programs may be stored in thestorage 23. At least some of the programs and various data that may be stored in thestorage 23 may be stored in a storage (for example, a storage 60) that is physically separated from theserver device 20. - Most of components of the
studio unit 30 are disposed, for example, in a studio room R shown inFIG. 2 . As illustrated inFIG. 2 , an actor A1 and an actor A2 give performances in the studio room R. Thestudio unit 30 is configured to detect motions and expressions of the actor A1 and the actor A2, and to output the detection result information to theserver device 20. - Both the actor A1 and the actor A2 are objects whose motions and expressions are captured by a sensor group provided in the
studio unit 30, which will be described later. The actor A1 and the actor A2 are, for example, humans, animals, or moving objects that give performances. The actor A1 and the actor A2 may be, for example, autonomous robots. The number of actors in the studio room R may be one or three or more. - The
studio unit 30 includes sixmotion sensors 31 a to 31 f attached to the actor A1, acontroller 33 a held by the left hand of the actor A1, acontroller 33 b held by the right hand of the actor A1, and acamera 37 a attached to the head of the actor A1 via anattachment 37 b. Thestudio unit 30 also includes sixmotion sensors 32 a to 32 f attached to the actor A2, acontroller 34 a held by the left hand of the actor A2, acontroller 34 b held by the right hand of the actor A2, and acamera 38 a attached to the head of the actor A2 via anattachment 38 b. A microphone for collecting audio data may be provided to each of theattachment 37 b and theattachment 38 b. The microphone can collect speeches of the actor A1 and the actor A2 as voice data. The microphones may be wearable microphones attached to the actor A1 and the actor A2 via theattachment 37 b and theattachment 38 b. Alternatively the microphones may be installed on the floor, wall or ceiling of the studio room R. In addition to the components described above, thestudio unit 30 includes abase station 35 a, abase station 35 b, a trackingsensor 36 a, a trackingsensor 36 b, and adisplay 39. Asupporter computer 40 is installed in a room next to the studio room R, and these two rooms are separated from each other by a glass window. Theserver device 20 may be installed in the same room as the room in which thesupporter computer 40 is installed. - The
motion sensors 31 a to 31 f and themotion sensors 32 a to 32 f cooperate with thebase station 35 a and thebase station 35 b to detect their position and orientation. In one embodiment, thebase station 35 a and thebase station 35 b are multi-axis laser emitters. Thebase station 35 a emits flashing light for synchronization and then emits a laser beam about, for example, a vertical axis for scanning. Thebase station 35 a emits a laser beam about, for example, a horizontal axis for scanning. Each of themotion sensors 31 a to 31 f and themotion sensors 32 a to 32 f may be provided with a plurality of optical sensors for detecting incidence of the flashing lights and the laser beams from thebase station 35 a and thebase station 35 b, respectively. Themotion sensors 31 a to 31 f and themotion sensors 32 a to 32 f each may detect a time difference between an incident timing of the flashing light and an incident timing of the laser beam, time when each optical sensor receives the light and or beam, an incident angle of the laser light detected by each optical sensor, and any other information as necessary. Themotion sensors 31 a to 31 f and themotion sensors 32 a to 32 f may be, for example, Vive Trackers provided by HTC CORPORATION. Thebase station 35 a and thebase station 35 b may be, for example, base stations provided by HTC CORPORATION. Three or more base stations may be provided. The position of the base station may be changed as appropriate. For example, in addition to or instead of the base station disposed at the upper corner of the space to be detected by the tracking sensor, a pair of the base stations may be disposed at a upper position and a lower position close to the floor. - Detection information (hereinafter may also be referred to as “tracking information”) obtained as a result of detection performed by each of the
motion sensors 31 a to 31 f and themotion sensors 32 a to 32 f is transmitted to theserver device 20. The detection result information may be wirelessly transmitted to theserver device 20 from each of themotion sensors 31 a to 31 f and themotion sensors 32 a to 32 f. Since thebase station 35 a and thebase station 35 b emit flashing light and a laser light for scanning at regular intervals, the tracking information of each motion sensor is updated at each interval. The position and the orientation of each of themotion sensors 31 a to 31 f and themotion sensors 32 a to 32 f may be calculated based on the tracking information detected by themotion sensors 31 a to 31 f and themotion sensors 32 a to 32 f. The position and the orientation of each of themotion sensors 31 a to 31 f and themotion sensors 32 a to 32 f may be calculated based on, for example, the tracking information in theserver device 20. - In the illustrated embodiment, the six
motion sensors 31 a to 31 f are mounted on the actor A. Themotion sensors motion sensors 31 a to 31 f may each be attached to the actor A1 via an attachment. The sixmotion sensors 32 a to 32 f are mounted on the actor A2. Themotion sensors 32 a to 32 f may be attached to the actor A2 at the same positions as themotion sensors 31 a to 31 f. Themotion sensors 31 a to 31 f and themotion sensors 32 a to 32 f shown inFIG. 2 are merely an example. Themotion sensors 31 a to 31 f may be attached to various parts of the body of the actor A1, and themotion sensors 32 a to 32 f may be attached to various parts of the body of the actor A2. The number of motion sensors attached to the actor A1 and the actor A2 may be less than or more than six. As described above, body movements of the actor A1 and the actor A2 are detected by detecting the position and the orientation of themotion sensors 31 a to 31 f and themotion sensors 32 a to 32 f attached to the body parts of the actor A1 and the actor A2. - In one embodiment, a plurality of infrared LEDs are mounted on each of the motion sensors attached to the actor A1 and the actor A2, and light from the infrared LEDs are sensed by infrared cameras provided on the floor and/or wall of the studio room R to detect the position and the orientation of each of the motion sensors. Visible light LEDs may be used instead of the infrared LEDs, and in this case light from the visible light LEDs may be sensed by visible light cameras to detect the position and the orientation of each of the motion sensors. As described above, a light emitting unit (for example, the infrared LED or visible light LED) may be provided in each of the plurality of motion sensors attached to the actor, and a light receiving unit (for example, the infrared camera or visible light camera) provided in the studio room R senses the light from the light emitting unit to detect the position and the orientation of each of the motion sensors.
- In one embodiment, a plurality of reflective markers may be used instead of the
motion sensors 31 a-31 f and the motion sensors 32 a-32 f. The reflective markers may be attached to the actor A1 and the actor A2 using an adhesive tape or the like. The position and orientation of each reflective marker can be estimated by capturing images of the actor A1 and the actor A2 to which the reflective markers are attached to generate captured image data and performing image processing on the captured image data. - The
controller 33 a and thecontroller 33 b supply, to theserver device 20, control signals that correspond to operation of the actor A1. Similarly, thecontroller 34 a and thecontroller 34 b supply, to theserver device 20, control signals that correspond to operation of the actor A2. - The tracking
sensor 36 a and the trackingsensor 36 b generate tracking information for determining configuration information of a virtual camera used for constructing a virtual space included in the video. The tracking information of the trackingsensor 36 a and the trackingsensor 36 b is calculated as the position in its three-dimensional orthogonal coordinate system and the angle around each axis. The position and orientation of the trackingsensor 36 a may be changed according to operation of the operator. The trackingsensor 36 a transmits the tracking information indicating the position and the orientation of the trackingsensor 36 a to the trackinginformation server device 20. Similarly, the position and the orientation of the trackingsensor 36 b may be set according to operation of the operator. The trackingsensor 36 b transmits the tracking information indicating the position and the orientation of the trackingsensor 36 b to the trackinginformation server device 20. The trackingsensors sensors - The
camera 37 a is attached to the head of the actor A1 as described above. For example, thecamera 37 a is disposed so as to capture an image of the face of the actor A1. Thecamera 37 a continuously captures images of the face of the actor A1 to obtain imaging data of the face of the actor A1. Similarly, thecamera 38 a is attached to the head of the actor A2. Thecamera 38 a is disposed so as to capture an image of the face of the actor A2 and continuously capture images of the face of the actor A2 to obtain captured image data of the face of the actor A2. Thecamera 37 a transmits the captured image data of the face of the actor A1 to theserver device 20, and thecamera 38 a transmits the captured image data of the face of the actor A2 to theserver device 20. Thecamera 37 a and thecamera 38 a may be 3D cameras capable of detecting the depth of a face of a person. - The
display 39 is configured to display information received from thesupporter computer 40. The information transmitted from thesupporter computer 40 to thedisplay 39 may include, for example, text information, image information, and various other information. Thedisplay 39 is disposed at a position where the actor A1 and the actor A2 are able to see thedisplay 39. Thedisplay 39 is an example of a first display device. - In the illustrated embodiment, the
supporter computer 40 is installed in a room next to the studio room R. Since the room in which the supporter computer 40 is installed and the studio room R are separated by a glass window, an operator of the supporter computer 40 (sometimes referred to as a "supporter" in the specification) is able to see the actor A1 and the actor A2. In the illustrated embodiment, supporters B1 and B2 are present in the room as the operators of the supporter computer 40. In the specification, the supporter B1 and the supporter B2 may be collectively referred to as the "supporter" when it is not necessary to distinguish them from each other. - The
supporter computer 40 may be configured to be capable of changing the setting(s) of the component(s) of thestudio unit 30 according to the operation by the supporter B1 and the supporter B2. Thesupporter computer 40 can change, for example, the setting of the scanning interval performed by thebase station 35 a and thebase station 35 b, the position or orientation of the trackingsensor 36 a and the trackingsensor 36 b, and various settings of other devices. At least one of the supporter B1 and the supporter B2 is able to input a message to thesupporter computer 40, and the input message is displayed on thedisplay 39. - The components and functions of the
studio unit 30 shown in FIG. 2 are merely an example. The studio unit 30 applicable to the invention may include various constituent elements that are not shown. For example, the studio unit 30 may include a projector. The projector is able to project a video distributed to the client device 10 a or another client device on the screen S. - Next, information stored in the
storage 23 in one embodiment will be described. In the illustrated embodiment, thestorage 23stores model data 23 a,object data 23 b, apossession list 23 c, acandidate list 23 d, and any other information required for generation and distribution of a video to be distributed. - The
model data 23 a is model data for generating animation of a character. The model data 23 a may be three-dimensional model data for generating three-dimensional animation, or may be two-dimensional model data for generating two-dimensional animation. The model data 23 a includes, for example, rig data (also referred to as "skeleton data") indicating a skeleton of a character, and surface data indicating the shape or texture of a surface of the character. The model data 23 a may include two or more different pieces of model data. The pieces of model data may each have different rig data, or may have the same rig data. The pieces of model data may have surface data different from each other or may have the same surface data. In the illustrated embodiment, in order to generate a character object corresponding to the actor A1 and a character object corresponding to the actor A2, the model data 23 a includes at least two types of model data different from each other. The model data for the character object corresponding to the actor A1 and the model data for the character object corresponding to the actor A2 may have, for example, the same rig data but different surface data from each other.
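As a minimal sketch of the relationships just described (hypothetical names; the disclosure does not prescribe any particular data layout), two pieces of model data may share rig data while differing in surface data:

```python
from dataclasses import dataclass
from typing import Any, Dict

@dataclass
class ModelData:
    """Model data for one character: skeleton ("rig") plus surface appearance."""
    rig_data: Dict[str, Any]      # skeleton definition, e.g. the bone hierarchy
    surface_data: Dict[str, Any]  # shape and texture of the character's surface

# Example: two character objects that share the same rig data but use
# different surface data, as in the embodiment for the actors A1 and A2.
shared_rig = {"bones": ["head", "torso", "left_arm", "right_arm", "left_leg", "right_leg"]}
model_for_a1 = ModelData(rig_data=shared_rig, surface_data={"texture": "character_a1.png"})
model_for_a2 = ModelData(rig_data=shared_rig, surface_data={"texture": "character_a2.png"})
```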
- The object data 23 b includes asset data used for constructing a virtual space in the video. The object data 23 b includes data for rendering a background of the virtual space in the video, data for rendering various objects displayed in the video, and data for rendering any other objects displayed in the video. The object data 23 b may include object position information indicating the position of an object in the virtual space. - In addition to the above, the
object data 23 b may include a gift object displayed in the video in response to a display request from viewing users of theclient devices 10 a to 10 c. The gift object may include an effect object, a normal object, and a decorative object. Viewing users are able to purchase a desired gift object. Moreover, an upper limit may be set for the number of objects that a viewing user is allowed to purchase or the amount of money that the viewing user is allowed to spend for objects. - The effect object is an object that affects the impression of the entire viewing screen of the distributed video, and is, for example, an object representing confetti. The object representing confetti may be displayed on the entire viewing screen, which can change the impression of the entire viewing screen before and after its display. The effect object may be displayed so as to overlap with the character object, but it is different from the decorative object in that it is not displayed in association with a specific portion of the character object.
- The normal object is an object functioning as a digital gift from a viewing user to an actor (for example, the actor A1 or the actor A2), for example, an object resembling a stuffed toy or a bouquet. In one embodiment, the normal object is displayed on the display screen of the video such that it does not contact the character object. In one embodiment, the normal object is displayed on the display screen of the video such that it does not overlap with the character object. The normal object may be displayed in the virtual space such that it overlaps with an object other than the character object. The normal object may be displayed so as to overlap with the character object, but it is different from the decorative object in that it is not displayed in association with a specific portion of the character object. In one embodiment, when the normal object is displayed such that it overlaps with the character object, the normal object may hide portions of the character object other than the head including the face of the character object but does not hide the head of the character object.
- The decorative object is an object displayed on the display screen in association with a specific part of the character object. In one embodiment, the decorative object displayed on the display screen in association with a specific part of the character object is displayed adjacent to the specific part of the character object on the display screen. In one embodiment, the decorative object displayed on the display screen in association with a specific part of the character object is displayed such that it partially or entirely covers the specific part of the character object on the display screen.
- The decorative object is an object that can be attached to a character object, for example, an accessory (such as a headband, a necklace, an earring, etc.), clothes (such as a T-shirt), a costume, and any other object which can be attached to the character object. The
object data 23 b corresponding to the decorative object may include attachment position information indicating which part of the character object the decorative object is associated with. The attachment position information of a decorative object may indicate to which part of the character object the decorative object is attached. For example, when the decorative object is a headband, the attachment position information of the decorative object may indicate that the decorative object is attached to the “head” of the character object. When the decorative object is a T-shirt, the attachment position information of the decorative object may indicate that the decorative object is attached to the “torso” of the character object. - A duration of time of displaying the gift objects may be set for each gift object depending on its type. In one embodiment, the duration of displaying the decorative object may be set longer than the duration of displaying the effect object and the duration of displaying the normal object. For example, the duration of displaying the decorative object may be set to 60 seconds, while the duration of displaying the effect object may be set to five seconds and the duration of displaying the normal object may be set to ten seconds.
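The gift object attributes described above can be summarized in a small sketch (hypothetical names and layout; the 5, 10, and 60 second durations follow the example in the preceding paragraph):

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class GiftType(Enum):
    EFFECT = "effect"          # affects the whole viewing screen, e.g. confetti
    NORMAL = "normal"          # e.g. a stuffed toy or a bouquet
    DECORATIVE = "decorative"  # attached to a specific part of a character object

@dataclass
class GiftObject:
    object_id: str
    gift_type: GiftType
    display_duration_sec: float             # e.g. 5 (effect), 10 (normal), 60 (decorative)
    attachment_part: Optional[str] = None   # attachment position information, decorative only

confetti = GiftObject("obj-effect-confetti", GiftType.EFFECT, 5.0)
stuffed_bear = GiftObject("obj-normal-bear", GiftType.NORMAL, 10.0)
headband = GiftObject("obj-deco-headband", GiftType.DECORATIVE, 60.0, attachment_part="head")
```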
- The
possession list 23 c is a list showing gift objects possessed by viewing users of a video. An example of thepossession list 23 c is shown inFIG. 3 . As illustrated, in thepossession list 23 c, an object ID for identifying a gift object possessed by a viewing user is stored in association with account information of the viewing user (for example, user ID of the viewing user). The viewing users include, for example, the first to third viewing users of theclient devices 10 a to 10 c. - The
candidate list 23 d is a list of decorative objects for which a display request has been made from a viewing user. As will be described later, a viewing user who holds a decorative object(s) is able to make a request to display his/her possessed decorative objects. In the candidate list 23 d, object IDs for identifying decorative objects are stored in association with the account information of the viewing user who has made a request to display the decorative objects. The candidate list 23 d may be created for each distributor. The candidate list 23 d may be stored, for example, in association with distributor identification information that identifies a distributor(s) (the actor A1, the actor A2, the supporter B1, and/or the supporter B2).
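For illustration only (hypothetical structures, not part of the disclosure), the possession list 23 c and the candidate list 23 d can be pictured as simple mappings:

```python
from collections import defaultdict
from typing import DefaultDict, List, Tuple

# Possession list 23c: object IDs of gift objects held by each viewing user,
# keyed by the user's account information (user ID).
possession_list: DefaultDict[str, List[str]] = defaultdict(list)
possession_list["viewer-001"].append("obj-deco-headband")

# Candidate list 23d: decorative objects for which display requests are pending,
# keyed by distributor identification information; each entry pairs the
# requesting user's ID with the requested decorative object's ID.
candidate_list: DefaultDict[str, List[Tuple[str, str]]] = defaultdict(list)
candidate_list["distributor-A1"].append(("viewer-001", "obj-deco-headband"))
```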
- Functions realized by the computer processor 21 will now be described more specifically. The computer processor 21 functions as a body motion data generation unit 21 a, a face motion data generation unit 21 b, an animation generation unit 21 c, a video generation unit 21 d, a video distribution unit 21 e, a display request processing unit 21 f, a decorative object selection unit 21 g, and an object purchase processing unit 21 h by executing computer-readable instructions included in a distributed program. At least some of the functions that can be realized by the computer processor 21 may be realized by a computer processor other than the computer processor 21 of the video distribution system 1. For example, at least some of the functions realized by the computer processor 21 may be realized by a computer processor mounted on the supporter computer 40. - The body motion
data generation unit 21 a generates first body motion data of each part of the body of the actor A1 based on the tracking information obtained through detection performed by the correspondingmotion sensors 31 a to 31 f, and generates second body motion data, which is a digital representation of the position and the orientation of each part of the body of the actor A2, based on the tracking information obtained through detection performed by the correspondingmotion sensors 32 a to 32 f. In the specification, the first body motion data and the second body motion data may be collectively referred to simply as “body motion data.” The body motion data is serially generated with time as needed. For example, the body motion data may be generated at predetermined sampling time intervals. Thus, the body motion data can represent body movements of the actor A1 and the actor A2 in time series as digital data. In the illustrated embodiment, themotion sensors 31 a to 31 f and themotion sensors 32 a to 32 f are attached to the left and right limbs, the waist, and the head of the actor A1 and the actor A2, respectively. Based on the tracking information obtained through detection performed by themotion sensors 31 a to 31 f and themotion sensors 32 a to 32 f, it is possible to digitally represent the position and orientation of the substantially whole body of the actor A1 and the actor A2 in time series. The body motion data can define, for example, the position and rotation angle of bones corresponding to the rig data included in themodel data 23 a. - The face motion
data generation unit 21 b generates first face motion data, which is a digital representation of motions of the face of the actor A1, based on captured image data of thecamera 37 a, and generates second face motion data, which is a digital representation of motions of the face of the actor A2, based on captured image data of thecamera 38 a. In the specification, the first face motion data and the second face motion data may be collectively referred to simply as “face motion data.” The face motion data is serially generated with time as needed. For example, the face motion data may be generated at predetermined sampling time intervals. Thus, the face motion data can digitally represent facial motions (changes in facial expression) of the actor A1 and the actor A2 in time series. - The
animation generation unit 21 c is configured to apply the body motion data generated by the body motiondata generation unit 21 a and the face motion data generated by the face motiondata generation unit 21 b to predetermined model data included in themodel data 23 a in order to generate an animation of a character object that moves in a virtual space and whose facial expression changes. More specifically, theanimation generation unit 21 c may generate an animation of a character object moving in synchronization with the motions of the body and facial expression of the actor A1 based on the first body motion data and the first face motion data related to the actor A1, and generate an animation of a character object moving in synchronization with the motions of the body and facial expression of the actor A2 based on the second body motion data and the second face motion data related to the actor A2. In the specification, a character object generated based on the motion and expression of the actor A1 may be referred to as a “first character object”, and a character object generated based on the motion and expression of the actor A2 may be referred to as a “second character object.” - The
video generation unit 21 d constructs a virtual space using the object data 23 b, and generates a video that includes the virtual space, the animation of the first character object corresponding to the actor A1, and the animation of the second character object corresponding to the actor A2. The first character object is disposed in the virtual space so as to correspond to the position of the actor A1 with respect to the tracking sensor 36 a, and the second character object is disposed in the virtual space so as to correspond to the position of the actor A2 with respect to the tracking sensor 36 a. Thus, it is possible to change the position and the orientation of the first character object and the second character object in the virtual space by changing the position or the orientation of the tracking sensor 36 a. - In one embodiment, the
video generation unit 21 d constructs a virtual space based on tracking information of the trackingsensor 36 a. For example, thevideo generation unit 21 d determines configuration information (the position in the virtual space, a gaze position, a gazing direction, and the angle of view) of the virtual camera based on the tracking information of the trackingsensor 36 a. Moreover, thevideo generation unit 21 d determines a rendering area in the entire virtual space based on the configuration information of the virtual camera and generates moving image information for displaying the rendering area in the virtual space. - The
video generation unit 21 d may be configured to determine the position and the orientation of the first character object and the second character object in the virtual space, and the configuration information of the virtual camera based on tracking information of the trackingsensor 36 b instead of or in addition to the tracking information of the trackingsensor 36 a. - The
video generation unit 21 d is able to include voices of the actor A1 and the actor A2 collected by the microphone in thestudio unit 30 with the generated moving image. - As described above, the
video generation unit 21 d generates an animation of the first character object moving in synchronization with the movement of the body and facial expression of the actor A1, and an animation of the second character moving in synchronization with the movement of the body and facial expression of the actor A2. Thevideo generation unit 21 d then includes the voices of the actor A1 and the actor A2 with the animations respectively to generate a video for distribution. - The
video distribution unit 21 e distributes the video generated by thevideo generation unit 21 d. The video is distributed to theclient devices 10 a to 10 c and other client devices over thenetwork 50. Thevideo distribution unit 21 e refers to the list of users who have requested distribution of the video (a distribution destination list), and distributes the video to the client devices of the users included in the list. As will be described later, there may be a case where distribution of the video to a specific user is prohibited by an instruction from thesupporter computer 40. In this case, the video is distributed to users other than the user to whom distribution of the video is prohibited. The received video is reproduced in theclient devices 10 a to 10 c. - The video may be distributed to a client device (not shown) installed in the studio room R, and projected from the client device onto the screen S via a short focus projector. The video may also be distributed to the
supporter computer 40. In this way, the supporter B1 and the supporter B2 can check the viewing screen of the distributed video. - An example of the screen on which the video distributed from the
server device 20 to the client device 10 a and reproduced by the client device 10 a is displayed is illustrated in FIG. 5 . As shown in FIG. 5 , a display image 70 of the video distributed from the server device 20 is displayed on the display of the client device 10 a. The display image 70 displayed on the client device 10 a includes, in a virtual space, a character object 71A corresponding to the actor A1, a character object 71B corresponding to the actor A2, and a table object 72 a representing a table. The table object 72 a is not a gift object, but is one of the objects used for constructing the virtual space included in the object data 23 b. The character object 71A is generated by applying the first body motion data and the first face motion data of the actor A1 to the model data for the actor A1 included in the model data 23 a. The character object 71A is motion controlled based on the first body motion data and the first face motion data. The character object 71B is generated by applying the second body motion data and the second face motion data of the actor A2 to the model data for the actor A2 included in the model data 23 a. The character object 71B is motion controlled based on the second body motion data and the second face motion data. Thus, the character object 71A is controlled to move in the screen in synchronization with the motions of the body and facial expression of the actor A1, and the character object 71B is controlled to move in the screen in synchronization with the motions of the body and facial expression of the actor A2. - As described above, the video from the
server device 20 may be distributed to the supporter computer 40. The video distributed to the supporter computer 40 is displayed on the supporter computer 40 in the same manner as in FIG. 5 . The supporter B1 and the supporter B2 are able to change the configurations of the components of the studio unit 30 while viewing the video reproduced by the supporter computer 40. In one embodiment, when the supporter B1 and the supporter B2 wish to change the angle of the character object 71A and the character object 71B in the live distributed video, they can cause an instruction signal to change the orientation of the tracking sensor 36 a to be sent from the supporter computer 40 to the tracking sensor 36 a. The tracking sensor 36 a is able to change its orientation in accordance with the instruction signal. For example, the tracking sensor 36 a may be rotatably attached to a stand via a pivoting mechanism that includes an actuator disposed around the axis of the stand. When the tracking sensor 36 a receives an instruction signal instructing it to change its orientation, the actuator of the pivoting mechanism may be driven based on the instruction signal, and the tracking sensor 36 a may be turned by an angle according to the instruction signal. In one embodiment, the supporter B1 and the supporter B2 may cause the supporter computer 40 to transmit, to the tracking sensor 36 a and the tracking sensor 36 b, an instruction to use the tracking information of the tracking sensor 36 b instead of the tracking information from the tracking sensor 36 a. The tracking sensor 36 a and the tracking sensor 36 b may be configured and installed so as to be movable by the actor or supporter. As a result, the actor or supporter can hold and move the tracking sensor 36 a and the tracking sensor 36 b. - In one embodiment, when the supporter B1 and the supporter B2 determine that some instructions are needed for the actor A1 or the actor A2 while they are viewing the video reproduced on the
supporter computer 40, they may input a message indicating the instruction(s) into thesupporter computer 40 and the message may be output to thedisplay 39. For example, the supporter B1 and the supporter B2 can instruct the actor A1 or the actor A2 to change his/her standing position through the message displayed on thedisplay 39. - The display
request processing unit 21 f receives a display request to display a gift object from a client device of a viewing user, and performs processing according to the display request. Each viewing user is able to transmit a display request to display a gift object to theserver device 20 by operating his/her client device. For example, the first viewing user can transmit a display request to display a gift object to theserver device 20 by operating theclient device 10 a. The display request to display a gift object may include the user ID of the viewing user and the identification information (object ID) that identifies the object for which the display request is made. - As described above, the gift object may include the effect object, the normal object, and the decorative object. The effect object and the normal object are examples of the first object. In addition, a display request for requesting display of the effect object or the normal object is an example of a second display request.
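The branching described in the following paragraphs (effect and normal objects shown immediately, decorative objects queued in the candidate list) might look roughly like the sketch below. It builds on the hypothetical GiftObject/GiftType sketch above, and the show_in_video callable is likewise an assumption, not part of the disclosure.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class DisplayRequest:
    user_id: str    # user ID of the viewing user who sent the request
    object_id: str  # object ID identifying the requested gift object

def handle_display_request(req: DisplayRequest,
                           catalog: Dict[str, "GiftObject"],
                           candidate_list: List[Tuple[str, str]],
                           show_in_video: Callable[["GiftObject"], None]) -> None:
    """Dispatch a display request by gift type: effect/normal objects are
    displayed right away, decorative objects only enter the candidate list."""
    gift = catalog[req.object_id]
    if gift.gift_type in (GiftType.EFFECT, GiftType.NORMAL):
        show_in_video(gift)                                   # second-display-request path
    else:                                                     # decorative: first-display-request path
        candidate_list.append((req.user_id, req.object_id))
```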
- In one embodiment, when the display
request processing unit 21 f receives a display request to display a specific effect object from a viewing user, the display request processing unit 21 f performs a process, in response to the display request, to display the effect object for which the display request is made in the display image 70 of the video. For example, when a display request to display an effect object simulating confetti is made, the display request processing unit 21 f displays, in the display image 70, an effect object 73 simulating confetti based on the display request as shown in FIG. 6 . - In one embodiment, when the display request processing unit 21 f receives a display request to display a specific normal object from a viewing user, the display request processing unit 21 f performs a process, in response to the display request, to display the normal object for which the display request is made in the video 70. For example, when a display request to display a normal object simulating a stuffed bear is made, the display request processing unit 21 f displays a normal object 74 simulating a stuffed bear in the display image 70 based on the display request as shown in FIG. 6 . - The display request for the
normal object 74 may include a display position specifying parameter for specifying the display position of thenormal object 74 in the virtual space. In this case, the displayrequest processing unit 21 f displays thenormal object 74 at the position specified by the display position specifying parameter in the virtual space. For example, the display position specifying parameter may specify the upper position of the table object 72 a representing a table as the display position of thenormal object 74. A viewing user is able to specify the position where the normal object is to be displayed by using the display position specifying parameter while watching the layouts of thecharacter object 71A, thecharacter object 71B, the gift object, and other objects included in thevideo 70. - In one embodiment, the
normal object 74 may be displayed such that it moves within the display image 70 of the video. For example, the normal object 74 may be displayed such that it falls from the top to the bottom of the screen. In this case, the normal object 74 may be displayed in the display image 70 during the fall, that is, from when the object starts to fall to when it has fallen to the floor of the virtual space of the video 70, and may disappear from the display image 70 after it has fallen to the floor. A viewing user can view the falling normal object 74 from the start of the fall to the end of the fall. The moving direction of the normal object 74 in the screen can be specified as desired. For example, the normal object 74 may be displayed in the display image 70 so as to move from the left to the right, the right to the left, the upper left to the lower left, or any other direction of the video 70. The normal object 74 may move on various paths. For example, the normal object 74 can move on a linear path, a circular path, an elliptical path, a spiral path, or any other path. The viewing user may include, in the display request to display the normal object, a moving direction parameter that specifies the moving direction of the normal object 74 and/or a path parameter that specifies the path on which the normal object 74 moves, in addition to or in place of the display position specifying parameter. In one embodiment, among the effect objects and the normal objects, an object whose size in the virtual space is smaller than a reference size (for example, a piece of paper of the confetti of the effect object 73) may be displayed such that a part or all of the object overlaps with the character object 71A and/or the character object 71B. In one embodiment, among the effect objects and the normal objects, an object whose size in the virtual space is larger than the reference size (for example, the normal object 74 simulating the stuffed bear) may be displayed at a position where the object does not overlap with the character object. In one embodiment, among the effect objects and the normal objects, if an object whose size in the virtual space is larger than the reference size (for example, the normal object 74 simulating the stuffed bear) overlaps with the character object 71A and/or the character object 71B, the object is displayed behind the overlapping character object. - In one embodiment, when the display
request processing unit 21 f receives a display request to display a specific decorative object from a viewing user, the display request processing unit 21 f adds the decorative object for which the display request is made to the candidate list 23 d based on the display request. The display request to display the decorative object is an example of a first display request. For example, the display request processing unit 21 f may store identification information (object ID) identifying the specific decorative object for which the display request has been made from the viewing user in the candidate list 23 d in association with the user ID of the viewing user (see FIG. 4 ). When more than one display request to display a decorative object is made, for each of the display requests, the user ID of the viewing user who made the display request and the object ID of the decorative object for which the display request is made by the viewing user are associated with each other and stored in the candidate list 23 d. - In one embodiment, in response to one or more of the decorative objects included in the
candidate list 23 d being selected, the decorativeobject selection unit 21 g performs a process to display the selected decorative object in thedisplay image 70 of the video. In the specification, a decorative object selected from thecandidate list 23 d may be referred to as a “selected decorative object.” - The selection of the decoration object from the
candidate list 23 d is made, for example, by the supporter B1 and/or the supporter B2 who operate thesupporter computer 40. In one embodiment, thesupporter computer 40 displays a decorative object selection screen.FIG. 8 shows an example of a decorativeobject selection screen 80 in one embodiment. The decorativeobject selection screen 80 is displayed, for example, on the display of thesupporter computer 40. The decorativeobject selection screen 80 shows, for example, each of the plurality of decoration objects included in thecandidate list 23 d in a tabular form. As illustrated, the decorativeobject selection screen 80 in one embodiment includes afirst column 81 showing the type of the decoration object, asecond column 82 showing the image of the decoration object, and athird column 83 showing the body part of a character object associated with the decoration object. Further, on the decorativeobject selection screen 80,selection buttons 84 a to 84 c for selecting each decoration object are displayed. Thus, the decorativeobject selection screen 80 displays decorative objects that can be selected as the selected decorative object. - The supporters B1 and B2 are able to select one or more of the decorative objects shown on the decoration
object selection screen 80. For example, the supporter B1 and the supporter B2 are able to select a headband by selecting theselection button 84 a. When it is detected by the decorativeobject selection unit 21 g that the headband is selected, the displayrequest processing unit 21 f displays the selecteddecorative object 75 that simulates the selected headband on thedisplay screen 70 of the video, as shown inFIG. 7 . The selecteddecorative object 75 is displayed on thedisplay image 70 in association with a specific body part of a character object. The selecteddecorative object 75 may be displayed such that it contacts with the specific body part of the character object. For example, since the selecteddecorative object 75 simulating the headband is associated with the head of the character object, it is attached to the head of thecharacter object 71A as shown inFIG. 7 . The decorative object may be displayed on thedisplay screen 70 such that it moves along with the motion of the specific part of the character object. For example, when the head of thecharacter object 71A with the headband moves, the selecteddecorative object 75 simulating the headband moves in accordance with the motion of the head of thecharacter object 71A as if the headband is attached to the head of thecharacter object 71A. - The selected
decorative object 75 may be displayed on thedisplay screen 70 in association with thecharacter object 71B instead of thecharacter object 71A. Alternatively, the selecteddecorative object 75 may be displayed on thedisplay screen 70 in association with thecharacter object 71A and thecharacter object 71B. - In one embodiment, the decorative
object selection screen 80 may be configured to exclude information identifying a user who holds the decorative object or a user who has made a display request to display the decorative object. By configuring the decorativeobject selection screen 80 in this manner, it is possible to prevent a selector from giving preference to a particular user when selecting a decorative object. - In one embodiment, the decorative
object selection screen 80 may display, for each decorative object, information regarding the user who holds the decorative object or the user who made a display request for the decorative object. The information displayed for each decorative object may include, for example, the number of times that the user has made display requests for the decorative object so far and the number of times that the decorative object has actually been selected (for example, information indicating that the display request to display the decorative object has been made five times and the decorative object has been selected two times among the five times), the number of times that the user has viewed a video featuring the character object 71A and/or the character object 71B, the number of times that the user has viewed a video (regardless of whether the character object 71A and/or the character object 71B appears in the video or not), the amount of money which the user has spent on gift objects, the number of times that the user has purchased objects, the points possessed by the user that can be used in the video distribution system 1, the level of the user in the video distribution system 1, and any other information about the user who made the display request to display the respective decorative object. According to this embodiment, it is possible to select the decorative object based on the behavior and/or the viewing history, in the video distribution system 1, of the user who has made the display request for the decorative object. - In one embodiment, a constraint(s) may be imposed on the display of decorative objects to eliminate overlapping. For example, with regard to the
character object 71A, if a decorative object associated with a specific body part of the character object is already selected, selection of other decorative objects associated with the same body part may be prohibited. As shown in the embodiment of FIG. 7 , when the headband associated with the "head" of the character object 71A is already selected, the other decorative objects associated with the "head" (for example, a decorative object simulating a "hat" associated with the head) are not displayed on the decorative object selection screen 80, or a selection button for selecting the decorative object simulating the hat is made unselectable on the decorative object selection screen 80. According to this embodiment, it is possible to prevent decorative objects from being displayed so as to overlap each other on a specific part of the character object.
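A sketch of such a constraint (hypothetical, building on the GiftObject sketch above): decorative objects whose attachment part is already occupied are simply filtered out of what the selection screen offers.

```python
from typing import Iterable, List, Set

def selectable_decorations(candidates: Iterable["GiftObject"],
                           occupied_parts: Set[str]) -> List["GiftObject"]:
    """Exclude decorative objects whose attachment part (e.g. "head") is already
    taken by a previously selected decorative object, so that decorations never
    overlap on the same part of the character object."""
    return [obj for obj in candidates if obj.attachment_part not in occupied_parts]

# Example: with a headband already attached to the "head", a hat would be
# filtered out while a necklace (attachment part "neck") remains selectable.
```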
- The decorative object selection screen 80 may be displayed on another device instead of or in addition to the supporter computer 40. For example, the decorative object selection screen 80 may be displayed on the display 39 and/or the screen S in the studio room R. In this case, the actor A1 and the actor A2 are able to select a desired decorative object based on the decorative object selection screen 80 displayed on the display 39 or the screen S. Selection of the decorative object by the actor A1 and the actor A2 may be made, for example, by operating the controller 33 a, the controller 33 b, the controller 34 a, or the controller 34 b. - In one embodiment, in response to a request from a viewing user of the video, the object
purchase processing unit 21 h transmits, to a client device of the viewing user (for example, theclient device 10 a), purchase information of each of the plurality of gift objects that can be purchased in relation to the video. The purchase information of each gift object may include the type of the gift object (the effect object, the normal object, or the decorative object), the image of the gift object, the price of the gift object, and any other information necessary to purchase the gift object. The viewing user is able to select a gift object to purchase it considering the gift object purchase information displayed on theclient device 10 a. The selection of the gift objects which the viewing user purchases may be performed by operating theclient device 10 a. When a gift object to be purchased is selected by the viewing user, a purchase request for the gift object is transmitted to theserver device 20. The objectpurchase processing unit 21 h performs a payment process based on the purchase request. When the payment process is completed, the purchased gift object is held by the viewing user. In this case, the object ID of the purchased gift object is stored in thepossession list 23 c in association with the user ID of the viewing user who purchased the object. - Gift objects that can be purchased may be different for each video. The gift objects may be made purchasable in two or more different videos. That is, the purchasable gift objects may include a gift object unique to each video and a common gift object that can be purchased in the videos. For example, the effect object that simulates confetti may be the common gift object that can be purchased in the two or more different videos.
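A minimal sketch of that purchase flow (hypothetical names; the payment step is abstracted into a callable and is not part of the disclosure):

```python
from typing import Callable, Dict, List

def purchase_gift_object(user_id: str,
                         object_id: str,
                         run_payment: Callable[[str, str], bool],
                         possession_list: Dict[str, List[str]]) -> bool:
    """Perform the payment process for the requested gift object and, only when
    it completes, record the object in the viewing user's possession list 23c."""
    if not run_payment(user_id, object_id):        # payment failed or was cancelled
        return False
    possession_list.setdefault(user_id, []).append(object_id)
    return True
```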
- In one embodiment, when a user purchases an effect object while viewing a video, the purchased effect object may be displayed automatically in the video that the user is viewing in response to completion of the payment process to purchase the effect object. In the same manner, when a user purchases a normal object while viewing a video, the purchased normal object may be automatically displayed in the video that the user is viewing in response to completion of the payment process to purchase the normal object.
- In another embodiment, in response to completion of the payment process performed by the object
purchase processing unit 21 h for the effect object to be purchased, a notification of the completion of the payment process may be sent to theclient device 10 a, and a confirmation screen may be displayed to confirm whether the viewing user wants to make a display request to display the purchased effect object on theclient device 10 a. When the viewing user selects to make the display request for the purchased effect object, the display request to display the purchased effect object may be sent from the client device of the viewing user to the displayrequest processing unit 21 f, and the displayrequest processing unit 21 f may perform the process to display the purchased effect object in thedisplay image 70 of the video. Even when the normal object is to be purchased, a confirmation screen may be displayed on theclient device 10 a to confirm whether the viewing user wants to make a display request to display the purchased normal object, in the same manner as above. - Next, the
supporter computer 40 in one embodiment will be described. In one embodiment, thesupporter computer 40 includes acomputer processor 41, a communication I/F 42, astorage 43, adisplay 44, and aninput interface 45. - Similarly to the
computer processor 21, the computer processor 41 may be any computing device such as a CPU. Similarly to the communication I/F 22, the communication I/F 42 may be a driver, software, or a combination thereof for communicating with other devices. Similarly to the storage 23, the storage 43 may be a storage device capable of storing data, such as a magnetic disk. The display 44 may be a liquid crystal display, an organic EL display, an inorganic EL display, or any other display device capable of displaying images. The input interface 45 may be any input interface that receives input from the supporter, such as a mouse and/or a keyboard. The display 44 is an example of a second display device. -
FIG. 9 illustrates an example of an image displayed on thedisplay 44 of thesupporter computer 40. In the illustrated embodiment, adisplay image 46 displayed on thedisplay 44 includes afirst display area 47 a for displaying a display screen of a video in a first type client device (for example, a personal computer), asecond display area 47 b for displaying a display screen of the video in a second type client device (for example, a smart phone), and a plurality ofoperation icons 48. - By monitoring the video displayed in the
first display area 47 a, the supporter is able to check whether the video distributed to the first type client device is normally displayed. Similarly, by monitoring the video displayed in thesecond display area 47 b, the supporter is able to check whether the video distributed to the second type client device is normally displayed. - In the illustrated embodiment, the
operation icon 48 includes afirst operation icon 48 a, asecond operation icon 48 b, athird operation icon 48 c, afourth operation icon 48 d, and afifth operation icon 48 e. The supporter is able to select a desired operation icon via theinput interface 45. When thefirst operation icon 48 a, thesecond operation icon 48 b, thethird operation icon 48 c, thefourth operation icon 48 d, and thefifth operation icon 48 e are selected by the supporter, thesupporter computer 40 receives a first operation input, a second operation input, a third operation input, a fourth operation input, and a fifth operation input, respectively. - Functions realized by the
computer processor 41 will now be described more specifically. The computer processor 41 functions as an additional information display unit 41 a and a distribution management unit 41 b by executing computer-readable instructions included in a predetermined program. At least some of the functions that can be realized by the computer processor 41 may be realized by a computer processor other than the computer processor 41 of the video distribution system 1. At least some of the functions realized by the computer processor 41 may be realized by, for example, the computer processor 21. - The additional
information display unit 41 a is configured to add various additional information to the video in accordance with various operation inputs from the supporter via theinput interface 45. - In one embodiment, when the
supporter computer 40 receives the first operation input made by selecting the first operation icon 48 a, the additional information display unit 41 a is configured to display a message 101 in the video displayed on the display 39 based on the first operation input, while keeping the message 101 from being displayed on the client devices. The message 101 is an example of first additional information. -
FIG. 10 shows an example of a display image of the video displayed on thedisplay 39. Thedisplay image 100 shown inFIG. 10 includes themessage 101. Thedisplay image 100 is similar to the display image 70 (seeFIG. 5 ) of the video distributed and displayed on the client device of the viewing user at the same time except that thedisplay image 100 includes themessage 101. That is, thedisplay image 100 displayed on thedisplay 39 is configured by adding themessage 101 to the video being distributed. - The
message 101 is, for example, a text message input by a supporter. Themessage 101 may include image information instead of text format information or in addition to the text format information. Themessage 101 may include an instruction on an actor's performance, an instruction on an actor's statement, an instruction on an actor's position, and various other instructions or messages related to the live distributed video. - In one embodiment, when a supporter selects the
first operation icon 48 a on the supporter computer 40, a window that allows the supporter to input a message is displayed on the display 44 of the supporter computer 40. The supporter is able to input a message in a message input area of the window by operating the input interface 45. The inputted message is transmitted from the supporter computer 40 to the display 39. The display 39 and the client devices 10 a to 10 c display the video distributed by the video distribution unit 21 e. On the display 39, the message received from the supporter computer 40 is displayed as a message 101 in a predetermined area of the video distributed by the video distribution unit 21 e as shown in FIG. 10 . In contrast, in the video displayed on the client devices 10 a to 10 c, the message 101 or information corresponding thereto is not displayed. According to this embodiment, it is possible to communicate instructions and messages regarding the live distributed video to the actor or other distribution staff members who can see the display 39 without affecting the video viewed by viewing users of the client devices.
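In other words, the supporter's message is composited only into the copy of the video shown on the display 39. A hypothetical sketch (the frame object and its methods are assumptions used only for illustration):

```python
def compose_studio_frame(distributed_frame, supporter_message=None):
    """Build the image for the studio display 39: start from the same frame that
    is distributed to viewers and overlay the message 101 only on this copy,
    so that client devices never receive the message."""
    studio_frame = distributed_frame.copy()
    if supporter_message is not None:
        studio_frame.draw_text(supporter_message, area="message_area")
    return studio_frame
```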
- Subsequently, display of the second additional information will be described with reference to FIGS. 11 a and 11 b . In one embodiment, once the second operation icon 48 b is selected and the second operation input is received by the supporter computer 40, the additional information display unit 41 a is configured to display an interruption image in the video displayed on the display 39 and the video displayed on the client devices based on the second operation input. The interruption image is an example of the second additional information. The second operation input may be input to the server device 20 or the supporter computer 40 in accordance with operation of the controller 33 a, the controller 33 b, the controller 34 a, or the controller 34 b. -
FIG. 11 a shows an example of a display image 110 a displayed on the client device 10 a when the second operation input is received, and FIG. 11 b shows an example of a display image 110 b displayed on the display 39 when the second operation input is received. Both the display image 110 a and the display image 110 b include an interruption image 111. The interruption image 111 is disposed in the top layer of the live distributed video. Thus, the interruption image 111 is displayed on the screen of the client device and the display 39 on which the video is being played. Although there may be a slight visual difference between the interruption image 111 displayed on the client device and the interruption image 111 displayed on the display 39 due to a difference in the performance of the display devices, both are substantially the same image. - In the illustrated embodiment, the
interruption image 111 is an image that is displayed in emergency instead of displaying a video containing a normal virtual space and a character object when an unexpected situation occurs during a live distribution of the video. The supporter is able to select thesecond operation icon 48 b on thesupporter computer 40, for example, when trouble occurs in motion control of the character object due to a failure of equipment used in the studio room R. The additionalinformation display unit 41 a performs processing for displaying theinterruption image 111 on each client device and thedisplay 39 in response to the selection of thesecond operation icon 48 b. In response to reception of the second operation input, the additionalinformation display unit 41 a may cause theinterruption image 111 to be superimposed on the live-distributed video. In response to reception of the second operation input, the additionalinformation display unit 41 a may transmit a control signal for displaying theinterruption image 111 to the client device. Upon receipt of the control signal, the client device that received the control signal may perform a process to superimpose theinterruption image 111 on the video being played or a process to display theinterruption image 111 instead of the video being played. - When the first operation input is accepted by the selection of the
first operation icon 48 a while the interruption image 111 is displayed on the display 39, the additional information display unit 41 a may cause the display 39 to display the message 101 superimposed on the interruption image 111. - In the above embodiment, when an unexpected situation occurs during a live distribution of a video, the
interruption image 111 that is displayed in emergency is distributed to the client devices and the display 39 instead of continuing to distribute the video. The distributor can handle the situation while the interruption image 111 is displayed. - Display of the third additional information will now be described with reference to
FIGS. 12 a and 12 b . In one embodiment, when the third operation icon 48 c is selected and the third operation input is received by the supporter computer 40, the additional information display unit 41 a is configured to generate a modified character object and performs processing for distribution of a video that contains the modified character object. The third operation input may be input to the server device 20 or the supporter computer 40 in accordance with operation of the controller 33 a, the controller 33 b, the controller 34 a, or the controller 34 b. -
FIG. 12 a shows another example of a display image 120 a displayed on the client device 10 a, and FIG. 12 b shows an example of a display image 120 b displayed on the client device 10 a. The display image 120 a of FIG. 12 a includes the character object 71B, and the display image 120 b of FIG. 12 b includes a modified character object 171B which will be described later. As described above, the facial expression of the character object 71B is controlled so as to change in synchronization with the change of the facial expression of the actor A2 based on the face motion data of the actor A2. In this specification, all or part of the portion of the character object that changes based on the face motion data may be referred to as a "face portion." In the illustrated embodiment, motion control is performed such that the eyes of the character object 71B move in synchronization with the eye motion of the actor A2, so that the face portion 121 is set at a position that includes both eyes of the character object 71B. The face portion 121 may be set to include the entire face of the character object 71B. - In the
display image 120 a shown inFIG. 12 a , the image of the eyes of thecharacter object 71B displayed in theface portion 121 is generated by applying the face motion data to the model data. Whereas the image of the eyes displayed in theface portion 121 in thedisplay image 120 b ofFIG. 12 b is aprepared image 122 b that is prepared in advance before the start of the video distribution to be fitted in theface portion 121. Theprepared image 122 b may be stored in thestorage 23. Theprepared image 122 b is an example of third additional information. - In one embodiment, the additional
information display unit 41 a is configured to display the prepared image 122 b composited in the face portion 121 of the character object 71B when the third operation input is received. For example, when the third operation input is received, the additional information display unit 41 a transmits a modification instruction to the animation generation unit 21 c, and the animation generation unit 21 c composites the prepared image 122 b to be fitted in the face portion 121 of the character object 71B in accordance with the modification instruction from the additional information display unit 41 a, instead of using the image generated based on the face motion data, in order to generate an animation of the modified character object 171B. In this specification, a character object into which the prepared image 122 b is inserted, not the image generated based on the face motion data, may be referred to as a modified character object. The modified character object is an object in which a part of the character object generated by the animation generation unit 21 c is modified. In the above example, the animation of the modified character object 171B is generated by compositing the prepared image 122 b to be displayed in the face portion 121 of the character object 71B generated by the animation generation unit 21 c. In response to reception of the third operation input, the additional information display unit 41 a may transmit a control signal for displaying the prepared image 122 b to the client device. Upon receipt of the control signal, the client device may perform a process to composite the prepared image 122 b into the character object in the video being played.
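The substitution can be pictured as a simple switch between the normally rendered face and the prepared image 122 b (hypothetical sketch; render_face is an assumed callable standing in for the face-motion-driven rendering):

```python
from typing import Any, Callable

def face_portion_image(face_motion_sample: Any,
                       prepared_image: Any,
                       render_face: Callable[[Any], Any],
                       use_prepared_image: bool) -> Any:
    """Return the image to composite into the face portion 121: normally the
    face rendered from the face motion data, but the prepared image 122b when
    the third operation input asks for the modified character object."""
    if use_prepared_image:
        return prepared_image            # pre-rendered fallback face
    return render_face(face_motion_sample)
```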
- The process of applying the face motion data to the model data to change the facial expression of a character object imposes a high processing load on the processor. For this reason, the facial expression of the character object may fail to follow the facial expression of the actor in a timely manner. Since the character's voice and body motions other than the facial expression can follow the actor's voice and movements in a timely manner, a facial expression that fails to follow the actor's facial expression in a timely manner may give the viewing users a feeling of strangeness. In the above embodiment, when the facial expression of the character object fails to follow the facial expression of the actor in a timely manner, the modified character object that incorporates the prepared image 122 b fitted therein is generated, and the video containing the modified character object is distributed. In this way, it is possible to prevent deterioration of the quality of the video caused by the facial expression of the character object failing to follow the facial expression of the actor. - The
storage 23 may store a plurality of different types of images as candidates of the prepared image 122 b to be composited. The candidates of the prepared image 122 b are displayed on the display 44 of the supporter computer 40, and an image selected by the supporter from among the candidates may be used as the prepared image 122 b to be composited. - Display of a blind object will now be described with reference to
FIG. 13 . In one embodiment, when the fourth operation icon 48 d is selected and the fourth operation input is received by the additional information display unit 41 a, the additional information display unit 41 a displays a blind object that is used to hide at least a part of the character object that wears the decorative object. The fourth operation input may be input to the server device 20 or the supporter computer 40 in accordance with operation of the controller 33 a, the controller 33 b, the controller 34 a, or the controller 34 b. In response to reception of the fourth operation input, the additional information display unit 41 a may transmit a control signal for displaying the blind object to the client device. Upon receipt of the control signal, the client device that received the control signal may display the blind object such that the blind object is superimposed on the video being played. - In the embodiment of
FIG. 13 , it is assumed that a T-shirt is selected as the decoration object to be attached to thecharacter object 71B in theselection screen 80. The additionalinformation display unit 41 a may display the blind object such that the blind object hides a part of thecharacter object 71B that is associated with the selected decorative object. In thedisplay image 130 ofFIG. 13 , ablind object 131 is displayed such that at least the torso of thecharacter object 71B associated with the T-shirt, which is the selected decorative object, is hidden by theblind object 131. - A process for prohibiting the distribution of a video to a banned user will be described. In one embodiment, when the
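A hypothetical sketch of that behavior (the frame and region objects are assumptions): the region to cover is looked up from the attachment part of the selected decorative object.

```python
from typing import Dict

def apply_blind_object(frame, part_regions: Dict[str, object], selected_decoration) -> None:
    """Superimpose the blind object 131 over the character part associated with
    the selected decorative object (e.g. the torso for a T-shirt)."""
    part = selected_decoration.attachment_part      # e.g. "torso"
    region = part_regions[part]                     # screen region of that body part
    frame.draw_overlay(region, style="blind")       # cover it with the blind object
```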
fifth operation icon 48 e is selected and a fifth operation input is received by thedistribution management unit 41 b, thedistribution management unit 41 b performs a process for prohibiting distribution of the video to a specific viewing user or a specific client device. In one embodiment, when thefifth operation icon 48 e is selected during distribution of a video, a list of users who are viewing the video is displayed. Thedistribution management unit 41 b performs a process for ceasing the distribution of the video to a user(s) selected by the supporter or the actor from the list. For example, thedistribution management unit 41 b may flag the selected user (distribution banned user) to identify the user in the distribution destination list of the video. Thevideo distribution unit 21 e may distribute the video to users with no flag that is used to identify the banned user, among the users included in the distribution destination list of the video. In this way, the distribution of the video to the banned user is stopped. In another embodiment, thedistribution management unit 41 b may be configured to make a user(s) selected by the supporter or the actor from the list of the users who are viewing the live distributed video inaccessible to some of the functions that are normally available to the users. For example, thedistribution management unit 41 b may be configured to prohibit a user who has posted an inappropriate message from posting of a new message. The user who is prohibited from posting a message is allowed to continue viewing the video even after the prohibition, but is no longer allowed to post a message on the video. - Next, with reference to
FIGS. 14 to 19 , a video distribution process in one embodiment will be described.FIG. 14 is a flow chart showing a flow of a video distribution process in one embodiment,FIG. 15 is a flowchart of a process of displaying the first additional information in one embodiment,FIG. 16 is a flowchart of a process of displaying the second additional information in one embodiment,FIG. 17 is a flowchart of a process of displaying the third additional information in one embodiment,FIG. 18 is a flowchart of a process of displaying the blind object in one embodiment, andFIG. 19 is a flow chart showing a flow of a process for prohibiting the video distribution to a banned user in one embodiment. - First, a video distribution process in one embodiment will be described with reference to
FIG. 14 . In the video distribution process, it is assumed that the actor A1 and the actor A2 are giving performances in the studio room R. First, in step S11, body motion data, which is a digital representation of the body motions of the actor A1 and the actor A2, and face motion data, which is a digital representation of the facial motions (expressions) of the actor A1 and the actor A2, are generated. Generation of the body motion data is performed, for example, by the body motion data generation unit 21 a described above, and generation of the face motion data is performed, for example, by the face motion data generation unit 21 b described above. - Next, in step S12, the body motion data and the face motion data of the actor A1 are applied to the model data for the actor A1 to generate animation of the first character object that moves in synchronization with the motions of the body and facial expression of the actor A1. Similarly, the body motion data and the face motion data of the actor A2 are applied to the model data for the actor A2 to generate animation of the second character object that moves in synchronization with the motions of the body and facial expression of the actor A2. The generation of the animation is performed, for example, by the above-described
animation generation unit 21 c. - Next, in step S13, a video including the animation of the first character object corresponding to the actor A1 and the animation of the second character object corresponding to the actor A2 is generated. The voices of the actor A1 and the actor A2 may be included in the video. The animation of the first character object and the animation of the second character object may be provided in the virtual space. Generation of the video is performed, for example, by the above-described
video generation unit 21 d.
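- For readers who want a concrete picture of steps S11 to S13, the following is a minimal sketch of applying captured motion data to per-actor model data and composing one frame of the video. All class names, the joint and blend-shape representation, and the compose_frame function are assumptions introduced for illustration; the patent does not prescribe this structure.

```python
# Minimal sketch (assumed structure): body motion data drives the skeleton of a
# per-actor character model, face motion data drives its facial blend shapes,
# and the two animated character objects are composed into one video frame.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class MotionSample:
    joint_rotations: Dict[str, float]   # e.g. {"left_elbow": 0.42, ...}
    blend_shapes: Dict[str, float]      # e.g. {"mouth_open": 0.8, ...}

@dataclass
class CharacterModel:
    name: str
    pose: Dict[str, float] = field(default_factory=dict)
    face: Dict[str, float] = field(default_factory=dict)

def animate(model: CharacterModel, sample: MotionSample) -> CharacterModel:
    """Step S12: apply one actor's body and face motion data to that actor's model."""
    model.pose = dict(sample.joint_rotations)
    model.face = dict(sample.blend_shapes)
    return model

def compose_frame(characters: List[CharacterModel]) -> dict:
    """Step S13: assemble the animated character objects into a frame of the video."""
    return {"characters": [c.name for c in characters],
            "poses": {c.name: c.pose for c in characters}}
```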
- Next, the process proceeds to step S14 and the video generated in step S13 is distributed. The video is distributed to the client devices 10 a to 10 c and other client devices over the network 50. The video may be distributed to the supporter computer 40 and/or may be projected on the screen S in the studio room R. The video is distributed continuously over a predetermined distribution period. The distribution period of the video may be set to, for example, 30 seconds, 1 minute, 5 minutes, 10 minutes, 30 minutes, 60 minutes, 120 minutes, or any other length of time. - Subsequently, in step S15, it is determined whether an end condition for ending the distribution of the video is satisfied. The end condition is, for example, that the distribution ending time has come, that the
supporter computer 40 has issued an instruction to end the distribution, or that any other condition is met. If the end condition is not satisfied, steps S11 to S14 of the process are repeatedly executed, and distribution of the video including the animation synchronized with the motions of the actor A1 and the actor A2 is continued. When it is determined that the end condition is satisfied for the video, the distribution process of the video is ended.
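- The overall flow of FIG. 14 can be pictured as a simple generate-and-distribute loop. The sketch below assumes the animate and compose_frame helpers from the previous sketch and hypothetical capture_motion, distribute, and end_condition_met callbacks, so it illustrates the loop structure rather than the disclosed implementation.

```python
# Minimal sketch (assumed helpers): steps S11-S14 repeat until the end
# condition of step S15 is satisfied.
import time

def run_distribution(actors, models, clients, capture_motion, distribute,
                     end_condition_met, frame_interval=1 / 30):
    """Loop of FIG. 14: capture motion, animate, compose, distribute, check end condition."""
    while not end_condition_met():                        # step S15
        characters = []
        for actor, model in zip(actors, models):
            sample = capture_motion(actor)                # step S11 (assumed capture API)
            characters.append(animate(model, sample))     # step S12 (from the sketch above)
        frame = compose_frame(characters)                 # step S13 (from the sketch above)
        distribute(frame, clients)                        # step S14 (assumed network push)
        time.sleep(frame_interval)                        # pace the live distribution
```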
- Next, with reference to FIG. 15 , a description is given of a process of displaying the first additional information that is performed during a video live-distribution. In step S21, it is determined whether the first operation input has been made during the video live-distribution. For example, when the first operation icon 48 a is selected on the supporter computer 40, it is determined that the first operation input has been made. When the first operation input has been made, the display process of the first additional information proceeds to step S22. In step S22, the message 101, which is an example of the first additional information, is shown on the video displayed on the display 39, and the display process to display the first additional information ends. The message 101 is not displayed on the client device at this time. - With reference to
FIG. 16 , a process of displaying the second additional information during a video live-distribution will now be described. In step S31, it is determined whether the second operation input has been made during the video live-distribution. For example, when the second operation icon 48 b is selected on the supporter computer 40, it is determined that the second operation input has been made. When the second operation input has been made, the display process of the second additional information proceeds to step S32. In step S32, the interruption image 111, which is an example of the second additional information, is displayed on the client device(s) and the display 39, and the display process to display the second additional information ends. - With reference to
FIG. 17 , a process of displaying the third additional information during a video live-distribution will now be described. In step S41, it is determined whether the third operation input has been made during the video live-distribution. For example, when the third operation icon 48 c is selected on the supporter computer 40, it is determined that the third operation input has been made. When the third operation input has been made, the display process to display the third additional information proceeds to step S42. In step S42, the prepared image 122 b, which is an example of the third additional information, is displayed in the face portion of the character object, and the display process to display the third additional information ends.
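- The three display processes of FIGS. 15 to 17 differ mainly in where the additional information is shown; a compact way to picture this is a dispatch table keyed by the operation icon, as in the sketch below. The icon identifiers, the target names, and the show functions are illustrative assumptions, not the disclosed interfaces.

```python
# Minimal sketch (assumed names): the first operation input shows a message only
# on the studio display 39, the second shows an interruption image on both the
# clients and display 39, and the third fits a prepared image into the face
# portion of the character object.
def show_on_studio_display(content):           # assumed rendering hook
    print(f"display 39 <- {content}")

def show_on_clients_and_display(content):      # assumed rendering hook
    print(f"clients + display 39 <- {content}")

def fit_into_face_portion(content):            # assumed rendering hook
    print(f"character face portion <- {content}")

DISPATCH = {
    "48a": ("message 101", show_on_studio_display),                   # FIG. 15
    "48b": ("interruption image 111", show_on_clients_and_display),   # FIG. 16
    "48c": ("prepared image 122b", fit_into_face_portion),            # FIG. 17
}

def handle_operation_input(icon_id: str) -> None:
    """Steps S21/S31/S41: decide which additional information to show and where."""
    content, show = DISPATCH[icon_id]
    show(content)

# handle_operation_input("48a") would surface the message to the actor side only.
```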
- Next, with reference to FIG. 18 , a description is given of a display process to display the blind object during a video live-distribution. In step S51, it is determined whether selection of a decorative object that is to be attached to the character object appearing in the video has been made during the video live-distribution. As described above, the decorative object attached to the character object may be selected from among the candidates in the candidate list 23 d. The process to select the decorative object is performed, for example, by the above-mentioned decorative object selection unit 21 g. - When it is determined in step S51 that the decorative object has been selected, the process proceeds to step S52. In step S52, it is determined whether the fourth operation input has been made. For example, when the
fourth operation icon 48 d is selected on the supporter computer 40, it is determined that the fourth operation input has been made. When the fourth operation input has been made, the display process to display the blind object proceeds to step S53. In another embodiment, when the decorative object is selected from the candidate list 23 d, it may be determined that the fourth operation input has been made. In this case, step S52 is omitted, and the display process to display the blind object proceeds from step S51 to step S53. - In step S53, the
blind object 131 is displayed such that it hides at least a part of the character object. For example, when the decorative object selected in step S51 is a T-shirt, the blind object 131 is added to the video so as to hide the torso of the character object, which is the part associated with the T-shirt.
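- Concretely, steps S51 to S53 amount to mapping the selected decorative object to the body part it will occupy and masking that part until the decorative object is revealed. The mapping table and function names below are assumptions for illustration only.

```python
# Minimal sketch (assumed mapping): choose which part of the character object
# the blind object 131 should hide, based on the decorative object selected in
# step S51, and attach the blind object to the video frame in step S53.
PART_FOR_DECORATION = {        # assumed decorative-object -> body-part mapping
    "t_shirt": "torso",
    "hat": "head",
    "glasses": "face",
}

def add_blind_object(frame: dict, decorative_object: str) -> dict:
    """Steps S51-S53: hide the body part associated with the selected decorative object."""
    part = PART_FOR_DECORATION.get(decorative_object)
    if part is not None:
        # The blind object is drawn over the character object until the
        # decorative object itself is displayed.
        frame.setdefault("overlays", []).append({"blind_object": 131, "hides": part})
    return frame

# add_blind_object({"characters": ["71B"]}, "t_shirt")
# -> {'characters': ['71B'], 'overlays': [{'blind_object': 131, 'hides': 'torso'}]}
```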
- With reference to FIG. 19 , the process for prohibiting the distribution of a video to a specific user will be further described. In step S61, it is determined whether the fifth operation input has been made during a video live-distribution. For example, when the fifth operation icon 48 e is selected on the supporter computer 40, it is determined that the fifth operation input has been made. - When the fifth operation input has been made, the distribution prohibition process proceeds to step S62. In step S62, a user (banned user) to whom the video is prohibited from being distributed is designated. For example, a list of users who are watching the video is displayed on the
display 44 of the supporter computer 40, and a user selected by the supporter from this list is designated as the banned user. The process for designating the banned user is performed by, for example, the distribution management unit 41 b. - Next, the distribution prohibition process proceeds to step S63. In step S63, the video is distributed only to users who are not designated as banned users among the users included in the distribution destination list of the video. The video distribution process is performed, for example, by the above-described
video distribution unit 21 e.
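- A minimal sketch of steps S62 and S63 is given below: a flag recorded in the distribution destination list excludes the banned user from subsequent distribution. The list structure and function names are assumptions for illustration.

```python
# Minimal sketch (assumed structure): the distribution destination list is a
# mapping from user id to a "banned" flag; distribution skips flagged users.
def designate_banned_user(destinations: dict, user_id: str) -> None:
    """Step S62: flag the user selected by the supporter as a banned user."""
    destinations[user_id] = {"banned": True}

def distribute_to_allowed_users(frame: dict, destinations: dict, send) -> None:
    """Step S63: distribute the video only to users without the banned flag."""
    for user_id, entry in destinations.items():
        if not entry.get("banned", False):
            send(user_id, frame)   # 'send' is an assumed per-user delivery callback

# Usage:
# destinations = {"user-1": {}, "user-2": {}}
# designate_banned_user(destinations, "user-2")
# distribute_to_allowed_users(frame, destinations, send=lambda uid, f: None)
```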
- With the video distribution system 1 in the above embodiment, it is easy to handle an unexpected situation that occurs during a live distribution. For example, by displaying the message 101 on the display 39 without displaying the message 101 on the client device(s), it is possible to communicate instructions and messages regarding the live distributed video to the actor or other distribution staff members who can see the display 39, without affecting the video viewed by viewing users. - In another embodiment described above, when an unexpected situation occurs during a live distribution of a video, the
interruption image 111 displayed in an emergency is distributed to the client devices and the display 39 instead of continuing to distribute the video. - In yet another embodiment described above, when the facial expression of the character object fails to follow the facial expression of the actor in a timely manner, the modified character object that incorporates the
prepared image 122 b is generated, and the video containing the modified character object is distributed. In this way, it is possible to prevent deterioration of the quality of the video that would otherwise be caused by the facial expression of the character object failing to follow the facial expression of the actor. - In still yet another embodiment described above, even when some viewing users take inappropriate actions such as posting inappropriate messages, it is possible to stop the distribution of the video to such viewing users.
- Embodiments of the invention are not limited to the above-described embodiments, and various modifications can be made without departing from the scope of the invention. For example, shooting and production of the video to be distributed may be performed outside the studio room R. For example, video shooting to generate a video to be distributed may be performed at an actor's or supporter's home.
- The procedures described herein, particularly those described with reference to a flowchart, may have some of their constituent steps omitted, may have steps added that are not explicitly included among the constituent steps, and/or may have their steps reordered. A procedure subjected to such omission, addition, or reordering is also included in the scope of the present invention unless it departs from the purport of the present invention.
Claims (18)
1. A video distribution system comprising one or more processors, wherein the one or more processors is configured to:
live-stream a video containing an animation of a character object generated based on motions of an actor to a first client device used by a first user; and
cause an interruption image to be displayed in an upper layer of the video based on a first operation input from the actor.
2. The video distribution system of claim 1 , wherein the one or more processors is further configured to:
cause a modified character object formed by modifying at least a part of the character object to be generated based on a second operation input; and
cause the modified character object to be contained in the video.
3. The video distribution system of claim 2 ,
wherein the character object has a face portion that is generated so as to move based on face motion data representing face motions of the actor, and
wherein the one or more processors is further configured to display additional information in the face portion based on the second operation input.
4. The video distribution system of claim 1 , further comprising:
a storage storing a decorative object displayed in association with the character object,
wherein the one or more processors is further configured to display a blind object for hiding at least a part of the character object when the decorative object is displayed in the video based on a third operation input.
5. The video distribution system of claim 1 , wherein the one or more processors is further configured to prohibit distribution of the video to the first user based on a fourth operation input.
6. The video distribution system of claim 5 , wherein the distribution of the video to the first user is prohibited upon determination that the first user has posted an inappropriate message.
7. The video distribution system of claim 1 , wherein the video is live-streamed from a distribution server.
8. The video distribution system of claim 1 , wherein the video is displayed on a first display device disposed at a position viewable by the actor.
9. The video distribution system of claim 1 , wherein the interruption image is displayed by a supporter computer.
10. A video distribution method performed by one or more computer processors executing computer-readable instructions, the video distribution method comprising:
live-streaming a video containing an animation of a character object generated based on motions of an actor to a first client device used by a first user;
displaying the video on a first display device disposed at a position viewable by the actor; and
causing an interruption image to be displayed in an upper layer of the video based on a second operation input from the actor.
11. A video distribution system comprising one or more processors, wherein the one or more processors is configured to:
live-stream a video containing an animation of a character object generated based on motions of an actor to a first client device used by a first user; and
prohibit the first user from posting a message on the video based on a first operation input.
12. The video distribution system of claim 11 , wherein the one or more processors is further configured to:
display a distribution destination list indicating users viewing the video, in response to the first operation input, and
prohibit, upon selection of the first user from the distribution destination list, the first user from posting a message on the video.
13. The video distribution system of claim 11 , wherein the first user is prohibited from posting the message on the video upon determination that the first user has posted an inappropriate message.
14. The video distribution system of claim 11 , wherein the first user is allowed to view the video after the first user is prohibited from posting a message on the video.
15. The video distribution system of claim 11 , wherein the one or more processors is further configured to cause a modified character object formed by modifying at least a part of the character object to be generated based on a second operation input and cause the modified character object to be contained in the video.
16. The video distribution system of claim 11 ,
wherein the character object has a face portion that is generated so as to move based on face motion data representing face motions of the actor, and
wherein the one or more processors is further configured to display additional information in the face portion based on a third operation input.
17. The video distribution system of claim 8 , further comprising:
a storage storing a decorative object displayed in association with the character object,
wherein the one or more processors is further configured to display a blind object for hiding at least a part of the character object when the decorative object is displayed in the video based on a fourth operation input.
18. A video distribution method performed by one or more computer processors executing computer-readable instructions, the video distribution method comprising:
live-streaming a video containing an animation of a character object generated based on motions of an actor to a first client device used by a first user; and
prohibiting the first user from posting a message on the video based on a first operation input.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US18/457,056 US20230412897A1 (en) | 2018-05-09 | 2023-08-28 | Video distribution system for live distributing video containing animation of character object generated based on motion of actors |
Applications Claiming Priority (7)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2018090907A JP6446154B1 (en) | 2018-05-09 | 2018-05-09 | Video distribution system for live distribution of animation including animation of character objects generated based on actor movement |
JP2018-090907 | 2018-05-09 | ||
JP2018-224331 | 2018-11-30 | ||
JP2018224331A JP6548802B1 (en) | 2018-11-30 | 2018-11-30 | A video distribution system that delivers live video including animation of character objects generated based on the movement of actors |
US16/407,733 US11128932B2 (en) | 2018-05-09 | 2019-05-09 | Video distribution system for live distributing video containing animation of character object generated based on motion of actors |
US17/405,599 US11778283B2 (en) | 2018-05-09 | 2021-08-18 | Video distribution system for live distributing video containing animation of character object generated based on motion of actors |
US18/457,056 US20230412897A1 (en) | 2018-05-09 | 2023-08-28 | Video distribution system for live distributing video containing animation of character object generated based on motion of actors |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/405,599 Continuation US11778283B2 (en) | 2018-05-09 | 2021-08-18 | Video distribution system for live distributing video containing animation of character object generated based on motion of actors |
Publications (1)
Publication Number | Publication Date |
---|---|
US20230412897A1 true US20230412897A1 (en) | 2023-12-21 |
Family
ID=68464387
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/407,733 Active US11128932B2 (en) | 2018-05-09 | 2019-05-09 | Video distribution system for live distributing video containing animation of character object generated based on motion of actors |
US17/405,599 Active 2039-06-20 US11778283B2 (en) | 2018-05-09 | 2021-08-18 | Video distribution system for live distributing video containing animation of character object generated based on motion of actors |
US18/457,056 Pending US20230412897A1 (en) | 2018-05-09 | 2023-08-28 | Video distribution system for live distributing video containing animation of character object generated based on motion of actors |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/407,733 Active US11128932B2 (en) | 2018-05-09 | 2019-05-09 | Video distribution system for live distributing video containing animation of character object generated based on motion of actors |
US17/405,599 Active 2039-06-20 US11778283B2 (en) | 2018-05-09 | 2021-08-18 | Video distribution system for live distributing video containing animation of character object generated based on motion of actors |
Country Status (1)
Country | Link |
---|---|
US (3) | US11128932B2 (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10861223B2 (en) * | 2018-11-06 | 2020-12-08 | Facebook Technologies, Llc | Passthrough visualization |
JP6735398B1 (en) * | 2019-08-06 | 2020-08-05 | 株式会社 ディー・エヌ・エー | System, method and program for delivering live video |
US10950034B1 (en) | 2020-01-27 | 2021-03-16 | Facebook Technologies, Llc | Systems, methods, and media for generating visualization of physical environment in artificial reality |
CN112291631A (en) * | 2020-10-30 | 2021-01-29 | 北京达佳互联信息技术有限公司 | Information acquisition method, device, terminal and storage medium |
Family Cites Families (61)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US3733857A (en) * | 1968-09-12 | 1973-05-22 | K Kohl | Holding device for the instrumentalities of an automatic warp knitting machine |
JPS63132727A (en) | 1986-11-22 | 1988-06-04 | Zeniya Alum Seisakusho:Kk | Rotary machining device for press |
US5923337A (en) * | 1996-04-23 | 1999-07-13 | Image Link Co., Ltd. | Systems and methods for communicating through computer animated images |
US6141007A (en) * | 1997-04-04 | 2000-10-31 | Avid Technology, Inc. | Newsroom user interface including multiple panel workspaces |
US6070269A (en) * | 1997-07-25 | 2000-06-06 | Medialab Services S.A. | Data-suit for real-time computer animation and virtual reality applications |
JP2001137541A (en) | 1999-11-17 | 2001-05-22 | Square Co Ltd | Method of displaying object, game device and memory medium |
JP4047554B2 (en) | 2001-05-11 | 2008-02-13 | 日本放送協会 | Illumination method, illumination device, display method, display device, and photographing system |
JP2002344755A (en) | 2001-05-11 | 2002-11-29 | Ricoh Co Ltd | Color correction method |
JP2003091345A (en) | 2001-09-18 | 2003-03-28 | Sony Corp | Information processor, guidance presenting method, guidance presenting program and recording medium recording the guidance presenting program |
KR100456962B1 (en) | 2004-08-27 | 2004-11-10 | 엔에이치엔(주) | A method for providing a character incorporated with game item functions, and a system therefor |
US20080222262A1 (en) | 2005-09-30 | 2008-09-11 | Sk C&C Co. Ltd. | Digital Album Service System for Showing Digital Fashion Created by Users and Method for Operating the Same |
US20080052242A1 (en) | 2006-08-23 | 2008-02-28 | Gofigure! Llc | Systems and methods for exchanging graphics between communication devices |
EP1912175A1 (en) * | 2006-10-09 | 2008-04-16 | Muzlach AG | System and method for generating a video signal |
JP4673862B2 (en) | 2007-03-02 | 2011-04-20 | 株式会社ドワンゴ | Comment distribution system, comment distribution server, terminal device, comment distribution method, and program |
US20090019053A1 (en) | 2007-07-13 | 2009-01-15 | Yahoo! Inc. | Method for searching for and marketing fashion garments online |
US20130215116A1 (en) | 2008-03-21 | 2013-08-22 | Dressbot, Inc. | System and Method for Collaborative Shopping, Business and Entertainment |
US20090319601A1 (en) | 2008-06-22 | 2009-12-24 | Frayne Raymond Zvonaric | Systems and methods for providing real-time video comparison |
JP2010033298A (en) | 2008-07-28 | 2010-02-12 | Namco Bandai Games Inc | Program, information storage medium, and image generation system |
KR101671900B1 (en) | 2009-05-08 | 2016-11-03 | 삼성전자주식회사 | System and method for control of object in virtual world and computer-readable recording medium |
US8803889B2 (en) | 2009-05-29 | 2014-08-12 | Microsoft Corporation | Systems and methods for applying animations or motions to a character |
US20110025689A1 (en) | 2009-07-29 | 2011-02-03 | Microsoft Corporation | Auto-Generating A Visual Representation |
US9098873B2 (en) | 2010-04-01 | 2015-08-04 | Microsoft Technology Licensing, Llc | Motion-based interactive shopping environment |
US10805102B2 (en) | 2010-05-21 | 2020-10-13 | Comcast Cable Communications, Llc | Content recommendation system |
JP2012120098A (en) | 2010-12-03 | 2012-06-21 | Linkt Co Ltd | Information provision system |
US9354763B2 (en) | 2011-09-26 | 2016-05-31 | The University Of North Carolina At Charlotte | Multi-modal collaborative web-based video annotation system |
KR20130053466A (en) | 2011-11-14 | 2013-05-24 | 한국전자통신연구원 | Apparatus and method for playing contents to provide an interactive augmented space |
CN102595340A (en) | 2012-03-15 | 2012-07-18 | 浙江大学城市学院 | Method for managing contact person information and system thereof |
JP5962200B2 (en) | 2012-05-21 | 2016-08-03 | カシオ計算機株式会社 | Imaging apparatus and imaging processing method |
US20140013200A1 (en) | 2012-07-09 | 2014-01-09 | Mobitude, LLC, a Delaware LLC | Video comment feed with prioritization |
JP5571269B2 (en) | 2012-07-20 | 2014-08-13 | パナソニック株式会社 | Moving image generation apparatus with comment and moving image generation method with comment |
US20150082203A1 (en) | 2013-07-08 | 2015-03-19 | Truestream Kk | Real-time analytics, collaboration, from multiple video sources |
JP5726987B2 (en) | 2013-11-05 | 2015-06-03 | 株式会社 ディー・エヌ・エー | Content distribution system, distribution program, and distribution method |
JP6213920B2 (en) | 2013-12-13 | 2017-10-18 | 株式会社コナミデジタルエンタテインメント | GAME SYSTEM, CONTROL METHOD AND COMPUTER PROGRAM USED FOR THE SAME |
JP2015184689A (en) | 2014-03-20 | 2015-10-22 | 株式会社Mugenup | Moving image generation device and program |
US10332311B2 (en) | 2014-09-29 | 2019-06-25 | Amazon Technologies, Inc. | Virtual world generation engine |
US9473810B2 (en) * | 2015-03-02 | 2016-10-18 | Calay Venture S.á r.l. | System and method for enhancing live performances with digital content |
US9939887B2 (en) * | 2015-03-09 | 2018-04-10 | Ventana 3D, Llc | Avatar control system |
WO2016154149A1 (en) * | 2015-03-20 | 2016-09-29 | Twitter, Inc. | Live video stream sharing |
CN106034068A (en) | 2015-03-20 | 2016-10-19 | 阿里巴巴集团控股有限公司 | Method and device for private chat in group chat, client-side, server and system |
JP2017022555A (en) | 2015-07-10 | 2017-01-26 | 日本電気株式会社 | Relay broadcast system and control method thereof |
US10324522B2 (en) * | 2015-11-25 | 2019-06-18 | Jakob Balslev | Methods and systems of a motion-capture body suit with wearable body-position sensors |
WO2017159383A1 (en) | 2016-03-16 | 2017-09-21 | ソニー株式会社 | Information processing device, information processing method, program, and moving-image delivery system |
JP2016174941A (en) | 2016-05-23 | 2016-10-06 | 株式会社コナミデジタルエンタテインメント | Game system, control method used for the same, and computer program |
CA3027735A1 (en) | 2016-06-17 | 2017-12-21 | Walmart Apollo, Llc | Vector-based characterizations of products and individuals with respect to processing returns |
WO2017223210A1 (en) * | 2016-06-22 | 2017-12-28 | Proletariat, Inc. | Systems, methods and computer readable media for a viewer controller |
US10304244B2 (en) | 2016-07-08 | 2019-05-28 | Microsoft Technology Licensing, Llc | Motion capture and character synthesis |
JP6546886B2 (en) | 2016-09-01 | 2019-07-17 | 株式会社 ディー・エヌ・エー | System, method, and program for distributing digital content |
US10356340B2 (en) * | 2016-09-02 | 2019-07-16 | Recruit Media, Inc. | Video rendering with teleprompter overlay |
US10109073B2 (en) | 2016-09-21 | 2018-10-23 | Verizon Patent And Licensing Inc. | Feature tracking and dynamic feature addition in an augmented reality environment |
CN106550278B (en) | 2016-11-11 | 2020-03-27 | 广州华多网络科技有限公司 | Method and device for grouping interaction of live broadcast platform |
US10498794B1 (en) * | 2016-11-30 | 2019-12-03 | Caffeine, Inc. | Social entertainment platform |
US10645460B2 (en) * | 2016-12-30 | 2020-05-05 | Facebook, Inc. | Real-time script for live broadcast |
WO2018142494A1 (en) | 2017-01-31 | 2018-08-09 | 株式会社 ニコン | Display control system and display control method |
JP6178941B1 (en) | 2017-03-21 | 2017-08-09 | 株式会社ドワンゴ | Reaction selection device, reaction selection method, reaction selection program |
US10967255B2 (en) | 2017-05-26 | 2021-04-06 | Brandon Rosado | Virtual reality system for facilitating participation in events |
CN107248195A (en) * | 2017-05-31 | 2017-10-13 | 珠海金山网络游戏科技有限公司 | A kind of main broadcaster methods, devices and systems of augmented reality |
CN107493515B (en) * | 2017-08-30 | 2021-01-01 | 香港乐蜜有限公司 | Event reminding method and device based on live broadcast |
US20190102929A1 (en) * | 2017-10-03 | 2019-04-04 | StarChat Inc. | Methods and systems for mediating multimodule animation events |
KR102661019B1 (en) | 2018-02-23 | 2024-04-26 | 삼성전자주식회사 | Electronic device providing image including 3d avatar in which motion of face is reflected by using 3d avatar corresponding to face and method for operating thefeof |
JP6397595B1 (en) | 2018-04-12 | 2018-09-26 | 株式会社ドワンゴ | Content distribution server, content distribution system, content distribution method and program |
JP6382468B1 (en) | 2018-05-08 | 2018-08-29 | グリー株式会社 | Movie distribution system, movie distribution method, and movie distribution program for distributing movie including animation of character object generated based on movement of actor |
- 2019-05-09 US US16/407,733 patent/US11128932B2/en active Active
- 2021-08-18 US US17/405,599 patent/US11778283B2/en active Active
- 2023-08-28 US US18/457,056 patent/US20230412897A1/en active Pending
Also Published As
Publication number | Publication date |
---|---|
US11778283B2 (en) | 2023-10-03 |
US20210385555A1 (en) | 2021-12-09 |
US11128932B2 (en) | 2021-09-21 |
US20190349648A1 (en) | 2019-11-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US12015818B2 (en) | Video distribution system for live distributing video containing animation of character object generated based on motion of distributor user, video distribution method, and storage medium storing thereon video distribution program | |
JP6382468B1 (en) | Movie distribution system, movie distribution method, and movie distribution program for distributing movie including animation of character object generated based on movement of actor | |
US20210368228A1 (en) | Video distribution system, video distribution method, and storage medium storing video distribution program for distributing video containing animation of character object generated based on motion of actor | |
JP6420930B1 (en) | Movie distribution system, movie distribution method, and movie distribution program for distributing movie including animation of character object generated based on movement of actor | |
JP6431233B1 (en) | Video distribution system that distributes video including messages from viewing users | |
US11778283B2 (en) | Video distribution system for live distributing video containing animation of character object generated based on motion of actors | |
US12137274B2 (en) | Video distribution system distributing video that includes message from viewing user | |
JP6446154B1 (en) | Video distribution system for live distribution of animation including animation of character objects generated based on actor movement | |
JP7460059B2 (en) | A video distribution system for live streaming videos including animations of character objects generated based on the movements of actors | |
JP6548802B1 (en) | A video distribution system that delivers live video including animation of character objects generated based on the movement of actors | |
JP6847138B2 (en) | A video distribution system, video distribution method, and video distribution program that distributes videos containing animations of character objects generated based on the movements of actors. | |
JP6498832B1 (en) | Video distribution system that distributes video including messages from viewing users | |
JP6592214B1 (en) | Video distribution system that distributes video including messages from viewing users | |
JP6431242B1 (en) | Video distribution system that distributes video including messages from viewing users | |
JP6764442B2 (en) | Video distribution system, video distribution method, and video distribution program that distributes videos including animations of character objects generated based on the movements of actors. | |
JP2019198057A (en) | Moving image distribution system, moving image distribution method and moving image distribution program distributing moving image including animation of character object generated based on actor movement |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment | Owner name: GREE, INC., JAPAN; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: WATANABE, MASASHI; KURITA, YASUNORI; REEL/FRAME: 064725/0782; Effective date: 20190311
STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION