WO2018018076A1 - Creating videos with facial expressions
- Publication number: WO2018018076A1 (PCT application PCT/AU2017/050763)
- Authority: WO (WIPO, PCT)
Classifications
- G06T13/00: Animation
- G06V10/40: Extraction of image or video features
- G06V40/168: Feature extraction; face representation
- G06V40/171: Local features and components; facial parts; occluding parts, e.g. glasses; geometrical relationships
- G06V40/174: Facial expression recognition
- G06V40/175: Static expression
- G06T2200/24: Indexing scheme for image data processing or generation involving graphical user interfaces [GUIs]
Definitions
- a video in the present disclosure consists of a sequence of images, i.e., "frames". Each frame differs in content from its adjacent frames (i.e., previous and next frames) by a small amount in terms of appearance.
- by displaying the sequence of frames at a high rate (e.g. 30 frames per second), a viewer of the sequence is given the impression of viewing a "movie clip".
- a frame of the video includes at least two "layers" of visual content.
- One or more layers represent the non-replaceable content.
- One or more layers represent replaceable characters.
- a replaceable character may be replaced with user-supplied content according to the method(s) as described in the present disclosure. All layers are composited together in order to produce a processed frame, or an output frame, associated with the frame.
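To make the compositing step concrete, the following is a minimal sketch of how a replaceable-character layer could be composited over a non-replaceable background layer to produce an output frame. The RGBA representation and the standard "over" operator are illustrative assumptions; the disclosure does not prescribe a particular compositing method.

```python
# A hedged sketch of compositing layers into an output frame.
# Layers are assumed to be H x W x 4 RGBA arrays with values in [0, 1].
import numpy as np

def composite_over(foreground: np.ndarray, background: np.ndarray) -> np.ndarray:
    """Alpha-composite `foreground` over `background` using the "over" operator."""
    fg_rgb, fg_a = foreground[..., :3], foreground[..., 3:4]
    bg_rgb, bg_a = background[..., :3], background[..., 3:4]
    out_a = fg_a + bg_a * (1.0 - fg_a)
    out_rgb = (fg_rgb * fg_a + bg_rgb * bg_a * (1.0 - fg_a)) / np.clip(out_a, 1e-6, None)
    return np.concatenate([out_rgb, out_a], axis=-1)

# A 4x4 opaque white background layer and a small red "character" patch layer:
background_layer = np.ones((4, 4, 4))
character_layer = np.zeros((4, 4, 4))
character_layer[1:3, 1:3] = [1.0, 0.0, 0.0, 1.0]
output_frame = composite_over(character_layer, background_layer)
```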
- the video may also include one or more audio tracks.
- typically, all but the replaceable character audio content occupies one single audio track. Additional audio tracks are used to store audio content for each replaceable character.
- This per-character content is then further subdivided into individual elements, each representing a "sound bite" (e.g. a short voiceover speech element, or a noise element) for that character in that specific story.
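As an illustration of this audio layout, the sketch below organises one shared track plus one track per replaceable character, subdivided into individual sound bites. All class and field names are assumptions made for illustration only.

```python
# A hedged sketch of the audio organisation: one shared track for all
# non-replaceable audio, plus per-character tracks split into "sound bites".
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class SoundBite:
    start_time: float    # seconds from the start of the story
    duration: float      # length of this voiceover or noise element
    samples: bytes       # encoded audio data

@dataclass
class StoryAudio:
    shared_track: bytes                                        # all non-replaceable audio
    character_tracks: Dict[str, List[SoundBite]] = field(default_factory=dict)

audio = StoryAudio(shared_track=b"...music and effects...")
audio.character_tracks["Boy"] = [
    SoundBite(start_time=3.2, duration=1.5, samples=b"...user-recorded line..."),
]
```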
- an original video document contains only original, or "reference”, material.
- the replaceable reference content consists of some or all of the graphical elements for each replaceable character, saved on a frame-by-frame basis. At a minimum, this content consists of the replaceable character's head or face as it appears in each frame of the reference video content.
- Replaceable reference content may also include elements such as hands, feet, etc. where it may be desirable to offer the users a selectable set of display options (e.g. skin colour).
- the non-replaceable visual content may consist of graphical assets, arranged as sets of assets on a per-frame basis in an animation sequence that are normally used to generate video content, but with all replaceable content removed.
- This form of non-replaceable visual content is packaged as a number of asset layers per frame which, when combined with the associated per-frame replaceable content, forms a complete sequence of video frames.
- the non-replaceable content may alternatively consist of standard video content, with replaceable reference content masked (or removed) from each video frame.
- the replaceable character audio content is extracted from the original video content.
- the video is deconstructed on a frame-by-frame basis, either in real time or as a separate pre-processing stage in which the frames are stored in a database. In either case, the deconstructed video frames are subsequently combined with the associated per-frame replaceable content, forming a complete sequence of video frames.
- a user provides material for all replaceable content (i.e., audio and visual) for a given story.
- for the audio material, the user typically provides their own "sound bite" (voiceover, etc.) for each element in a replaceable character's audio track.
- for the visual material, the user produces a facial expression identified by a facial expression identifier, or mimics the original replaceable character's video sequence, in particular a facial expression of the original replaceable character in a key frame at a given time instant.
- the feature of the facial expression of the user is extracted from the user photographic image captured by the camera 101 of the mobile device 100.
- the feature of facial expression of the character in the key frame is also extracted.
- the mathematical difference between the character's features and the user's features is then used to modify the original character's facial appearance in order to better resemble the user's facial appearance.
- This resemblance includes, but is not limited to, the position and shape of the eyes, eyebrows, nose, mouth, and facial outline / jawline, as described with reference to Figs. 2(a) and 2(b) and Figs. 3 to 6.
- the user produces distinctive or representative facial expressions identified by facial expression identifiers or mimics distinctive or representative facial expressions in different key frames at different time instants.
- the features of the facial expression of both the user and the original replaceable character at the different time instants are extracted.
- the method(s) described in the present disclosure then dynamically create a facial image of the character using an algorithm, for example interpolation and/or extrapolation, based on these facial expression features, as described with reference to Figs. 7 and 8.
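The following sketch illustrates the idea of using the difference between the two sets of facial features to modify the character's appearance. The control points are assumed to be aligned to a common coordinate frame, and the `influence` weight is a hypothetical tuning parameter, not something the disclosure specifies.

```python
# A minimal, assumption-laden sketch: move the character's control points
# part of the way towards the user's control points, so the user's expression
# influences the character without replacing the character's visual style.
import numpy as np

def nudge_character_towards_user(character_points: np.ndarray,
                                 user_points: np.ndarray,
                                 influence: float = 0.5) -> np.ndarray:
    """Offset the character's facial control points by a scaled difference
    between the user's and the character's control points."""
    difference = user_points - character_points
    return character_points + influence * difference

# Three mouth control points (x, y) for the character and the user:
character_mouth = np.array([[0.40, 0.70], [0.50, 0.72], [0.60, 0.70]])
user_mouth = np.array([[0.38, 0.66], [0.50, 0.75], [0.62, 0.66]])
print(nudge_character_towards_user(character_mouth, user_mouth))
```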
- Fig. 1 illustrates an example mobile device 100 for creating a video including a character in accordance with the present disclosure.
- the mobile device includes a camera 101, a display 103, and a processor 105.
- the camera 101, the display 103 and the processor 105 are connected to each other via a bus 107.
- the mobile device 100 may also include a microphone 109, and a memory device 111.
- the camera 101 is an optical device that captures photographic images of the user of the mobile device 100.
- the photographic images captured by the camera 101 are transmitted from the camera 101 to the processor 105 for further processing, or to the memory device 111 for storage.
- the display 103 in this example is a screen to present visual content to the user under control of the processor 105.
- the display 103 displays images to the user of the mobile device 100.
- the images can be those captured by the camera 101, or processed by the processor 105, or retrieved from the memory device 111.
- the display 103 is able to present a graphic user interface to the user, as shown in Fig. 1.
- the graphic user interface includes one or more "pages".
- Each of the pages includes one or more graphic user interface elements, for example, buttons, menus, drop-down lists, text boxes, picture boxes, etc., to present visual content to the user or to receive commands from the user, as shown in Fig. 1, which represents one of the pages included in the graphic user interface.
- the display 103 can also be a screen with a touch-sensitive device (not shown in Fig. 1).
- a virtual keyboard is displayed on the display 103, and the display 103 is able to receive commands through the touch-sensitive device when the user touches the virtual keys of the virtual keyboard, as shown in Fig. 3(c).
- the memory device 111 is a computer-readable medium that stores a computer software product.
- the memory device 111 can be part of the processor 105, for example, a Random Access Memory (RAM) device, a Read Only Memory (ROM) device, a FLASH memory device, which is integrated with the processor 105.
- the memory device 111 can also be a device separate from the processor, for example, a floppy disk, a hard disk, an optical disk, a USB stick.
- the memory device 111 can be directly connected to the bus 107 by inserting the memory device 111 into an appropriate interface provided by the bus 107.
- the memory device 111 is located remotely and connected to the bus 107 through a communication network (not shown in Fig. 1).
- the computer software product stored in the memory device 111 is downloaded, through the communication network, to the processor 105 for execution.
- the computer software product includes machine-readable instructions.
- the processor 105 of the mobile device 100 loads the computer software product from the memory device 111 and reads the machine-readable instructions included in the computer software product. When these machine-readable instructions are executed by the processor 105, these instructions cause the processor 105 to perform one or more method steps described below.
- Fig. 2(a) illustrates an example method 200 for creating a video including a character on the mobile device 100.
- the method 200 is performed by the processor 105 of the mobile device 100.
- the processor 105 is configured to create the graphic user interface, use the captured photographic facial images to modify stored character images, and create the video based on the modified character images, as described below.
- Fig. 2(b) illustrates another example method 210 for creating a video including a character on the mobile device 100.
- the method 210 is performed by the processor 105 of the mobile device 100.
- the processor 105 is configured to perform steps (a) to (g) of the method 210, which are described in detail below.
- the processor 105 is also configured to present, on the display 103, the first frame of the video in the graphic user interface.
- the processor 105 repeats steps (d) to (g) to create the second frame of the video.
- the first frame of the video is created by modifying the reference facial image of the character with reference to the corresponding user facial feature. Therefore, the original character's visual style is not replaced by a given user's visual style. Instead, the facial expression of the user is used to influence the facial expression of the character.
- This method enables replacement of certain visual style elements with a given user's own style elements. Although this method is described with reference to facial expressions, the method is also applicable to skin tone, eye colour, etc.
- the content generated by the method(s) described in the present disclosure is significantly personalised for each user, and it is constructed "on demand" in real time from sets of associated asset elements.
- the resulting content (a sequence of frames) can then be immediately displayed on a device.
- the generated content may be used to produce a final multimedia asset such as a static, viewable YouTube asset.
- Fig. 3 illustrates the graphic user interface in accordance with the present disclosure.
- the processor 105 creates 211 a graphic user interface on the mobile device 100 to capture by the camera 101 multiple photographic facial images of a user for respective multiple facial expressions.
- the graphic user interface starts with page (a) as shown in Fig. 3, which presents on the display 103 a movie library consisting of one or more movies. As shown in page (a), there are multiple movies available for the user to choose to work on, for example, "Kong Fu Panda", “Fast Friends", “Frozen”, etc. The user chooses "Fast Friends", and the graphic user interface proceeds to page (b).
- Page (b) shows a plurality of characters in this movie, for example, a boy, a turtle, and a worm. The user can select one of the characters by touching the character. The user can also select one of the characters by entering the name of the character through a virtual keyboard presented in the graphic user interface, as shown in page (c). In the example shown in page (c), the character of the boy is selected by the user. Upon selection of the character, the graphic user interface proceeds to page (d).
- Page (d) shows a list of facial expression identifiers to identify facial expressions.
- the facial expression identifiers serve the purpose of guiding the user to produce facial expressions identified by the facial expression identifiers.
- a facial expression identifier can be a text string indicative of the name of a facial expression, for example, "Smile”, “Frown”, “Gaze”, “Surprise”, and “Grave”, as shown in page (d).
- the facial expression identifier can include an icon, for example, the icon for facial expression "Gaze”.
- the facial expression identifier can also include a reference facial image of the character extracted from movie, for example, the facial image of the character in a frame of the movie where the character is "surprised", which makes it easier for the user to produce the corresponding facial expression.
- the facial expression identifier can take other forms without departing from the scope of the present disclosure.
- the user is producing a facial expression identified by a text string "Surprise” with a reference facial image of the character.
- the user recognises the facial expression identifier and produces the corresponding facial expression.
- the facial image of the user is captured by the camera 101 and presented in a live view of the graphic user interface.
- the live view of the user's facial image is positioned next to the camera 101 to alleviate the issue where the user does not appear to look at the camera 101 when the user is looking at the live view.
- the processor 105 also displays the reference facial image of the character in a character view of the graphic user interface.
- the live view is also positioned next to the reference facial image of the character to make it easier for the user to compare the facial expression of the user and the facial expression of the character.
- the processor 105 further superimposes the live view of the facial image of the user on the reference facial image of the character to make it even easier for the user to compare the facial expression of the user and the facial expression of the character.
- the user or another person clicks on the shutter button of the graphic user interface to capture the photographic facial image of the user.
- the photographic facial image of the user can be displayed in a picture box associated with the facial expression identifier.
- the photographic facial images of the user for facial expressions "Smile” and "Frown” are displayed in respective picture boxes, as shown in page (d).
- the processor 105 may retrieve photographic facial images of the user that have been stored in the memory device 111 and associate the photographic facial images with the corresponding facial expression identifiers.
- the photographic facial image of the user is transmitted from the camera 101 to the processor 105.
- the processor 105 extracts 213 a user facial feature "U4" from the photographic facial image of the user.
- the processor 105 stores 215 in a user feature table associated with the facial expression identifier "Surprise” the user facial feature "U4", as shown in the fourth entry of the user feature table below.
- the processor 105 records, through the microphone 109, audio data "S4" associated with the user facial feature "U4".
- the processor 105 further stores the audio data "S4" in the user feature table in association with the facial expression identifier "Surprise", as shown in the fourth entry of the user feature table below.
- the processor 105 repeats the above steps for each facial expression identifier on page (d), and populates the user feature table for the character of the boy, which associates the facial expression identifiers with the corresponding user facial features and audio data. For other characters in the movie, the processor 105 can similarly generate respective user feature tables for those characters.
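A user feature table of the kind described above could be represented as a mapping from facial expression identifiers to extracted control points and recorded audio. The sketch below is illustrative only; the field names, types and values are assumptions.

```python
# A hedged sketch of one character's user feature table.
from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple

@dataclass
class UserFeatureEntry:
    control_points: List[Tuple[float, float]]   # extracted user facial feature, e.g. "U1"
    audio_clip: Optional[bytes] = None          # recorded audio data, e.g. "S1"

# One table per character, keyed by facial expression identifier:
user_feature_table: Dict[str, UserFeatureEntry] = {
    "Smile":    UserFeatureEntry([(0.40, 0.70), (0.50, 0.75)], b"S1"),
    "Frown":    UserFeatureEntry([(0.40, 0.60), (0.50, 0.58)], b"S2"),
    "Gaze":     UserFeatureEntry([(0.41, 0.65), (0.50, 0.66)], b"S3"),
    "Surprise": UserFeatureEntry([(0.39, 0.72), (0.50, 0.80)], b"S4"),
    "Grave":    UserFeatureEntry([(0.40, 0.63), (0.50, 0.62)], b"S5"),
}

# Selecting the user facial feature for a frame whose identifier is "Smile":
selected = user_feature_table["Smile"]
```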
- Figs. 4 and 5 illustrate facial features in accordance with the present disclosure.
- Facial features in the present disclosure include a set of control points.
- Fig. 4(a) represents a facial image of an object, which is captured by a camera.
- the object in the present disclosure can be a user or a character in a movie.
- the facial image in Fig. 4(a) shows the object to be generally front-facing such that all key areas of the face are visible: [both] eyes, [both] eyebrows, nose, mouth, and jawline. Ideally, these areas should be largely unobstructed.
- the dots in Fig. 4(b) represent a set of control points extracted by the processor 105.
- a third-party software library is used to extract the set of control points from the facial image shown in Fig. 4(a).
- the set of control points that are extracted from the facial image may comply with an industry standard, for example MPEG-4 (ISO/IEC 14496-1, ISO/IEC 14496-2), etc.
- the control points shown in Fig. 5 comply with the MPEG-4 standard.
- the facial shape of the object may be reconstructed by connecting those control points with line segments.
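The reconstruction of a facial shape from control points can be illustrated with a short sketch that joins consecutive control points of one facial part into line segments. The coordinate values below are made up for illustration, and the choice of landmark convention (e.g. an MPEG-4 style point set) is an assumption.

```python
# A small sketch (not the patent's code): reconstructing a facial outline by
# connecting extracted control points with straight segments, as in Fig. 4(b).
from typing import List, Tuple

Point = Tuple[float, float]

def outline_segments(control_points: List[Point]) -> List[Tuple[Point, Point]]:
    """Connect consecutive control points of one facial part (e.g. the jawline)."""
    return [(control_points[i], control_points[i + 1])
            for i in range(len(control_points) - 1)]

jawline = [(10.0, 40.0), (12.0, 55.0), (20.0, 68.0), (32.0, 75.0),
           (44.0, 68.0), (52.0, 55.0), (54.0, 40.0)]
for start, end in outline_segments(jawline):
    print(f"segment {start} -> {end}")
```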
- Fig. 6 illustrates a detailed process 400 for creating a video including a character on the mobile device 100 in accordance with the present disclosure.
- a storyline is shown in Fig. 6 to indicate a sequence of facial expression identifiers of the character of the boy over time.
- there are five facial expression identifiers labelled along the storyline at five time instants "A" to "E": "Smile", "Gaze", "Frown", "Grave", and "Smile".
- These facial expression identifiers indicate the facial expressions of the character in the frames at the five time instants.
- the processor 105 also extracts frames at the five time instants from the video document of the movie "Fast Friends".
- the frame at time instant "A" contains a facial image of the character that corresponds to the facial expression identified by the facial expression identifier "Smile”.
- the facial image of the character at time instant "A” is also shown in Fig. 6 for description purposes.
- the processor 105 extracts a facial expression feature "R1" from the facial image of the character as a reference facial feature associated with the facial expression identifier "Smile".
- the facial image of the character at time instant "A" is used as a reference facial image associated with the facial expression identifier "Smile" and the reference facial feature "R1".
- the processor 105 selects 217 one of the multiple user facial features based on the facial expression identifier "Smile" associated with the frame at time instant "A" in the video.
- the processor 105 selects a user facial feature "U1" since the user facial feature "U1" is associated with the facial expression identifier "Smile" in the user feature table.
- the processor 105 may further select audio data "S1" associated with the facial expression identifier "Smile".
- the processor 105 determines 219 a transformation that transforms the reference facial feature "R1" associated with the facial expression identifier "Smile" into an approximation or representation of the selected user facial feature "U1".
- the transformation can be a transformation matrix that transforms the control points of the reference facial feature "R1" into an approximation or representation of the control points of the selected user facial feature "U1".
- the processor 105 modifies 221, based on the transformation, the reference facial image associated with the facial expression identifier "Smile" and the reference facial feature "R1". Particularly, the processor 105 may modify the reference facial image by changing the positions of pixels in the reference facial image based on the transformation. The processor 105 then creates 223 the frame at time instant "A" of the video based on the modified reference facial image by, for example, combining the modified reference facial image and the selected audio data "S1" associated with the facial expression identifier "Smile".
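One possible way to realise the transformation of step 219 is a least-squares affine fit between the two sets of control points, sketched below. The disclosure only requires a transformation that maps "R1" into an approximation of "U1"; the specific fitting method, and the subsequent pixel warping of step 221, are left open, so this is an assumption-laden illustration rather than the patented implementation.

```python
# A hedged sketch of estimating a transformation matrix from R1 to U1.
import numpy as np

def fit_affine(reference_points: np.ndarray, user_points: np.ndarray) -> np.ndarray:
    """Return a 2x3 affine matrix A such that A @ [x, y, 1] approximates the
    corresponding user control point for every reference control point."""
    n = reference_points.shape[0]
    homogeneous = np.hstack([reference_points, np.ones((n, 1))])      # n x 3
    affine_t, *_ = np.linalg.lstsq(homogeneous, user_points, rcond=None)
    return affine_t.T                                                 # 2 x 3

def apply_affine(affine: np.ndarray, points: np.ndarray) -> np.ndarray:
    homogeneous = np.hstack([points, np.ones((points.shape[0], 1))])
    return homogeneous @ affine.T

r1 = np.array([[0.40, 0.70], [0.50, 0.72], [0.60, 0.70], [0.50, 0.60]])  # character, "Smile"
u1 = np.array([[0.38, 0.68], [0.50, 0.76], [0.62, 0.68], [0.50, 0.58]])  # user, "Smile"
transform = fit_affine(r1, u1)
print(apply_affine(transform, r1))   # approximates u1; the same transform can drive pixel warping
```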
- the user-recorded audio data may be associated with a facial expression
- the audio data may equally be independent from the facial expressions but otherwise associated with the story line.
- the user may record audio data for what the character says in a particular scene where no facial expression identifier is associated with frames in that scene.
- the proposed methods and systems may perform only the disclosed face modification techniques or only the audio voice-over techniques or both.
- the processor 105 repeats the above process for each of the characters contained in the frame at time instant "A" and/or each of the frames at the five time instants "A" to "E” along the storyline.
- the frames at those time instants in the video contain personal expression features of the user, and thus the video becomes more personalised and user-friendly when played, as shown on page (f) of the graphic user interface shown in Fig. 3. It can be seen from page (f) that the shape of the face of the character is more like the user's actual face than the original character's face is.
- Fig. 7 illustrates an example mobile device 700 for creating an output frame for a character in a video in accordance with the present disclosure.
- the mobile device 700 includes a camera 701, a display 703, and a processor 705.
- the camera 701, the display 703 and the processor 705 are connected to each other via a bus 707.
- the mobile device 700 may also include a microphone 709, and a memory device 711.
- the camera 701 is an optical device that captures photographic images of the user of the mobile device 700.
- the photographic images captured by the camera 701 are transmitted from the camera 701 to the processor 705 for further processing, or to the memory device 711 for storage.
- the display 703 in this example is a screen to present visual content to the user under control of the processor 705.
- the display 703 displays images to the user of the mobile device 700.
- the images can be those captured by the camera 701, or processed by the processor 705, or retrieved from the memory device 711.
- the display 703 is able to present a graphic user interface to the user, as shown in Fig. 7.
- the memory device 711 is a computer-readable medium that stores a computer software product.
- the memory device 711 can be part of the processor 705, for example, a Random Access Memory (RAM) device, a Read Only Memory (ROM) device, a FLASH memory device, which is integrated with the processor 705.
- the memory device 711 can also be a device separate from the processor, for example, a floppy disk, a hard disk, an optical disk, a USB stick.
- the memory device 711 can be directly connected to the bus 707 by inserting the memory device 711 into an appropriate interface provided by the bus 707.
- the memory device 711 is located remotely and connected to the bus 707 through a communication network (not shown in Fig. 7).
- the computer software product stored in the memory device 711 is downloaded, through the communication network, to the processor 705 for execution.
- the computer software product includes machine-readable instructions.
- the processor 705 of the mobile device 700 loads the computer software product from the memory device 711 and reads the machine-readable instructions included in the computer software product. When these machine-readable instructions are executed by the processor 705, these instructions cause the processor 705 to perform one or more method steps described below.
- Fig. 8 illustrates an example method 800 for creating an output frame for a character in a video in accordance with the present disclosure.
- the method 800 is used to create an output frame based on a first reference facial image and a second reference facial image of the character.
- the first reference facial image of the character is in a first key frame of the video
- the second reference facial image of the character is in a second key frame of the video.
- the output frame can be a frame between the first key frame and the second key frame along the storyline, or outside the first key frame and the second key frame along the storyline.
- the method 800 is performed by the processor 705 of the mobile device 700.
- the camera 701 of the mobile device 700 captures a first photographic facial image and a second photographic facial image of the user, and the processor 705 is configured to determine 810 an estimated reference facial feature of the character based on the first reference facial image and the second reference facial image of the character.
- the processor 705 is further configured to present the output frame on the display 703.
- the method 800 determines the estimated reference facial feature of the character and the estimated user facial feature of the user, and determines the transformation based on the estimated reference facial feature of the character and the estimated user facial feature of the user. This dramatically reduces the time required to create the output frame. A detailed process for creating the output frame is described below.
- two time instants "A", “B” along the storyline are selected by the user or the director as the facial expressions of the character at these time instants are distinctive or representative.
- the facial expressions of the character at the time instants "A", “B” are identified as “Surprise” and “Grave”, respectively.
- a facial image of the character is extracted from the first key frame at time instant "A”, and is referred to as a first reference facial image.
- a facial image of the character is extracted from the second key frame at time instant "B”, and is referred to as a second reference facial image. Both reference facial images of the character are shown in the graphic user interface for the user's reference.
- the processor 705 determines 810 an estimated reference facial feature of the character based on the first reference facial image and the second reference facial image of the character. Particularly, the processor 705 extracts a reference facial feature of the character from the first reference facial image of the character, referred to as a first reference facial feature. The processor 705 also extracts a reference facial feature of the character from the second reference facial image of the character, referred to as a second reference facial feature.
- the processor 705 further determines a first distance between the first reference facial feature of the first reference facial image and the second reference facial feature of the second reference facial image.
- the processor 705 determines the estimated reference facial feature of the character based on the first distance, the first reference facial feature and the second reference facial feature.
- the first reference facial feature includes a first set of control points
- the second reference facial feature includes a second set of control points.
- the first distance is indicative of a distance between the first set of control points and the second set of control points.
- the processor 705 determines the estimated reference facial feature of the character by performing an interpolation operation based on the first reference facial feature and the second reference facial feature with respect to the first distance.
- the processor 705 determines the estimated reference facial feature of the character by performing an extrapolation operation based on the first reference facial feature and the second reference facial feature with respect to the first distance.
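The interpolation and extrapolation of the estimated reference facial feature can be sketched as follows; the same operation applies to the estimated user facial feature with respect to the second distance, described below. The parameter `t`, which locates the output frame relative to the two key frames, is an assumed formulation.

```python
# A sketch of interpolating/extrapolating between two sets of control points.
import numpy as np

def estimate_feature(first_feature: np.ndarray,
                     second_feature: np.ndarray,
                     t: float) -> np.ndarray:
    """Linear interpolation (0 <= t <= 1) or extrapolation (t < 0 or t > 1)
    between two control point sets; the displacement between the two
    features plays the role of the distance described above."""
    return first_feature + t * (second_feature - first_feature)

surprise_points = np.array([[0.39, 0.72], [0.50, 0.80]])   # key frame "A"
grave_points = np.array([[0.40, 0.63], [0.50, 0.62]])      # key frame "B"

print(estimate_feature(surprise_points, grave_points, t=0.5))  # frame halfway between "A" and "B"
print(estimate_feature(surprise_points, grave_points, t=1.2))  # frame slightly beyond "B"
```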
- the user recognises the first facial expression identifier "Surprise" and/or observes the first reference facial image of the character (i.e., the facial image of the character at time instant "A"), and produces a facial expression that corresponds to the first facial expression identifier "Surprise". If the user or the director is satisfied with the facial expression of the user, a facial image of the user is captured by the camera 701, referred to as a first photographic facial image.
- the user recognises the second facial expression identifier "Grave” and/or observes the second reference facial image of the character (i.e., the facial image of the character at time instant "B"), and produces a facial expression that corresponds to the second facial expression identifier "Grave”. If the user or the director is satisfied with the facial expression of the user, a facial image of the user is captured by the camera 701, referred to as a second photographic facial image.
- the processor 705 may retrieve photographic facial images of the user that have been stored in the memory device 711 and associate the photographic facial images with the corresponding facial expression identifiers.
- Both the first photographic facial image and the second photographic facial image of the user are transmitted from the camera 701 to the processor 705.
- the processor 705 determines 820 an estimated user facial feature of a user based on the first photographic facial image and the second photographic facial image of the user. Particularly, the processor 705 extracts a facial feature from the first photographic facial image of the user, referred to as a user first facial feature. The processor 705 also extracts a facial feature from the second photographic facial image of the user, referred to as a user second facial feature.
- the processor 705 further determines a second distance between the user first facial feature and the user second facial feature.
- the processor 705 determines the estimated user facial feature of the user based on the second distance, the user first facial feature and the user second facial feature.
- the user first facial feature includes a third set of control points
- the user second facial feature includes a fourth set of control points.
- the second distance is indicative of a distance between the third set of control points and the fourth set of control points.
- the processor 705 determines the estimated user facial feature of the user by performing an interpolation operation based on the user first facial feature and the user second facial feature with respect to the second distance.
- the processor 705 determines the estimated user facial feature of the user by performing an extrapolation operation based on the user first facial feature and the user second facial feature with respect to the second distance.
- Fig. 9 illustrates the interpolation process 900 in more detail.
- the storyline 901 is annotated with facial expression identifiers and Fig. 9 also shows the corresponding control points of the facial features.
- the y-axis 902 indicates the y-position of the central control point 903 of the lips.
- the storyline evolves from a smile 911 to a frown 912 back to a smile 913 and finally into a frown 914 again.
- the control point 903 starts from a low position 921 into a high position 922, back to a low position 923 and finally into a high position 924.
- processor 705 may interpolate the y-position of control point 903 using a linear interpolation method. In some examples, however, this may lead to an unnatural appearance at the actual transition points, such as a sharp corner at point 922. Therefore, processor 705 may generate a spline interpolation 904 using the y-coordinates of the points 921, 922, 923 and 924 as knots. This results in a smooth transition between the facial expressions. While control point 903 moves only in the y-direction in this example, control points are generally allowed to move in both dimensions. Therefore, the spline curve 904 may be a two-dimensional spline approximation of the knots to allow the processor 705 to interpolate both x- and y-coordinates.
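A sketch of the spline interpolation of Fig. 9 is shown below. Using scipy's `CubicSpline` and the particular knot values are assumptions; the disclosure only requires a spline (or two-dimensional spline approximation) through the key-frame positions.

```python
# A hedged sketch of spline interpolation of control point 903's y-position.
import numpy as np
from scipy.interpolate import CubicSpline

key_times = np.array([0.0, 1.0, 2.0, 3.0])   # smile 911, frown 912, smile 913, frown 914
key_y = np.array([0.2, 0.8, 0.2, 0.8])       # low 921, high 922, low 923, high 924

spline = CubicSpline(key_times, key_y)

# y-position of control point 903 for an in-between frame at t = 1.5:
print(float(spline(1.5)))

# For two-dimensional motion, the x- and y-coordinates can each be interpolated
# against time, or a parametric 2D spline through the knots can be used instead.
```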
- the processor 705 determines 830 a transformation that transforms the estimated reference facial feature of the character into an approximation or representation of the estimated user facial feature of the user.
- the transformation can be a transformation matrix that transforms the control points of the estimated reference facial feature of the character into an approximation or representation of the control points of the estimated user facial feature of the user.
- the processor 705 determines a further reference facial image of the character by performing an interpolation operation based on the first reference facial image and the second reference facial image of the character, referred to as a third reference facial image.
- the third reference facial image is associated with the estimated reference facial feature of the character.
- the processor 705 determines the third reference facial image of the character by performing an extrapolation operation based on the first reference facial image and the second facial image.
- the processor 705 modifies 840, based on the transformation, the third reference facial image of the character by for example changing the positions of pixels in the third reference facial image. Since the estimated reference facial feature of the character may represent a spline curve, referred to as a first spline curve, and the estimated user facial feature of the user may represent another spline curve, referred to as a second spline curve, modifying the third reference facial image of the character also results in transforming the first spline curve into an approximation or representation of the second spline curve.
- the processor 705 repeats the above steps for each of the characters in the first key frame and the second key frame, and creates 850 the output frame for the characters in the video based on the modified third reference facial images for those characters. For example, the processor 705 may create the output frame by combining the modified third reference facial images into the output frame.
- processor 705 may apply a perspective transformation to the control points.
- processor 705 applies the transformation on the 2D coordinates of the control points to create the impression of a 3D rotation.
- Fig. 10(a) shows a transformation of the 2D coordinates of the control points to create the impression of a 3D rotation of the character's face. The degree of rotation may be known from the storyline and therefore, processor 705 calculates a transformation that creates the corresponding impression. This transformation may also be integrated into the previous transformation applied to the reference image. Processor 705 may also create the impression of perspective by down-scaling points that are further away from the virtual camera.
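The impression of a 3D rotation from purely 2D control points can be sketched as below: points are rotated about a vertical axis through the face centre and then down-scaled according to the depth they gain, so points that move away from the virtual camera become smaller. The rotation axis, focal constant and coordinate convention are assumptions for illustration.

```python
# A hedged sketch of faking a 3D head rotation with a 2D point transformation.
import numpy as np

def fake_yaw_rotation(points: np.ndarray, angle_rad: float,
                      centre: tuple = (0.5, 0.5), focal: float = 2.0) -> np.ndarray:
    """Rotate 2D control points about a vertical axis through `centre` and
    re-project them with a simple perspective down-scaling."""
    cx, cy = centre
    x, y = points[:, 0] - cx, points[:, 1] - cy
    x_rot = x * np.cos(angle_rad)
    z_rot = x * np.sin(angle_rad)          # depth gained by rotating out of the image plane
    scale = focal / (focal + z_rot)        # points further from the virtual camera shrink
    return np.column_stack([x_rot * scale + cx, y * scale + cy])

face_points = np.array([[0.3, 0.4], [0.5, 0.4], [0.7, 0.4], [0.5, 0.7]])
print(fake_yaw_rotation(face_points, angle_rad=np.radians(25)))
```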
- Fig. 10b illustrates a simplified 3D model of a character's head.
- This 3D model may be created by a designer or developer once for each character.
- processor 705 can calculate which control points are not visible because they are occluded by other parts of the head. In the example of Fig. 10(b), the right eye is occluded and not visible. Applying this calculation to the output image, to hide the parts of the image that are not visible according to the 3D model, increases the realistic impression of the created video.
- the calculation may be based on an assumed pivot point, which may be the top of the neck.
- the processor 705 can then perform the transformation based on rotation and tilt around the pivot point.
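The visibility calculation can be sketched with a deliberately crude head model: the head is treated as a sphere, the points are rotated about a pivot at the top of the neck, and a point is treated as occluded once it passes behind the head centre as seen from the camera. The sphere approximation, camera direction and coordinates are assumptions; a designer-supplied 3D model as in Fig. 10(b) would replace them in practice.

```python
# A hedged sketch of the occlusion test: rotate 3D control points about a pivot
# and hide those that end up on the far side of a spherical head.
import numpy as np

def yaw_about_pivot(points: np.ndarray, pivot: np.ndarray, yaw_rad: float) -> np.ndarray:
    """Rotate 3D control points about a vertical axis through the pivot."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    rotation = np.array([[c, 0.0, s],
                         [0.0, 1.0, 0.0],
                         [-s, 0.0, c]])
    return (points - pivot) @ rotation.T + pivot

def visible(points: np.ndarray, head_centre: np.ndarray) -> np.ndarray:
    """For a roughly spherical head, a surface point faces the camera (assumed
    to look along the -z axis) while it lies in front of the head centre."""
    return points[:, 2] > head_centre[2]

pivot = np.array([0.5, 0.0, 0.0])            # top of the neck (assumed)
head_centre = np.array([0.5, 0.5, 0.0])
eyes = np.array([[0.35, 0.6, 0.45],          # one eye on the front of the head
                 [0.65, 0.6, 0.45]])         # the other eye

rotated = yaw_about_pivot(eyes, pivot, np.radians(75))
print(visible(rotated, head_centre))         # one eye becomes hidden, as in Fig. 10(b)
```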
- Suitable computer readable media may include volatile (e.g. RAM) and/or non-volatile (e.g. ROM, disk) memory, carrier waves and transmission media.
- Exemplary carrier waves may take the form of electrical, electromagnetic or optical signals conveying digital data streams along a local network or a publicly accessible network such as the Internet.
- terms such as "processing" refer to the action and processes of a computer system, or similar electronic computing device, that processes and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
Abstract
The present disclosure relates to creating videos. A mobile device creates a graphic user interface to capture by the camera of the device multiple photographic facial images of a user for respective multiple facial expressions of a character in the video. Using the multiple photographic facial images, the device modifies stored character images by matching facial features of the character to facial features of the user for the multiple facial expressions of the character in the video and creates the video based on the modified character images. The facial expression of the user is used to influence the facial expression of the character. This method enables replacement of certain visual style elements with a given user's own style elements.
Description
"Creating videos with facial expressions"
Cross-Reference to Related Applications
[0001] The present application claims priority from US provisional application 62/366375 filed on 25 July 2016, the content of which is incorporated herein by reference. The present application further claims priority from US provisional application 62/366406 filed on 25 July 2016, the content of which is incorporated herein by reference. The present application further claims priority from Australian provisional application 2016902919 filed on 25 July 2016, the content of which is incorporated herein by reference. The present application further claims priority from Australian provisional application 2016902921 filed on 25 July 2016, the content of which is incorporated herein by reference.
Technical Field
[0002] The present disclosure generally relates to creating videos. The present disclosure includes computer-implemented methods, software, and computer systems for creating videos with facial expressions to reflect styles of individual persons.
Background
[0003] A video document is often used to present content in relation to a "story". The content typically consists of audio content, visual content, or both, for example the video documents available at YouTube. The content presented in the video document often involves at least one character and a storyline associated with the character. The storyline is used to represent how the story develops with respect to the character over time, including what the character does and the interactions of the character with other characters in the story.
[0004] Throughout this specification the word "comprise", or variations such as "comprises" or "comprising", will be understood to imply the inclusion of a stated element, integer or step, or group of elements, integers or steps, but not the exclusion of any other element, integer or step, or group of elements, integers or steps.
[0005] Any discussion of documents, acts, materials, devices, articles or the like which has been included in the present disclosure is not to be taken as an admission that any or all of these matters form part of the prior art base or were common general knowledge in the field relevant to the present disclosure as it existed before the priority date of each claim of this application.
Summary
[0006] There is provided a method for creating a video on a mobile device that comprises a camera, the method comprising: creating a graphic user interface on the mobile device to capture by the camera multiple photographic facial images of a user for respective multiple facial expressions of a character in the video;
using the multiple photographic facial images to modify stored character images by matching facial features of the character to facial features of the user for the multiple facial expressions of the character in the video; and
creating the video based on the modified character images.
[0007] There is provided a method for creating a video including a character on a mobile device that comprises a camera, the method comprising:
(a) creating a graphic user interface on the mobile device to capture by the camera multiple photographic facial images of a user for respective multiple facial expressions;
(b) extracting a user facial feature from each of the multiple photographic facial images;
(c) storing associated with a respective facial expression identifier the user facial feature from each of the multiple photographic facial images;
(d) selecting one of the multiple user facial features based on a first facial expression identifier associated with a first frame of the video;
(e) determining a transformation that transforms a reference facial feature associated with the first facial expression identifier into an approximation or representation of the selected one of the multiple user facial features;
(f) modifying, based on the transformation, a reference facial image of the character
associated with the first facial expression identifier and the reference facial feature; and
(g) creating the first frame of the video based on the modified reference facial image.
[0008] As can be seen from the above, the first frame of the video is created by modifying the reference facial image of the character with reference to the corresponding user facial feature. Therefore, the original character's visual style is not replaced by a given user's visual style. Instead, the facial expression of the user is used to influence the facial expression of the character. This method enables replacement of certain visual style elements with a given user's own style elements. Although this method is described with reference to facial expressions, the method is also applicable to skin tone, eye colour, etc.
[0009] The method may further comprise:
for a second facial expression identifier associated with a second frame of the video, repeating steps (d) to (g) to create the second frame of the video.
[0010] The user facial feature may comprise a set of control points.
[0011] The graphic user interface may comprise the reference facial image of the character.
[0012] The graphic user interface may comprise a live view of each of the multiple photographic facial images.
[0013] The live view may be positioned next to the camera.
[0014] The live view may be positioned next to the reference facial image of the character.
[0015] The method may further comprise superimposing the live view on the reference facial image of the character.
[0016] The method may further comprise selecting the character from a plurality of characters in the video.
[0017] The method may further comprise recording audio data associated with the user facial feature.
[0018] There is provided a computer software product including machine-readable instructions that, when executed by a processor of a mobile device, cause the processor to perform any one of the methods described above.
[0019] There is provided a mobile device for creating a video including a character, the mobile device comprising:
a camera;
a display; and
a processor, the processor configured to
(a) create a graphic user interface on the display of the mobile device to capture by the camera multiple photographic facial images of a user for respective multiple facial expressions;
(b) extract a user facial feature from each of the multiple photographic facial images;
(c) store associated with a respective facial expression identifier the user facial feature from each of the multiple photographic facial images;
(d) select one of the multiple user facial features based on a first facial expression identifier associated with a first frame of the video;
(e) determine a transformation that transforms a reference facial feature associated with the first facial expression identifier into an approximation or representation of the selected one of the multiple user facial features;
(f) modify, based on the transformation, a reference facial image of the character associated with the first facial expression identifier and the reference facial feature;
(g) create the first frame of the video based on the modified reference facial image; and
(h) present, on the display, the first frame of the video in the graphic user interface.
[0020] There is provided a method for creating an output frame for a character in a video, the method comprising:
determining an estimated reference facial feature of the character based on a first reference facial image and a second reference facial image of the character;
determining an estimated user facial feature of a user based on a first photographic facial image and a second photographic facial image of the user;
determining a transformation that transforms the estimated reference facial feature of the character into an approximation or representation of the estimated user facial feature of the user;
modifying, based on the transformation, a third reference facial image of the character associated with the estimated reference facial feature of the character; and
creating the output frame for the character in the video based on the modified third reference facial image.
[0021] As can be seen from the above, this method determines the estimated reference facial feature of the character and the estimated user facial feature of the user, and determines the transformation based on the estimated reference facial feature of the character and the estimated user facial feature of the user. This dramatically reduces the time required to create the output frame.
[0022] Determining the estimated reference facial feature of the character may comprise: determining a first distance between a first reference facial feature of the first reference facial image of the character and a second reference facial feature of the second reference facial image of the character; and
determining the estimated reference facial feature of the character based on the first distance, the first reference facial feature and the second reference facial feature.
[0023] Determining the estimated reference facial feature of the character may comprise performing an interpolation operation based on the first reference facial feature and the second reference facial feature with respect to the first distance.
[0024] Determining the estimated reference facial feature of the character may comprise performing an extrapolation operation based on the first reference facial feature and the second reference facial feature with respect to the first distance.
[0025] The first reference facial feature may include a first set of control points, and the second reference facial feature may include a second set of control points, and the first distance may be indicative of a distance between the first set of control points and the second set of control points.
[0026] Determining the estimated user facial feature of the user may comprise: determining a second distance between a user first facial feature of the first photographic facial image of the user and a user second facial feature of the second photographic facial image of the user; and
determining the estimated user facial feature based on the second distance, the user first facial feature and the user second facial feature.
[0027] Determining the estimated user facial feature of the user may comprise performing an interpolation operation based on the user first facial feature and the user second facial feature with respect to the second distance.
[0028] Determining the estimated user facial feature of the user may comprise performing an extrapolation operation based on the user first facial feature and the user second facial feature with respect to the second distance.
[0029] The user first facial feature may include a third set of control points, and the user second facial feature may include a fourth set of control points, and the second distance may be indicative of a distance between the third set of control points and the fourth set of control points.
[0030] Modifying the third reference facial image of the character may comprise transforming a first spline curve represented by the estimated reference facial feature of the character into an approximation or representation of a second spline curve represented by the estimated user facial feature of the user.
[0031] There is provided a computer software product including machine-readable instructions that, when executed by a processor of a mobile device, cause the processor to perform any one of the methods described above.
[0032] There is provided a mobile device for creating an output frame for a character in a video, the mobile device comprising:
a camera to capture a first photographic facial image and a second photographic facial image of the user;
a display; and
a processor, the processor configured to
determine an estimated reference facial feature of the character based on a first reference facial image and a second reference facial image of the character;
determine an estimated user facial feature of the user based on the first photographic facial image and the second photographic facial image of the user;
determine a transformation that transforms the estimated reference facial feature of the character into an approximation or representation of the estimated user facial feature of the user;
modify, based on the transformation, a third reference facial image of the character associated with the estimated reference facial feature of the character;
create the output frame based on the modified third reference facial image; and present the output frame on the display.
Brief Description of Drawings
[0033] Features of the present disclosure are illustrated by way of non-limiting examples, and like numerals indicate like elements, in which:
Fig. 1 illustrates an example mobile device for creating a video including a character in accordance with the present disclosure;
Figs. 2(a) and 2(b) illustrate example methods for creating a video including a character on the mobile device in accordance with the present disclosure;
Fig. 3 illustrates a graphic user interface in accordance with the present disclosure;
Figs. 4 and 5 illustrate facial features in accordance with the present disclosure;
Fig. 6 illustrates a detailed process for creating a video including a character on the mobile device in accordance with the present disclosure;
Fig. 7 illustrates an example mobile device for creating an output frame for a character in a video in accordance with the present disclosure; and
Fig. 8 illustrates an example method for creating an output frame for a character in a video in accordance with the present disclosure.
Description of Embodiments
[0034] A video in the present disclosure consists of a sequence of images, i.e., "frames". Each frame differs in content from its adjacent frames (i.e., previous and next frames) by a
small amount in terms of appearance. By displaying the sequence of frames at a high rate (e.g. 30 frames per second), a viewer of the sequence is given the impression of viewing a "movie clip".
[0035] A frame of the video includes at least two "layers" of visual content. One or more layers represent the non-replaceable content. One or more layers represent replaceable characters. A replaceable character may be replaced with user-supplied content according to the method(s) as described in the present disclosure. All layers are composited together in order to produce a processed frame, or an output frame, associated with the frame.
[0036] In addition to the visual image frame sequence, the video may also include one or more audio tracks. Typically, all audio content other than the replaceable character audio occupies a single audio track. Additional audio tracks are used to store audio content for each replaceable character. This per-character content is then further subdivided into individual elements, each representing a "sound bite" (e.g. a short voiceover speech element, or a noise element) for that character in that specific story.
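As a rough illustration of how this per-character audio content might be organised, the sketch below uses a simple in-memory structure; the field names and the idea of storing sound bites as file paths are assumptions for illustration only, not details taken from the present disclosure.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class SoundBite:
    element_id: str     # position of this element in the character's audio track
    start_time: float   # seconds from the start of the story
    audio_path: str     # recorded voiceover or noise element

@dataclass
class StoryAudio:
    main_track_path: str                                  # all non-replaceable audio
    character_tracks: Dict[str, List[SoundBite]] = field(default_factory=dict)

# usage (illustrative values)
audio = StoryAudio(main_track_path="story_main.wav")
audio.character_tracks["boy"] = [SoundBite("line_01", 12.5, "boy_line_01.wav")]
```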
[0037] In the present disclosure, an original video document contains only original, or "reference", material. This includes replaceable and non-replaceable reference content. The replaceable reference content consists of some or all of the graphical elements for each replaceable character, saved on a frame-by-frame basis. At a minimum, this content consists of the replaceable character's head or face as it appears in each frame of the reference video content. Replaceable reference content may also include elements such as hands, feet, etc., where it may be desirable to offer the users a selectable set of display options (e.g. skin colour).
[0038] The non-replaceable visual content may consist of graphical assets, arranged as sets of assets on a per-frame basis in an animation sequence that are normally used to generate video content, but with all replaceable content removed. This form of non-replaceable visual content is packaged as a number of asset layers per frame which, when combined with the associated per-frame replaceable content, forms a complete sequence of video frames.
[0039] The non-replaceable content may alternatively consist of standard video content, with replaceable reference content masked (or removed) from each video frame. In this
scenario, replaceable character audio content is extracted from the original video content. In this case, the video is deconstructed on a frame-by-frame basis, either in real time or as a separate pre-processing stage where the frames are stored in a database. In either case, the deconstructed video frames are subsequently combined with the associated per-frame replaceable content, forming a complete sequence of video frames.
[0040] In the present disclosure, a user provides material for all replaceable content (i.e., audio and visual) for a given story. In the case of audio material, the user typically provides their own "sound bite" (voiceover, etc.) for each element in a replaceable character's audio track. In the case of user-supplied visual material, the user produces a facial expression identified by a facial expression identifier or mimics the original replaceable character's video sequence, particularly a facial expression of the original replaceable character in a key frame at a time instant. The feature of the facial expression of the user is extracted from the user photographic image captured by the camera 101 of the mobile device 100. The feature of the facial expression of the character in the key frame is also extracted. The mathematical difference between the character's features and the user's features is then used to modify the original character's facial appearance in order to better resemble the user's facial appearance. This resemblance includes, but is not limited to, the position and shape of: eyes, eyebrows, nose, mouth, and facial outline / jawline, as described with reference to Figs. 2(a) and 2(b) and Figs. 3 to 6.
[0041] In another example, the user produces distinctive or representative facial expressions identified by facial expression identifiers or mimics distinctive or representative facial expressions in different key frames at different time instants. The features of the facial expressions of both the user and the original replaceable character at the different time instants are extracted. The method(s) described in the present disclosure then dynamically creates a facial image of the character by using an algorithm, for example interpolation and/or extrapolation, based on these facial expression features, as described with reference to Figs. 7 and 8.
[0042] Fig. 1 illustrates an example mobile device 100 for creating a video including a character in accordance with the present disclosure. The mobile device includes a camera 101, a display 103, and a processor 105. The camera 101, the display 103 and the processor
105 are connected to each other via a bus 107. The mobile device 100 may also include a microphone 109, and a memory device 111.
[0043] The camera 101 is an optical device that captures photographic images of the user of the mobile device 100. The photographic images captured by the camera 101 are transmitted from the camera 101 to the processor 105 for further processing, or to the memory device 111 for storage.
[0044] The display 103 in this example is a screen to present visual content to the user under control of the processor 105. For example, the display 103 displays images to the user of the mobile device 100. As described above, the images can be those captured by the camera 101, or processed by the processor 105, or retrieved from the memory device 111. Further, the display 103 is able to present a graphic user interface to the user, as shown in Fig. 1. The graphic user interface includes one or more "pages". Each of the pages includes one or more graphic user interface elements, for example, buttons, menus, drop-down list, text boxes, picture boxes, etc. to present visual content to the user or to receive commands from the user, as shown in Fig. 1, which represents one of the pages included in the graphic user interface.
[0045] The display 103 can also be a screen with a touch-sensitive device (not shown in Fig. 1). A virtual keyboard is displayed on the display 103, and the display 103 is able to receive commands through the touch-sensitive device when the user touches the virtual keys of the virtual keyboard, as shown in Fig. 3(c).
[0046] The memory device 111 is a computer-readable medium that stores a computer software product. The memory device 111 can be part of the processor 105, for example, a Random Access Memory (RAM) device, a Read Only Memory (ROM) device, a FLASH memory device, which is integrated with the processor 105.
[0047] The memory device 111 can also be a device separate from the processor, for example, a floppy disk, a hard disk, an optical disk, a USB stick. The memory device 111 can be directly connected to the bus 107 by inserting the memory device 111 into an appropriate interface provided by the bus 107. In another example, the memory device 111 is located remotely and connected to the bus 107 through a communication network (not shown in Fig.
1). The computer software product stored in the memory device 111 is downloaded, through the communication network, to the processor 105 for execution.
[0048] The computer software product includes machine-readable instructions. The processor 105 of the mobile device 100 loads the computer software product from the memory device 111 and reads the machine-readable instructions included in the computer software product. When these machine-readable instructions are executed by the processor 105, these instructions cause the processor 105 to perform one or more method steps described below.
[0049] Fig. 2(a) illustrates an example method 200 for creating a video including a character on the mobile device 100. The method 200 is performed by the processor 105 of the mobile device 100. Particularly, the processor 105 is configured to
create 201 a graphic user interface on the mobile device 100 to capture by the camera 101 multiple photographic facial images of a user for respective multiple facial expressions of the character in the video;
use 203 the multiple photographic facial images to modify stored character images by matching facial features of the character to facial features of the user for the multiple facial expressions of the character in the video; and
create 205 the video based on the modified character images.
[0050] Fig. 2(b) illustrates another example method 210 for creating a video including a character on the mobile device 100. The method 210 is performed by the processor 105 of the mobile device 100. Particularly, the processor 105 is configured to
(a) create 211 a graphic user interface on the mobile device 100 (particularly, the display 103 of the mobile 100) to capture by the camera 101 multiple photographic facial images of a user for respective multiple facial expressions;
(b) extract 213 a user facial feature from each of the multiple photographic facial images;
(c) store 215 associated with a respective facial expression identifier the user facial feature from each of the multiple photographic facial images;
(d) select 217 one of the multiple user facial features based on a first facial expression identifier associated with a first frame of the video;
(e) determine 219 a transformation that transforms a reference facial feature
associated with the first facial expression identifier into an approximation or representation of the selected one of the multiple user facial features;
(f) modify 221, based on the transformation, a reference facial image of the character associated with the first facial expression identifier and the reference facial feature; and
(g) create 223 the first frame of the video based on the modified reference facial image.
[0051] The processor 105 is also configured to present, on the display 103, the first frame of the video in the graphic user interface.
[0052] For a second facial expression identifier associated with a second frame of the video, the processor 105 repeats steps (d) to (g) to create the second frame of the video.
[0053] As can be seen from the above, the first frame of the video is created by modifying the reference facial image of the character with reference to the corresponding user facial feature. Therefore, the original character's visual style is not replaced by a given user's visual style. Instead, the facial expression of the user is used to influence the facial expression of the character. This method enables replacement of certain visual style elements with a given user's own style elements. Although this method is described with reference to facial expressions, the method is also applicable to skin tone, eye colour, etc.
[0054] In the case of skin tone, for example, multiple sets of identical replaceable reference character content are supplied with the reference material package for a given story, with each set differing only in skin tone. In that way, the user alters the reference character's skin tone to mimic their own simply by selecting from a set of alternate skin tone options. The selected set of reference character content is then subjected to a feature transformation similar to that described above, which creates a character that is more similar in shape and colour to the user's own appearance.
[0055] The content generated by the method(s) described in the present disclosure is significantly personalised for each user, and it is constructed "on demand" in real time from sets of associated asset elements. The resulting content (a sequence of frames) can then be
immediately displayed on a device. Alternatively, the generated content may be used to produce a final multimedia asset such as a static, viewable Youtube asset.
[0056] Fig. 3 illustrates the graphic user interface in accordance with the present disclosure.
[0057] The processor 105 creates 211 a graphic user interface on the mobile device 100 to capture by the camera 101 multiple photographic facial images of a user for respective multiple facial expressions.
[0058] The graphic user interface starts with page (a) as shown in Fig. 3, which presents on the display 103 a movie library consisting of one or more movies. As shown in page (a), there are multiple movies available for the user to choose to work on, for example, "Kong Fu Panda", "Fast Friends", "Frozen", etc. The user chooses "Fast Friends", and the graphic user interface proceeds to page (b). Page (b) shows a plurality of characters in this movie, for example, a boy, a turtle, and a worm. The user can select one of the characters by touching the character. The user can also select one of the characters by entering the name of the character through a virtual keyboard presented in the graphic user interface, as shown in page (c). In the example shown in page (c), the character of the boy is selected by the user. Upon selection of the character, the graphic user interface proceeds to page (d).
[0059] Page (d) shows a list of facial expression identifiers to identify facial expressions. The facial expression identifiers serve the purpose of guiding the user to produce facial expressions identified by the facial expression identifiers. A facial expression identifier can be a text string indicative of the name of a facial expression, for example, "Smile", "Frown", "Gaze", "Surprise", and "Grave", as shown in page (d). The facial expression identifier can include an icon, for example, the icon for facial expression "Gaze". The facial expression identifier can also include a reference facial image of the character extracted from the movie, for example, the facial image of the character in a frame of the movie where the character is "surprised", which makes it easier for the user to produce the corresponding facial expression. The facial expression identifier can take other forms without departing from the scope of the present disclosure.
[0060] As shown in page (d), the user is producing a facial expression identified by a text string "Surprise" with a reference facial image of the character. The user recognises the facial
expression identifier and produces the corresponding facial expression. The facial image of the user is captured by the camera 101 and presented in a live view of the graphic user interface. The live view of the user's facial image is positioned next to the camera 101 to alleviate the issue where the user does not appear to look at the camera 101 when the user is looking at the live view. The processor 105 also displays the reference facial image of the character in a character view of the graphic user interface. The live view is also positioned next to the reference facial image of the character to make it easier for the user to compare the facial expression of the user and the facial expression of the character. In another example, the processor 105 further superimposes the live view of the facial image of the user on the reference facial image of the character to make it even easier for the user to compare the facial expression of the user and the facial expression of the character.
[0061] If the user, or another person (for example, a director), is satisfied with the facial expression of the user, the user or the other person clicks on the shutter button of the graphic user interface to capture the photographic facial image of the user. The photographic facial image of the user can be displayed in a picture box associated with the facial expression identifier. For example, the photographic facial images of the user for facial expressions "Smile" and "Frown" are displayed in respective picture boxes, as shown in page (d).
[0062] In another example, instead of taking photos of the user, the processor 105 may retrieve photographic facial images of the user that have been stored in the memory device 111 and associate the photographic facial images with the corresponding facial expression identifiers.
[0063] The photographic facial image of the user is transmitted from the camera 101 to the processor 105. The processor 105 extracts 213 a user facial feature "U4" from the
photographic facial image corresponding to the facial expression identifier "Surprise". The processor 105 stores 215 in a user feature table associated with the facial expression identifier "Surprise" the user facial feature "U4", as shown in the fourth entry of the user feature table below.
Expression ID | User Facial Feature | User Sound |
---|---|---|
Smile | U1 | S1 |
Frown | U2 | S2 |
Gaze | U3 | S3 |
Surprise | U4 | S4 |
Grave | U5 | S5 |

User Feature Table
[0064] On page (e) of the graphic user interface, the processor 105 records, through the microphone 109, audio data "S4" associated with the user facial feature "U4". The processor 105 further stores the audio data "S4" in the user feature table in association with the facial expression identifier "Surprise", as shown in the fourth entry of the user feature table above.
[0065] The processor 105 repeats the above steps for each facial expression identifier on page (d), and populates the user feature table for the character of the boy, which associates the facial expression identifiers with the corresponding user facial features and audio data. For the other characters in the movie, the processor 105 can similarly generate a respective user feature table.
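One possible in-memory representation of such a user feature table is sketched below. This is a minimal illustration only; the entry fields, the use of (N, 2) landmark arrays and the storage of audio as file paths are assumptions, not details taken from the present disclosure.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class UserFeatureEntry:
    control_points: np.ndarray   # user facial feature, e.g. shape (68, 2)
    sound_path: str              # recorded audio data for this expression

# one table per character, keyed by facial expression identifier
user_feature_table = {
    "Smile":    UserFeatureEntry(np.zeros((68, 2)), "smile.wav"),     # placeholder points
    "Surprise": UserFeatureEntry(np.zeros((68, 2)), "surprise.wav"),
}

# selecting the user facial feature for a frame labelled "Smile"
selected = user_feature_table["Smile"]
```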
[0066] Figs. 4 and 5 illustrate facial features in accordance with the present disclosure.
[0067] Facial features in the present disclosure include a set of control points. Fig. 4(a) represents a facial image of an object, which is captured by a camera. The object in the present disclosure can be a user or a character in a movie. The facial image in Fig. 4(a) shows the object to be generally front-facing such that all key areas of the face are visible: [both] eyes, [both] eyebrows, nose, mouth, and jawline. Ideally, these areas should be largely unobstructed.
[0068] The dots in Fig. 4(b) represent a set of control points extracted by the processor 105. A third-party software library is used to extract the set of points from the facial image shown in Fig. 4(a). There are a number of public domain libraries available for this purpose, some of which are based on the open source "OpenCV" library. The set of control points that are extracted from the facial image may comply with an industry standard, for example, MPEG-4, ISO/IEC 14496-1, 14496-2, etc. For example, the control points shown in Fig. 5 comply with the MPEG-4 standard. The facial shape of the object may be reconstructed by connecting those control points with line segments. In another example, the facial shape may also be reconstructed by using one or more spline curves that are based on those control points.
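As a concrete (but non-limiting) illustration of control point extraction, the sketch below uses the freely available dlib library and its 68-point landmark model rather than the MPEG-4 feature point set; the choice of library and the model file name are assumptions for illustration only.

```python
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
# pre-trained landmark model, downloaded separately (illustrative file name)
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def extract_control_points(image_path):
    """Return a (68, 2) array of facial control points, or None if no face is found."""
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = detector(gray, 1)          # upsample once to help with small faces
    if not faces:
        return None
    shape = predictor(gray, faces[0])  # landmarks of the first detected face
    return np.array([[p.x, p.y] for p in shape.parts()], dtype=np.float64)
```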
[0069] Fig. 6 illustrates a detailed process 400 for creating a video including a character on the mobile device 100 in accordance with the present disclosure.
[0070] For description purposes, a storyline is shown in Fig. 6 to indicate a sequence of facial expression identifiers of the character of the boy over time. Particularly, there are five facial expression identifiers labelled along the storyline at five time instants "A" to "E", which are "Smile", "Gaze", "Frown", "Grave", and "Smile". These facial expression identifiers indicate the facial expressions of the character in the frames at the five time instants. The processor 105 also extracts frames at the five time instants from the video document of the movie "Fast Friends". The frame at time instant "A" contains a facial image of the character that corresponds to the facial expression identified by the facial expression identifier "Smile". The facial image of the character at time instant "A" is also shown in Fig. 6 for description purposes. The processor 105 extracts a facial expression feature "R1" from the facial image of the character as a reference facial feature associated with the facial expression identifier "Smile". The facial image of the character at time instant "A" is used as a reference facial image associated with the facial expression identifier "Smile" and the reference facial feature "R1".
[0071] Referring to the user feature table, the processor 105 selects 217 one of the multiple user facial features based on the facial expression identifier "Smile" associated with the frame at time instant "A" in the video. In this example, the processor 105 selects a user facial feature "U1" since the user facial feature "U1" is associated with the facial expression identifier "Smile" in the user feature table. The processor 105 may further select audio data "S1" associated with the facial expression identifier "Smile".
[0072] The processor 105 determines 219 a transformation that transforms the reference facial feature "R1" associated with the facial expression identifier "Smile" into an approximation or representation of the selected user facial feature "U1". For example, the transformation can be a transformation matrix that transforms the control points of the reference facial feature "R1" into an approximation or representation of the control points of the selected user facial feature "U1".
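One simple way to obtain such a transformation is a least-squares fit of a 2D affine matrix that maps the reference control points onto the user control points. The sketch below is a minimal, illustrative version under that assumption; the present disclosure is not limited to a single global affine transformation, and a per-feature (eyes, mouth, jawline, etc.) fit could equally be computed with the same function.

```python
import numpy as np

def fit_affine(src_points, dst_points):
    """Least-squares 2x3 affine matrix M such that dst ≈ [src, 1] @ M.T

    src_points, dst_points: corresponding control points, shape (N, 2).
    """
    n = src_points.shape[0]
    src_h = np.hstack([src_points, np.ones((n, 1))])        # homogeneous coordinates
    solution, *_ = np.linalg.lstsq(src_h, dst_points, rcond=None)
    return solution.T                                        # shape (2, 3)

def apply_affine(M, points):
    points_h = np.hstack([points, np.ones((points.shape[0], 1))])
    return points_h @ M.T

# usage (illustrative): map reference feature "R1" onto user feature "U1"
# M = fit_affine(r1_points, u1_points)
# approx_u1 = apply_affine(M, r1_points)
```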
[0073] The processor 105 modifies 221, based on the transformation, the reference facial image associated with the facial expression identifier "Smile" and the reference facial feature
"Rl". Particularly, the processor 105 may modify the reference facial image by changing the positions of pixels in the reference facial image based on the transformation. The processor 105 then creates 223 the frame at time instant "A" of the video based on the modified reference facial image by for example combining the modified reference facial image and the selected audio data "S I" associated with the facial expression identifier "Smile".
[0074] While the user-recorded audio data may be associated with a facial expression, the audio data may equally be independent from the facial expressions but otherwise associated with the story line. For example, the user may record audio data for what the character says in a particular scene where no facial expression identifier is associated with frames in that scene. It is noted that the proposed methods and systems may perform only the disclosed face modification techniques or only the audio voice-over techniques or both.
[0075] The processor 105 repeats the above process for each of the characters contained in the frame at time instant "A" and/or each of the frames at the five time instants "A" to "E" along the storyline. As a result, the frames at those time instants in the video contain personal expression features of the user, and thus the video becomes more personalised and user-friendly when played, as shown on page (f) of the graphic user interface shown in Fig. 3. It can be seen from page (f) that the shape of the face of the character is more like the user's actual face than the original character's face is.
[0076] Fig. 7 illustrates an example mobile device 700 for creating an output frame for a character in a video in accordance with the present disclosure. The mobile device 700 includes a camera 701, a display 703, and a processor 705. The camera 701, the display 703 and the processor 705 are connected to each other via a bus 707. The mobile device 700 may also include a microphone 709, and a memory device 711.
[0077] The camera 701 is an optical device that captures photographic images of the user of the mobile device 700. The photographic images captured by the camera 701 are transmitted from the camera 701 to the processor 705 for further processing, or to the memory device 711 for storage.
[0078] The display 703 in this example is a screen to present visual content to the user under control of the processor 705. For example, the display 703 displays images to the user of the
mobile device 700. As described above, the images can be those captured by the camera 701, or processed by the processor 705, or retrieved from the memory device 711. Further, the display 703 is able to present a graphic user interface to the user, as shown in Fig. 7.
[0079] The memory device 711 is a computer-readable medium that stores a computer software product. The memory device 711 can be part of the processor 705, for example, a Random Access Memory (RAM) device, a Read Only Memory (ROM) device, a FLASH memory device, which is integrated with the processor 705.
[0080] The memory device 711 can also be a device separate from the processor, for example, a floppy disk, a hard disk, an optical disk, a USB stick. The memory device 711 can be directly connected to the bus 707 by inserting the memory device 711 into an appropriate interface provided by the bus 707. In another example, the memory device 711 is located remotely and connected to the bus 707 through a communication network (not shown in Fig. 7). The computer software product stored in the memory device 711 is downloaded, through the communication network, to the processor 705 for execution.
[0081] The computer software product includes machine-readable instructions. The processor 705 of the mobile device 700 loads the computer software product from the memory device 711 and reads the machine-readable instructions included in the computer software product. When these machine-readable instructions are executed by the processor 705, these instructions cause the processor 705 to perform one or more method steps described below.
[0082] Fig. 8 illustrates an example method 800 for creating an output frame for a character in a video in accordance with the present disclosure. The method 800 is used to create an output frame based on a first reference facial image and a second reference facial image of the character. The first reference facial image of the character is in a first key frame of the video, and the second reference facial image of the character is in a second key frame of the video. The output frame can be a frame between the first key frame and the second key frame along the storyline, or outside the first key frame and the second key frame along the storyline. The method 800 is performed by the processor 705 of the mobile device 700.
[0083] The camera 701 of the mobile device 700 captures a first photographic facial image and a second photographic facial image of the user, and the processor 705 is configured to determine 810 an estimated reference facial feature of the character based on the first reference facial image and the second reference facial image of the character;
determine 820 an estimated user facial feature of the user based on the first photographic facial image and the second photographic facial image of the user;
determine 830 a transformation that transforms the estimated reference facial feature of the character into an approximation or representation of the estimated user facial feature of the user;
modify 840, based on the transformation, a third reference facial image of the character associated with the estimated reference facial feature of the character; and
create 850 the output frame based on the modified third reference facial image.
[0084] The processor 705 is further configured to present the output frame on the display 703.
[0085] As can be seen from the above, the method 800 determines the estimated reference facial feature of the character and the estimated user facial feature of the user, and determines the transformation based on the estimated reference facial feature of the character and the estimated user facial feature of the user. This dramatically reduces the time required to create the output frame. A detailed process for creating the output frame is described below.
[0086] As shown in Fig. 7, two time instants "A", "B" along the storyline are selected by the user or the director as the facial expressions of the character at these time instants are distinctive or representative. The facial expressions of the character at the time instants "A", "B" are identified as "Surprise" and "Grave", respectively. A facial image of the character is extracted from the first key frame at time instant "A", and is referred to as a first reference facial image. A facial image of the character is extracted from the second key frame at time instant "B", and is referred to as a second reference facial image. Both reference facial images of the character are shown in the graphic user interface for the user's reference.
[0087] The processor 705 determines 810 an estimated reference facial feature of the character based on the first reference facial image and the second reference facial image of the character. Particularly, the processor 705 extracts a reference facial feature of the character
from the first reference facial image of the character, referred to as a first reference facial feature. The processor 705 also extracts a reference facial feature of the character from the second reference facial image of the character, referred to as a second reference facial feature.
[0088] The processor 705 further determines a first distance between the first reference facial feature of the first reference facial image and the second reference facial feature of the second reference facial image. The processor 705 determines the estimated reference facial feature of the character based on the first distance, the first reference facial feature and the second reference facial feature. As described above, the first reference facial feature includes a first set of control points, and the second reference facial feature includes a second set of control points. As a result, the first distance is indicative of a distance between the first set of control points and the second set of control points.
[0089] If the output frame is between the first key frame and the second key frame, for example, time instant "C" between time instants "A", "B", the processor 705 determines the estimated reference facial feature of the character by performing an interpolation operation based on the first reference facial feature and the second reference facial feature with respect to the first distance.
[0090] On the other hand, if the output frame is outside the first key frame and the second key frame, the processor 705 determines the estimated reference facial feature of the character by performing an extrapolation operation based on the first reference facial feature and the second reference facial feature with respect to the first distance.
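A minimal sketch of this estimation step is given below, under the assumption that the interpolation is driven by a normalised blend parameter t derived from where the output frame lies relative to the two key frames: values of t between 0 and 1 interpolate, values outside that range extrapolate. The names and the linear blend are illustrative; the disclosure does not prescribe a particular interpolation formula.

```python
import numpy as np

def feature_distance(points_a, points_b):
    """Mean Euclidean distance between corresponding control points, shape (N, 2)."""
    return float(np.mean(np.linalg.norm(points_a - points_b, axis=1)))

def estimate_feature(points_a, points_b, t):
    """Blend two control-point sets; 0 <= t <= 1 interpolates, otherwise extrapolates."""
    return (1.0 - t) * points_a + t * points_b

# usage (illustrative): output frame at time c relative to key frames at times a and b
# t = (c - a) / (b - a)
# estimated_reference = estimate_feature(ref_points_a, ref_points_b, t)
# estimated_user      = estimate_feature(user_points_a, user_points_b, t)
```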
[0091] The user recognises the first facial expression identifier "Surprise" and/or observes the first reference facial image of the character (i.e., the facial image of the character at time instant "A"), and produces a facial expression that corresponds to the first facial expression identifier "Surprise". If the user or the director is satisfied with the facial expression of the user, a facial image of the user is captured by the camera 701, referred to as a first
photographic facial image.
[0092] Similarly, the user recognises the second facial expression identifier "Grave" and/or observes the second reference facial image of the character (i.e., the facial image of the character at time instant "B"), and produces a facial expression that corresponds to the second
facial expression identifier "Grave". If the user or the director is satisfied with the facial expression of the user, a facial image of the user is captured by the camera 701, referred to as a second photographic facial image.
[0093] In another example, instead of taking photos of the user, the processor 705 may retrieve photographic facial images of the user that have been stored in the memory device 711 and associate the photographic facial images with the corresponding facial expression identifiers.
[0094] Both the first photographic facial image and the second photographic facial image of the user are transmitted from the camera 701 to the processor 705.
[0095] The processor 705 determines 820 an estimated user facial feature of a user based on the first photographic facial image and the second photographic facial image of the user. Particularly, the processor 705 extracts a facial feature from the first photographic facial image of the user, referred to as a user first facial feature. The processor 705 also extracts a facial feature from the second photographic facial image of the user, referred to as a user second facial feature.
[0096] The processor 705 further determines a second distance between the user first facial feature and the user second facial feature. The processor 705 determines the estimated user facial feature of the user based on the second distance, the user first facial feature and the user second facial feature. As described above, the user first facial feature includes a third set of control points, and the user second facial feature includes a fourth set of control points. As a result, the second distance is indicative of a distance between the third set of control points and the fourth set of control points.
[0097] If the output frame is between the first key frame and the second key frame, for example, time instant "C" between time instants "A", "B", the processor 705 determines the estimated user facial feature of the user by performing an interpolation operation based on the user first facial feature and the user second facial feature with respect to the second distance.
[0098] On the other hand, if the output frame is outside the first key frame and the second key frame, the processor 705 determines the estimated user facial feature of the user by
performing an extrapolation operation based on the user first facial feature and the user second facial feature with respect to the second distance.
[0099] Fig. 9 illustrates the interpolation process 900 in more detail. In this example, the storyline 901 is annotated with facial expression identifiers and Fig. 9 also shows the corresponding control points of the facial features. The y-axis 902 indicates the y-position of the central control point 903 of the lips. In this example, the storyline evolves from a smile 911 to a frown 912, back to a smile 913 and finally into a frown 914 again. Correspondingly, the control point 903 starts from a low position 921, moves into a high position 922, back to a low position 923 and finally into a high position 924. For the frames between the smile 911 and the frown 912, processor 705 may interpolate the y-position of control point 903 using a linear interpolation method. In some examples, however, this may lead to an unnatural appearance at the actual transition points, such as a sharp corner at point 922. Therefore, processor 705 may generate a spline interpolation 904 using the y-coordinates of the points 921, 922, 923 and 924 as knots. This results in a smooth transition between the facial expressions. While control point 903 moves only in the y-direction in this example, control points are generally allowed to move in both dimensions. Therefore, the spline curve 904 may be a two-dimensional spline approximation of the knots to allow the processor 705 to interpolate both x- and y-coordinates.
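A minimal sketch of such a key-frame spline, assuming SciPy is available, is shown below. The key times and coordinates are illustrative placeholders; a cubic spline through the knots gives the smooth transition described above and handles both coordinates at once.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# illustrative knots: (x, y) of the central lip control point at four key frames
key_times = np.array([0.0, 1.0, 2.0, 3.0])       # smile, frown, smile, frown
key_xy = np.array([[50.0, 120.0],
                   [50.0,  90.0],
                   [50.0, 120.0],
                   [50.0,  90.0]])

spline = CubicSpline(key_times, key_xy)           # one smooth curve per coordinate
frame_times = np.linspace(0.0, 3.0, 91)           # e.g. 30 frames per second for 3 seconds
lip_positions = spline(frame_times)               # (91, 2) interpolated (x, y) positions
```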
[0100] The processor 705 determines 830 a transformation that transforms the estimated reference facial feature of the character into an approximation or representation of the estimated user facial feature of the user. As described above, the transformation can be a transformation matrix that transforms the control points of the estimated reference facial feature of the character into an approximation or representation of the control points of the estimated user facial feature of the user.
[0101] If the output frame is between the first key frame and the second key frame, for example, time instant "C" between time instants "A", "B", the processor 705 determines a further reference facial image of the character by performing an interpolation operation based on the first reference facial image and the second reference facial image of the character, referred to as a third reference facial image. The third reference facial image is associated with the estimated reference facial feature of the character.
[0102] On the other hand, if the output frame is outside the first key frame and the second key frame, the processor 705 determines the third reference facial image of the character by performing an extrapolation operation based on the first reference facial image and the second reference facial image.
[0103] The processor 705 modifies 840, based on the transformation, the third reference facial image of the character by, for example, changing the positions of pixels in the third reference facial image. Since the estimated reference facial feature of the character may represent a spline curve, referred to as a first spline curve, and the estimated user facial feature of the user may represent another spline curve, referred to as a second spline curve, modifying the third reference facial image of the character also results in transforming the first spline curve into an approximation or representation of the second spline curve.
[0104] The processor 705 repeats the above steps for each of the characters in the first key frame and the second key frame, and creates 850 the output frame for the characters in the video based on the modified third reference facial images for those characters. For example, the processor 705 may create the output frame by combining the modified third reference facial images into the output frame.
[0105] Once the output frame is created, processor 705 may apply a perspective transformation on the output frame. Since the output movie is ultimately displayed on a 2D device, processor 705 applies the transformation to the 2D coordinates of the control points to create the impression of a 3D rotation. Fig. 10(a) shows a transformation of the 2D coordinates of the control points to create the impression of a 3D rotation of the character's face. The degree of rotation may be known from the storyline, and therefore processor 705 calculates a transformation that creates the corresponding impression. This transformation may also be integrated into the previous transformation applied to the reference image. Processor 705 may also create the impression of perspective by down-scaling points that are further away from the virtual camera.
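One way to create this impression, sketched below purely as an illustration, is to rotate the 2D control points about a vertical axis through an assumed pivot, give them an assumed depth, and project them back with a pinhole-style divide so that points rotating away from the virtual camera shrink. The pivot, depth and focal length values are assumptions, not parameters from the present disclosure.

```python
import numpy as np

def fake_3d_rotation(points_2d, pivot, yaw_rad, depth=400.0, focal=400.0):
    """Rotate 2D control points about a vertical axis through `pivot` and re-project.

    points_2d: (N, 2) control points; pivot: (x, y), e.g. at the top of the neck.
    With focal == depth, an unrotated face keeps its original size.
    """
    x = points_2d[:, 0] - pivot[0]
    y = points_2d[:, 1] - pivot[1]
    # place the face in the z = 0 plane and rotate about the y (vertical) axis
    x_rot = x * np.cos(yaw_rad)
    z_rot = -x * np.sin(yaw_rad) + depth
    scale = focal / z_rot                      # points further from the camera are down-scaled
    return np.stack([x_rot * scale + pivot[0],
                     y * scale + pivot[1]], axis=1)
```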
[0106] Fig. 10(b) illustrates a simplified 3D model of a character's head. This 3D model may be created by a designer or developer once for each character. Based on the 3D model, processor 705 can calculate which control points are not visible because they are occluded by other parts of the head. In the example of Fig. 10(b), the right eye is occluded and not visible. Applying this calculation to the output image, to hide the parts of the image that are not visible according to the 3D model, increases the realistic impression of the created video. The calculation may be based on an assumed pivot point, which may be the top of the neck. The processor 705 can then perform the transformation based on rotation and tilt around the pivot point.
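A very small sketch of such a visibility test is given below. It assumes the simplified 3D head model provides an outward surface normal for each control point and that the virtual camera looks down the -z axis; both are illustrative assumptions rather than details from the disclosure.

```python
import numpy as np

def visible_control_points(model_normals, yaw_rad):
    """Boolean mask of control points still facing the camera after a yaw rotation.

    model_normals: (N, 3) outward unit normals from the simplified 3D head model.
    The camera is assumed to look down the -z axis, so a point is visible while its
    rotated normal has a positive z component.
    """
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    rot_y = np.array([[c, 0.0, s],
                      [0.0, 1.0, 0.0],
                      [-s, 0.0, c]])            # rotation about the vertical axis (the pivot)
    rotated = model_normals @ rot_y.T
    return rotated[:, 2] > 0.0

# usage (illustrative): hide occluded control points before rendering
# mask = visible_control_points(head_normals, yaw_rad=np.radians(40))
```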
[0107] Both processes in Figs. 10(a) and 10(b) may be performed on the control points only. The reference image can then be transformed as described above, which creates the impression of a 3D rotation of the reference image at the same time as making the reference image similar to the user's face geometry.
[0108] It should be understood that the example methods of the present disclosure might be implemented using a variety of technologies. For example, the methods described herein may be implemented by a series of computer executable instructions residing on a suitable computer readable medium. Suitable computer readable media may include volatile (e.g. RAM) and/or non-volatile (e.g. ROM, disk) memory, carrier waves and transmission media. Exemplary carrier waves may take the form of electrical, electromagnetic or optical signals conveying digital data streams along a local network or a publicly accessible network such as the Internet.
[0109] It should also be understood that, unless specifically stated otherwise, throughout the description, discussions utilizing terms such as "determining", "obtaining", "receiving", "sending", "authenticating" or the like refer to the actions and processes of a computer system, or similar electronic computing device, that processes and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
Claims
1. A method for creating a video on a mobile device that comprises a camera, the method comprising:
creating a graphic user interface on the mobile device to capture by the camera multiple photographic facial images of a user for respective multiple facial expressions of a character in the video;
using the multiple photographic facial images to modify stored character images by matching facial features of the character to facial features of the user for the multiple facial expressions of the character in the video; and
creating the video based on the modified character images.
2. A method for creating a video including a character on a mobile device that comprises a camera, the method comprising:
(a) creating a graphic user interface on the mobile device to capture by the camera multiple photographic facial images of a user for respective multiple facial expressions;
(b) extracting a user facial feature from each of the multiple photographic facial images;
(c) storing associated with a respective facial expression identifier the user facial feature from each of the multiple photographic facial images;
(d) selecting one of the multiple user facial features based on a first facial expression identifier associated with a first frame of the video;
(e) determining a transformation that transforms a reference facial feature associated with the first facial expression identifier into an approximation or representation of the selected one of the multiple user facial features;
(f) modifying, based on the transformation, a reference facial image of the character associated with the first facial expression identifier and the reference facial feature; and
(g) creating the first frame of the video based on the modified reference facial image.
3. The method of claim 2, further comprising:
for a second facial expression identifier associated with a second frame of the video, repeating steps (d) to (g) to create the second frame of the video.
4. The method of claims 2 or 3, wherein the user facial feature comprises a set of control points.
5. The method of any one of preceding claims, wherein the graphic user interface comprises the reference facial image of the character.
6. The method of any one of preceding claims, wherein the graphic user interface comprises a live view of each of the multiple photographic facial images.
7. The method of claim 6, wherein the live view is positioned next to the camera.
8. The method of claim 7, wherein the live view is positioned next to the reference facial image of the character.
9. The method of claim 6, further comprising superimposing the live view on the reference facial image of the character.
10. The method of any one of preceding claims, further comprising selecting the character from a plurality of characters in the video.
11. The method of any one of preceding claims, further comprising recording audio data associated with the user facial feature.
12. A computer software product, including machine-readable instructions, when executed by a processor of a mobile device, causes the processor to perform any one of preceding methods.
13. A mobile device for creating a video including a character, the mobile device comprising
a camera;
a display; and
a processor, the processor configured to
(a) create a graphic user interface on the display of the mobile device to capture by the camera multiple photographic facial images of a user for respective multiple facial expressions;
(b) extract a user facial feature from each of the multiple photographic facial images;
(c) store associated with a respective facial expression identifier the user facial feature from each of the multiple photographic facial images;
(d) select one of the multiple user facial features based on a first facial expression identifier associated with a first frame of the video;
(e) determine a transformation that transforms a reference facial feature associated with the first facial expression identifier into an approximation or representation of the selected one of the multiple user facial features;
(f) modify, based on the transformation, a reference facial image of the character associated with the first facial expression identifier and the reference facial feature;
(g) create the first frame of the video based on the modified reference facial image; and
(h) present, on the display, the first frame of the video in the graphic user interface.
14. A method for creating an output frame for a character in a video, the method comprising:
determining an estimated reference facial feature of the character based on a first reference facial image and a second reference facial image of the character;
determining an estimated user facial feature of a user based on a first photographic facial image and a second photographic facial image of the user;
determining a transformation that transforms the estimated reference facial feature of the character into an approximation or representation of the estimated user facial feature of the user;
modifying, based on the transformation, a third reference facial image of the character associated with the estimated reference facial feature of the character; and
creating the output frame for the character in the video based on the modified third reference facial image.
15. The method of claim 14, wherein determining the estimated reference facial feature of the character comprises:
determining a first distance between a first reference facial feature of the first reference facial image of the character and a second reference facial feature of the second reference facial image of the character; and
determining the estimated reference facial feature of the character based on the first distance, the first reference facial feature and the second reference facial feature.
16. The method of claim 15, wherein determining the estimated reference facial feature of the character comprises performing an interpolation operation based on the first reference facial feature and the second reference facial feature with respect to the first distance.
17. The method of claim 15 or 16, wherein determining the estimated reference facial feature of the character comprises performing an extrapolation operation based on the first reference facial feature and the second reference facial feature with respect to the first distance.
18. The method of any one of claims 15 to 17, wherein the first reference facial feature includes a first set of control points, and the second reference facial feature includes a second set of control points, and the first distance is indicative of a distance between the first set of control points and the second set of control points.
19. The method of any one of the claims 15 to 18, wherein determining the estimated user facial feature of the user comprises:
determining a second distance between a user first facial feature of the first photographic facial image of the user and a user second facial feature of the second photographic facial image of the user; and
determining the estimated user facial feature based on the second distance, the user first facial feature and the user second facial feature.
20. The method of claim 19, wherein determining the estimated user facial feature of the user comprises performing an interpolation operation based on the user first facial feature and the user second facial feature with respect to the second distance.
21. The method of claim 19 or 20, wherein determining the estimated user facial feature of the user comprises performing an extrapolation operation based on the user first facial feature and the user second facial feature with respect to the second distance.
22. The method of any one of claims 19 to 21, wherein the user first facial feature includes a third set of control points, and the user second facial feature includes a fourth set of control points, and the second distance is indicative of a distance between the third set of control points and the fourth set of control points.
23. The method of claim 22, wherein modifying the third reference facial image of the character comprises transforming a first spline curve represented by the estimated reference facial feature of the character into an approximation or representation of a second spline curve represented by the estimated user facial feature of the user.
24. A computer software product, including machine-readable instructions, when executed by a processor of a mobile device, causes the processor to perform any one of preceding methods.
25. A mobile device for creating an output frame for a character in a video, the mobile device comprising
a camera to capture a first photographic facial image and a second photographic facial image of the user;
a display; and
a processor, the processor configured to
determine an estimated reference facial feature of the character based on a first reference facial image and a second reference facial image of the character;
determine an estimated user facial feature of the user based on the first photographic facial image and the second photographic facial image of the user;
determine a transformation that transforms the estimated reference facial feature of the character into an approximation or representation of the estimated user facial feature of the user;
modify, based on the transformation, a third reference facial image of the character associated with the estimated reference facial feature of the character;
create the output frame based on the modified third reference facial image; and
present the output frame on the display.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/320,966 US11003898B2 (en) | 2016-07-25 | 2017-07-25 | Creating videos with facial expressions |
US17/187,604 US20210264139A1 (en) | 2016-07-25 | 2021-02-26 | Creating videos with facial expressions |
Applications Claiming Priority (8)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201662366375P | 2016-07-25 | 2016-07-25 | |
US201662366406P | 2016-07-25 | 2016-07-25 | |
AU2016902921 | 2016-07-25 | ||
AU2016902919A AU2016902919A0 (en) | 2016-07-25 | Creating videos with facial expressions | |
US62/366,406 | 2016-07-25 | ||
AU2016902919 | 2016-07-25 | ||
US62/366,375 | 2016-07-25 | ||
AU2016902921A AU2016902921A0 (en) | 2016-07-25 | Modifying facial expressions in videos |
Related Child Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/320,966 A-371-Of-International US11003898B2 (en) | 2016-07-25 | 2017-07-25 | Creating videos with facial expressions |
US17/187,604 Continuation US20210264139A1 (en) | 2016-07-25 | 2021-02-26 | Creating videos with facial expressions |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018018076A1 true WO2018018076A1 (en) | 2018-02-01 |
Family
ID=61015160
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/AU2017/050763 WO2018018076A1 (en) | 2016-07-25 | 2017-07-25 | Creating videos with facial expressions |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2018018076A1 (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130147788A1 (en) * | 2011-12-12 | 2013-06-13 | Thibaut WEISE | Method for facial animation |
US20130215113A1 (en) * | 2012-02-21 | 2013-08-22 | Mixamo, Inc. | Systems and methods for animating the faces of 3d characters using images of human faces |
US20160275341A1 (en) * | 2015-03-18 | 2016-09-22 | Adobe Systems Incorporated | Facial Expression Capture for Character Animation |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111476871A (en) * | 2020-04-02 | 2020-07-31 | 百度在线网络技术(北京)有限公司 | Method and apparatus for generating video |
US11670015B2 (en) | 2020-04-02 | 2023-06-06 | Baidu Online Network Technology (Beijing) Co., Ltd. | Method and apparatus for generating video |
CN111476871B (en) * | 2020-04-02 | 2023-10-03 | 百度在线网络技术(北京)有限公司 | Method and device for generating video |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 17833091; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 17833091; Country of ref document: EP; Kind code of ref document: A1 |