
GB2342026A - Graphics and image processing system - Google Patents

Graphics and image processing system

Info

Publication number
GB2342026A
GB2342026A (application GB9820633A)
Authority
GB
United Kingdom
Prior art keywords
image
sequence
generate
frames
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB9820633A
Other versions
GB2342026B (en)
GB9820633D0 (en)
Inventor
Andrew Louis Charles Berend
Mark Jonathan Williams
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
LUVVY Ltd
Anthropics Technology Ltd
Original Assignee
LUVVY Ltd
Anthropics Technology Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by LUVVY Ltd, Anthropics Technology Ltd
Priority to GB9820633A (GB2342026B)
Publication of GB9820633D0
Priority to AU61041/99A
Priority to EP99947661A
Priority to JP2000571406A
Priority to PCT/GB1999/003161 (WO2000017820A1)
Publication of GB2342026A
Application granted
Publication of GB2342026B
Anticipated expiration
Expired - Fee Related

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00: Animation
    • G06T13/20: 3D [Three Dimensional] animation
    • G06T13/40: 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00: Indexing scheme for image generation or computer graphics
    • G06T2210/44: Morphing

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Abstract

An image and graphics processing system is provided which can automatically generate an animated sequence f^T of images of a deformable object by combining a source video sequence f^S as a template with a target image as a modifier. The system may be used to simulate hand-drawn and computer-generated animations of characters.

Description

GRAPHICS AND IMAGE PROCESSING SYSTEM

The present invention relates
to a method of and apparatus for graphics and image processing. The invention has particular, although not exclusive, relevance to the image processing of a sequence of source images to generate a sequence of target images. The invention has applications in computer animation and in moving pictures.
Realistic facial synthesis is a key area of research in computer graphics. The applications of facial animation include computer games, video conferencing and character animation for films and advertising. However, realistic facial animation is difficult to achieve because the human face is an extremely complex geometric form.
The paper entitled "Synthesising realistic facial expressions from photographs" by Pighin et al published in Computer Graphics Proceedings Annual Conference Series, 1998, describes one technique which is being investigated for generating synthetic characters. The technique extracts parts of facial expressions from input images and combines these with the original image to generate different facial expressions. The system then uses a morphing technique to animate a change between different facial expressions. The generation of an 2 animated sequence therefore involves the steps of identifying a required sequence of facial expressions (synthetically generating any if necessary) and then morphing between each expression to generate the animated sequence. This technique is therefore relatively complex and requires significant operator input to control the synthetic generation of new facial expressions.
One embodiment of the present invention aims to provide an alternative technique for generating an animated video sequence. The technique can be used to generate realistic facial animations or to generate simulations of hand drawn facial animations.
According to one aspect, the present invention provides an image processing apparatus comprising: means for receiving a source sequence of frames showing a first object; means for receiving a target image showing a second object; means for comparing the first object with the second object to generate a difference signal; and means for modifying each frame of the sequence of frames using said difference signal to generate a target sequence of frames showing the second object.
This aspect of the invention can be used to generate 2D animations of objects. It may be used, for example, to animate a hand-drawn character using a video clip of, for example, a person acting out a scene. The technique can also be used to generate animations of other objects, such as other parts of the body and animals.
A second aspect of the present invention provides a graphics processing apparatus comprising: means for receiving a source sequence of three-dimensional models of a first object; means for receiving a target model of a second object; means for comparing a model of the first object with the model of the second object to generate a difference signal; and means for modifying each model in the sequence of models for the first object using said difference signal to generate a target sequence of models of the second object.
According to this aspect, three-dimensional models of, for example, a human head can be modelled and animated in a similar manner to the way in which the two-dimensional images were animated.
The present invention also provides methods corresponding to the apparatus described above.
Exemplary embodiments of the present invention will now be described with reference to the accompanying drawings in which:
Figure 1 is a schematic block diagram illustrating a general arrangement of a computer system which can be programmed to implement the present invention;
Figure 2a is a schematic illustration of a sequence of image frames which together form a source video sequence;
Figure 2b is a schematic illustration of a target image frame which is to be used to modify the sequence of image frames shown in Figure 2a;
Figure 3 is a block diagram of an appearance model generation unit which receives some of the image frames of the source video sequence illustrated in Figure 2a together with the target image frame illustrated in Figure 2b, to generate an appearance model;
Figure 4 is a flow chart illustrating the processing steps employed by the appearance model generation unit shown in Figure 3 to generate the appearance model;
Figure 5 is a flow diagram illustrating the steps involved in generating a shape model for the training images;
Figure 6 shows a head having a number of landmark points placed over it;
Figure 7 illustrates the processing steps involved in generating a grey level model from the training images;
Figure 8 is a flow chart illustrating the processing steps required to generate the appearance model using the shape and grey level models;
Figure 9 shows the head shown in Figure 6 with a mesh of triangles placed over the head;
Figure 10 is a plot showing a number of landmark points surrounding a point;
Figure 11 is a block diagram of a target video sequence generation unit which generates a target video sequence from a source video sequence using a set of stored difference parameters;
Figure 12 is a flow chart illustrating the processing steps involved in generating the difference parameters;
Figure 13 is a flow diagram illustrating the processing steps which the target video sequence generation unit shown in Figure 11 performs to generate the target video sequence;
Figure 14a shows three frames of an example source video sequence which is applied to the target video sequence generation unit shown in Figure 11;
Figure 14b shows an example target image used to generate a set of difference parameters used by the target video sequence generation unit shown in Figure 11;
Figure 14c shows a corresponding three frames from a target video sequence generated by the target video sequence generation unit shown in Figure 11 from the three frames of the source video sequence shown in Figure 14a using the difference parameters generated using the target image shown in Figure 14b;
Figure 14d shows a second example of a target image used to generate a set of difference parameters for use by the target video sequence generation unit shown in Figure 11; and
Figure 14e shows the corresponding three frames from the target video sequence generated by the target video sequence generation unit shown in Figure 11 when the three frames of the source video sequence shown in Figure 14a are input to the target video sequence generation unit together with the difference parameters calculated using the target image shown in Figure 14d.
Figure 1 is a block diagram showing the general arrangement of an image processing apparatus according to an embodiment of the present invention. The apparatus comprises a computer 1 having a central processing unit (CPU) 3 connected to a memory 5 which is operable to store a program defining the sequence of operations of the CPU 3 and to store object and image data used in calculation by the CPU 3.
Coupled to an input port of the CPU 3 there is an input device 7, which in this embodiment comprises a keyboard and a computer mouse. Instead of, or in addition to the computer mouse, another position sensitive input device (pointing device) such as a digitiser with associated stylus may be used.
A frame buffer 9 is also provided and is coupled to the CPU 3 and comprises a memory unit (not shown) arranged to store image data relating to at least one image, for example by providing one (or several) memory location(s) per pixel of the image. The value stored in the frame buffer for each pixel defines the colour or intensity of that pixel in the image. In this embodiment, the images are represented by 2-D arrays of pixels, and are conveniently described in terms of cartesian coordinates, so that the position of a given pixel can be described by a pair of x-y coordinates. This representation is convenient since the image is displayed on a raster scan display 11. Therefore, the x-coordinate maps to the distance along the line of the display and the y-coordinate maps to the number of the line. The frame buffer 9 has sufficient memory capacity to store at least one image. For example, for an image having a resolution of 1000 x 1000 pixels, the frame buffer 9 includes 10^6 pixel locations, each addressable directly or indirectly in terms of pixel coordinates x,y.
In this embodiment, a video tape recorder (VTR) 13 is also coupled to the frame buffer 9, for recording the image or sequence of images displayed on the display 11. A mass storage device 15, such as a hard disc drive, having a high data storage capacity is also provided and coupled to the memory 5. Also coupled to the memory 5 is a floppy disc drive 17 which is operable to accept removable data storage media, such as a floppy disc 19, and to transfer data stored thereon to the memory 5. The memory 5 is also coupled to a printer 21 so that generated images can be output in paper form, an image input device 23 such as a scanner or video camera, and a modem 25 so that input images and output images can be received from and transmitted to remote computer terminals via a data network, such as the internet.
The CPU 3, memory 5, frame buffer 9, display unit 11 and mass storage device 15 may be commercially available as a complete system, for example as an IBM compatible personal computer (PC) or a workstation such as the SPARCstation available from Sun Microsystems.
A number of embodiments of the invention can be supplied commercially in the form of programs stored on a floppy disc 19 or other medium, or as signals transmitted over a data link, such as the internet, so that the receiving hardware becomes reconfigured into an apparatus embodying the present invention.
In this embodiment, the computer 1 is programmed to receive a source video sequence input by the image input device 23 and to generate a target video sequence from the source video sequence using a target image. In this embodiment, the source video sequence is a video clip of an actor acting out a scene, the target image is an image of a second actor and the resulting target video sequence is a video sequence showing the second actor acting out the scene. The way in which this is achieved in this embodiment will now be described with reference to Figures 2 to 11.
Figure 2a schematically illustrates the sequence of image frames (f^S) making up the source video sequence. In this embodiment, there are 180 source image frames f^S_0 to f^S_179 making up the source video sequence. In this embodiment, the frames are black and white images having 500 x 500 pixels, whose value indicates the luminance of the image at that point. Figure 2b schematically illustrates the target image f^T which is used to modify the source video sequence. In this embodiment, the target image is also a black and white image having 500 x 500 pixels, describing the luminance over the image.
In this embodiment, an appearance model is generated for modelling the variations in the shape and grey level (luminance) appearance of the two actors' heads. In this embodiment, the appearance of the head and shoulders of the two actors is modelled. However, for simplicity, in the remaining description reference will only be made to the heads of the two actors. This appearance model is then used to generate a set of difference parameters which describe the main differences between the heads of the two actors. These difference parameters are then used to modify the source video sequence so that the actor in the video sequence looks like the second actor. The modelling technique employed in the present embodiment is similar to the modelling technique described in the paper "Active Shape Models - Their Training and Application" by T.F. Cootes et al, Computer Vision and Image Understanding, Vol. 61, No. 1, January 1995, pp. 38-59, the contents of which are incorporated herein by reference.
TRAINING

In this embodiment, the appearance model is generated from a set of training images comprising a selection of frames from the source video sequence and the target image frame. In order for the model to be able to regenerate any head in the video sequence, the training images must include those frames which have the greatest variation in facial expression and 3D pose. In this embodiment, seven frames (f^S_3, f^S_26, f^S_34, f^S_47, f^S_98, f^S_121 and f^S_162) are selected from the source video sequence as being representative of the various different facial expressions and poses of the first actor's face in the video sequence. As shown in Figure 3, these training images are input to an appearance model generation unit 31 which processes the training images in accordance with user input from the user interface 33, to generate the appearance model 35. In this embodiment, the user interface 33 comprises the display 11 and the input device 7 shown in Figure 1. The way in which the appearance model generation unit 31 generates the appearance model 35 will now be described in more detail with reference to Figures 4 to 8.
Figure 4 is a flow diagram illustrating the general processing steps performed by the appearance model generation unit 31 to generate the appearance model 35.
As shown, there are three general steps S1, S3 and S5.
In step S1, a shape model is generated which models the variability of the head shapes within the training images. In step S3, a grey level model is generated which models the variability of the grey level of the heads in the training images. Finally, in step S5, the shape model and the grey level model are used to generate an appearance model which collectively models the way in which both the shape and the grey level vary within the heads in the training images.
Figure 5 is a flow diagram illustrating the steps involved in generating the shape model in step S1 of Figure 4. As shown, in step S11, landmark points are placed on the heads in the training images (the selected frames from the video sequence and the target image) manually by the user via the user interface 33. In particular, in step S11, each training image is displayed in turn on the display 11 and the user places the landmark points over the head. In this embodiment, 86 landmark points are placed over each head in order to delineate the main features in the head, e.g. the position of the hair line, neck, eyes, nose, ears and mouth. In order to be able to compare training faces, each landmark point is associated with the same point on each face. For example, landmark point LP8 is associated with the bottom of the nose and landmark point LP6 is associated with the left-hand corner of the mouth.
Figure 6 shows an example of one of the training images with the landmark points positioned over the head, and the table below identifies each landmark point with its associated position on the head.

Landmark point | Associated position
LP1 | Left corner of left eye
LP2 | Right corner of right eye
LP3 | Chin, bottom
LP4 | Right corner of left eye
LP5 | Left corner of right eye
LP6 | Mouth, left
LP7 | Mouth, right
LP8 | Nose, bottom
LP9 | Nose, between eyes
LP10 | Upper lip, top
LP11 | Lower lip, bottom
LP12 | Neck, left, top
LP13 | Neck, right, top
LP14 | Face edge left, level with nose
LP15 | Face edge
LP16 | Face edge right, level with nose
LP17 | Face edge
LP18 | Top of head
LP19 | Hair edge
LP20 | Hair edge
LP21 | Hair edge
LP22 | Hair edge
LP23 | Hair edge
LP24 | Hair edge
LP25 | Hair edge
LP26 | Hair edge
LP27 | Hair edge
LP28 | Hair edge
LP29 | Bottom, far left
LP30 | Bottom, far right
LP31 | Shoulder
LP32 | Shoulder
LP33 | Bottom, left
LP34 | Bottom, middle
LP35 | Bottom, right
LP36 | Left forehead
LP37 | Right forehead
LP38 | Centre, between eyebrows
LP39 | Nose, left
LP40 | Nose, right
LP41 | Nose edge, left
LP42 | Nose edge, right
LP43 | Eye, top
LP44 | Eye, bottom
LP45 | Eye, top
LP46 | Eye, bottom
LP47 | Eyebrow, lower
LP48 | Eyebrow, upper
LP49 | Cheek, left
LP50 | Cheek, right
LP51 | Eyebrow, lower
LP52 | Eyebrow, upper
LP53 | Eyebrow, lower
LP54 | Eyebrow, upper
LP55 | Eyebrow, lower
LP56 | Eyebrow, upper
LP57 | Eyebrow, lower
LP58 | Eyebrow, upper
LP59 | Eyebrow, lower
LP60 | Eyebrow, upper
LP61 | Eyebrow, lower
LP62 | Lower lip, top
LP63 | Centre forehead
LP64 | Upper lip, top left
LP65 | Upper lip, top right
LP66 | Lower lip, bottom right
LP67 | Lower lip, bottom left
LP68 | Eye, top left
LP69 | Eye, top right
LP70 | Eye, bottom right
LP71 | Eye, bottom left
LP72 | Eye, top left
LP73 | Eye, top right
LP74 | Eye, bottom right
LP75 | Eye, bottom left
LP76 | Lower lip, top left
LP77 | Lower lip, top right
LP78 | Chin, left
LP79 | Chin, right
LP80 | Neck, left
LP81 | Neckline, left
LP82 | Neckline
LP83 | Neckline, right
LP84 | Neck, right
LP85 | Hair edge
LP86 | Hair edge

The result of this manual placement of the landmark points is a table of landmark points for each training image, which identifies the (x,y) coordinates of each landmark point within the image. The modelling technique used in this embodiment works by examining the statistics of these coordinates over the training set. In order to be able to compare equivalent points from different images, the heads must be aligned with respect to a common set of axes. This is achieved, in step S13, by iteratively rotating, scaling and translating the set of coordinates for each head so that they all approximately fill the same reference frame. The resulting set of coordinates for each head forms a shape vector (x) whose elements correspond to the coordinates of the landmark points within the reference frame. In other words, the shape and pose of each training head is represented by a vector (x) of the following form:
x = [x_0, y_0, x_1, y_1, x_2, y_2, ..., x_85, y_85]^T

This iterative alignment process is described in detail in the above paper by Cootes et al and will not be described in detail here. The shape model is then generated in step S15 by performing a principal component analysis (PCA) on the set of shape training vectors generated in step S13. An overview of this principal component analysis will now be given. (The reader is directed to the book by W.J. Krzanowski entitled "Principles of Multivariate Analysis - A User's Perspective", 1988 (Oxford Statistical Science Series), for a more detailed discussion of principal component analysis.) A principal component analysis of a set of training data finds all possible modes of variation within the training data. However, in this case, since the landmark points on the training heads do not move about independently, i.e. their positions are partially correlated, most of the variation in the training faces can be explained by just a few modes of variation. In this embodiment, the main mode of variation between the training faces is likely to be the difference between the shape of the first actor's head and the shape of the second actor's head. The other main modes of variation will describe the changes in shape and pose of the first actor's head within the selected source video frames. The principal component analysis of the shape training vectors x^i generates a shape model (matrix P_s) which relates each shape vector to a corresponding vector of shape parameters, by:
b_s^i = P_s (x^i - x̄)    (1)

where x^i is a shape vector, x̄ is the mean shape vector from the shape training vectors and b_s^i is a vector of shape parameters for the shape vector x^i. The matrix P_s describes the main modes of variation of the shape and pose within the training heads; and the vector of shape parameters (b_s^i) for a given input head has a parameter associated with each mode of variation whose value relates the shape of the given input head to the corresponding mode of variation. For example, if the heads in the training images include thin heads, normal width heads and broad heads, then one mode of variation which will be described by the shape model (P_s) will have an associated parameter within the vector of shape parameters (b_s) which affects, amongst other things, the width of an input head. In particular, this parameter might vary from -1 to +1, with parameter values near -1 being associated with thin heads, with parameter values around 0 being associated with normal width heads and with parameter values near +1 being associated with broad heads.
Therefore, the more modes of variation which are required to explain the variation within the training data, the more shape parameters are required within the shape parameter vector b_s^i. In this embodiment, for the particular training images used, 20 different modes of variation of the shape and pose must be modelled in order to explain 98% of the variation which is observed within the training heads. Therefore, using the shape model (P_s), the shape and pose of each head within the training images can be approximated by just 20 shape parameters. As those skilled in the art will appreciate, in other embodiments, more or less modes of variation may be required to achieve the same model accuracy. For example, if the first actor's head does not move or change shape significantly during the video sequence, then fewer modes of variation are likely to be required for the same accuracy.
In addition to being able to determine a set of shape parameters b_s^i for a given shape vector x^i, equation 1 can be solved with respect to x^i to give:
x^i = x̄ + P_s^T b_s^i    (2)

since P_s P_s^T equals the identity matrix. Therefore, by modifying the set of shape parameters (b_s), within suitable limits, new head shapes can be generated which will be similar to those in the training set.
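As an illustration of how such a shape model can be built in practice, the following is a minimal sketch that applies a principal component analysis to a matrix of aligned shape vectors and implements equations (1) and (2). It assumes the landmark coordinates have already been aligned as described in step S13; the function and variable names are illustrative and not taken from the patent.

```python
import numpy as np

def build_linear_model(vectors, variance_to_explain=0.98):
    """Generic PCA helper: returns the mean vector and a matrix P whose rows
    are the retained modes of variation (enough to explain e.g. 98% of the
    observed variance)."""
    X = np.asarray(vectors, dtype=float)
    mean = X.mean(axis=0)
    _, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    var = s ** 2
    n_modes = int(np.searchsorted(np.cumsum(var) / var.sum(),
                                  variance_to_explain)) + 1
    return mean, Vt[:n_modes]

def shape_to_params(x, x_bar, P_s):
    """Equation (1): b_s = P_s (x - x_bar)."""
    return P_s @ (x - x_bar)

def params_to_shape(b_s, x_bar, P_s):
    """Equation (2): x = x_bar + P_s^T b_s."""
    return x_bar + P_s.T @ b_s

# Usage (shape vectors are rows [x0, y0, ..., x85, y85] of aligned landmarks):
# x_bar, P_s = build_linear_model(aligned_shape_vectors)
```

The same PCA helper can be reused for the grey level and appearance models below, since each of them is simply a further principal component analysis of a different data matrix.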
Once the shape model has been generated, a similar model is generated to model the grey level within the training heads. Figure 7 illustrates the processing steps involved in generating this grey level model. As shown, in step S21, each training head is deformed to the mean shape. This is achieved by warping each head until the corresponding landmark points coincide with the mean landmark points (obtained from x̄) depicting the shape and pose of the mean head. Various triangulation techniques can be used to deform each training head to the mean shape. The preferred way, however, is based on a technique developed by Bookstein based on thin plate splines, as described in "Principal Warps: Thin-Plate Splines and the Decomposition of Deformations", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 11, No. 6, pp 567-585, 1989, the contents of which are incorporated herein by reference.
In step S23, a grey level vector (g^i) is determined for each shape-normalised training head, by sampling the grey level value at 10,656 evenly distributed points over the shape-normalised head. A principal component analysis of these grey level vectors is then performed in step S25. As with the principal component analysis of the shape training vectors, the principal component analysis of the grey level vectors generates a grey level model (matrix P_g) which relates each grey level vector to a corresponding vector of grey level parameters, by:
b_g^i = P_g (g^i - ḡ)    (3)

where g^i is a grey level vector, ḡ is the mean grey level vector from the grey level training vectors and b_g^i is a vector of grey level parameters for the grey level vector g^i. The matrix P_g describes the main modes of variation of the grey level within the shape-normalised training heads. In this embodiment, 30 different modes of variation of the grey level must be modelled in order to explain 98% of the variation which is observed within the shape-normalised training heads. Therefore, using the grey level model (P_g), the grey level of each shape-normalised training head can be approximated by just 30 grey level parameters.
In the same way that equation 1 was solved with respect to x^i, equation 3 can be solved with respect to g^i to give:
g^i = ḡ + P_g^T b_g^i    (4)

since P_g P_g^T equals the identity matrix. Therefore, by modifying the set of grey level parameters (b_g), within suitable limits, new shape-normalised grey level faces can be generated which will be similar to those in the training set.
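A sketch of the grey level side, assuming the training heads have already been warped to the mean shape and that the 10,656 sample positions are available as integer pixel coordinates in the shape-normalised reference frame; build_linear_model is the PCA helper from the previous sketch and the names here are again illustrative.

```python
import numpy as np

def sample_grey_vector(shape_normalised_image, sample_points):
    """Grey level vector g of a head that has been warped to the mean shape.

    sample_points: (10656, 2) integer array of (x, y) positions in the
    shape-normalised reference frame, the same for every training head.
    """
    xs, ys = sample_points[:, 0], sample_points[:, 1]
    return shape_normalised_image[ys, xs].astype(float)

# The grey level model is then a further PCA over the training vectors:
#   g_bar, P_g = build_linear_model([sample_grey_vector(img, pts)
#                                    for img in shape_normalised_heads])
# after which equation (3) is b_g = P_g @ (g - g_bar) and
# equation (4) is g = g_bar + P_g.T @ b_g.
```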
As mentioned above, the shape model and the grey level model are used to generate an appearance model which collectively models the way in which both the shape and the grey level vary within the heads of the training images. A combined appearance model is generated because there are correlations between the shape and grey level variations, which can be used to reduce the number of parameters required to describe the total variation within the training faces by performing a further principal component analysis on the shape and grey level parameters. Figure 8 shows the processing steps involved in generating the appearance model using the shape and grey level models previously determined. As shown, in step S31, shape parameters (b_s^i) and grey level parameters (b_g^i) are determined for each training head from equations 1 and 3 respectively. The resulting parameters are concatenated and a principal component analysis is performed on the concatenated vectors to determine the appearance model (matrix P_sg) such that:
c^i = P_sg b_sg^i    (5)

where c^i is a vector of appearance parameters controlling both the shape and grey levels and b_sg^i are the concatenated shape and grey level parameters. In this embodiment, 40 different modes of variation and hence 40 appearance parameters are necessary to model 98% of the variation found in the concatenated shape and grey level parameters. As those skilled in the art will appreciate, this represents a considerable compression over the 86 landmark points and the 10,656 grey level values originally used to describe each head.
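The combined model of equation (5) can be sketched in the same way. The concatenated parameter vectors are zero-mean by construction (each is a projection of mean-centred data), so no mean needs to be subtracted before the second principal component analysis; the names below are illustrative.

```python
import numpy as np

def build_appearance_model(b_s_all, b_g_all, variance_to_explain=0.98):
    """Further PCA over the concatenated shape and grey level parameters of
    the training heads, giving the appearance modes P_sg."""
    b_sg = np.hstack([b_s_all, b_g_all])         # (N, n_shape + n_grey)
    _, s, Vt = np.linalg.svd(b_sg, full_matrices=False)
    var = s ** 2
    n_modes = int(np.searchsorted(np.cumsum(var) / var.sum(),
                                  variance_to_explain)) + 1
    return Vt[:n_modes]                           # P_sg

def to_appearance_params(b_s, b_g, P_sg):
    """Equation (5): c = P_sg b_sg for one head."""
    return P_sg @ np.concatenate([b_s, b_g])
```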
HEAD REGENERATION

In addition to being able to represent an input head by the 40 appearance parameters (c), it is also possible to use those appearance parameters to regenerate the input head. In particular, by combining equation 5 with equations 1 and 3 above, expressions for the shape vector (x^i) and for the grey level vector (g^i) can be determined as follows:
x^i = x̄ + Q_s c^i    (6)

g^i = ḡ + Q_g c^i    (7)

where Q_s is obtained from P_sg and P_s, and Q_g is obtained from P_sg and P_g (and where Q_s and Q_g map the value of c to changes in the shape and shape-normalised grey level data). However, in order to regenerate the head, the shape-free grey level image generated from the vector g^i must be warped to take into account the shape of the head as described by the shape vector x^i. The way in which this warping of the shape-free grey level image is performed will now be described.
When the shape-free grey level vector (g^i) was determined in step S23, the grey level at 10,656 points over the shape-free head was determined. Since each head is deformed to the same mean shape, these 10,656 points are extracted from the same position within each shape-normalised training head. If the position of each of these points is determined in terms of the positions of three landmark points, then the corresponding position of that point in a given face can be determined from the position of the corresponding three landmark points in the given face (which can be found from the generated shape vector x^i). In this embodiment, a mesh of triangles is defined which overlays the landmark points such that the corners of each triangle correspond to one of the landmark points. Figure 9 shows the head shown in Figure 6 with the mesh of triangles placed over the head in accordance with the positions of the landmark points.
Figure 10 shows a single point p located within the triangle formed by landmark points LP_i, LP_j and LP_k. The position of point p relative to the origin (O) of the reference frame can be expressed in terms of the position of the landmark points LP_i, LP_j and LP_k. In particular, the vector between the origin and the point p can be expressed by the following:
v_p = a P_i + b P_j + c P_k    (8)

where a, b and c are scalar values and P_i, P_j and P_k are the vectors describing the positions of the landmark points LP_i, LP_j and LP_k. In the shape-normalised heads, the positions of the 10,656 points and the positions of the landmark points LP are known, and therefore, the values of a, b and c for each of the 10,656 points can be determined. These values are stored and then used together with the positions of the corresponding landmark points in the given face (determined from the generated shape vector x^i) to warp the shape-normalised grey level head, thereby regenerating the head from the appearance parameters (c).
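A sketch of this triangle-based warp follows. The decomposition of equation (8) is only unique under the usual barycentric constraint a + b + c = 1, which is assumed here; the helper names are illustrative and not part of the patent.

```python
import numpy as np

def barycentric_coefficients(p, P_i, P_j, P_k):
    """Scalar weights (a, b, c) such that p = a*P_i + b*P_j + c*P_k with
    a + b + c = 1, for a point p inside the triangle (P_i, P_j, P_k) of the
    shape-normalised head. These are computed once and stored."""
    A = np.array([[P_i[0], P_j[0], P_k[0]],
                  [P_i[1], P_j[1], P_k[1]],
                  [1.0,    1.0,    1.0]])
    return np.linalg.solve(A, np.array([p[0], p[1], 1.0]))

def warp_point(coeffs, Q_i, Q_j, Q_k):
    """Position of the same point once the triangle corners have moved to
    the landmark positions Q_i, Q_j, Q_k given by a generated shape vector."""
    a, b, c = coeffs
    return a * np.asarray(Q_i) + b * np.asarray(Q_j) + c * np.asarray(Q_k)
```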
TARGET VIDEO SEQUENCE GENERATION

A description will now be given of the way in which the target video sequence is generated from the source video sequence. As shown in Figure 11, the source video sequence is input to a target video sequence generation unit 51 which processes the source video sequence using a set of difference parameters 53 to generate and to output the target video sequence.
Figure 12 is a flow diagram illustrating the processing steps involved in generating these difference parameters.
As shown, in step S41, the appearance parameters (c_S) for an example of the first actor's head (from one of the training images) and the appearance parameters (c_T) for the second actor's head (from the target image) are determined. This is achieved by determining the shape parameter vector (b_s) and the grey level parameter vector (b_g) for each of the two images and then calculating the corresponding appearance parameters by inserting these shape and grey level parameters into equation 5. In step S43, a set of difference parameters are then generated by subtracting the appearance parameters (c_S) for the first actor's head from the appearance parameters (c_T) for the second actor's head, i.e. from:
Cdif "2 CT - CS (9) In order that these difference parameters only represent differences in the general shape and grey level of the two actors' heads, the pose and expression on the first actorfs head in the training image used in step S41 should match, as closely as possible, the pose and expression of the second actor's head in the target image. Therefore, care has to be taken in selecting the source video frame used to calculate the appearance parameters in step S41.
The processing steps required to generate the target video sequence from the source video sequence will now be described in more detail with reference to Figure 13. As shown, in step S51, the appearance parameters (c_S^i) for the first actor's head in the current video frame are automatically calculated. The way that this is achieved in this embodiment will be described later. In step S53, the difference parameters (c_dif) are added to the appearance parameters for the current source head to generate:
c_mod^i = c_S^i + c_dif    (10)

The resulting appearance parameters (c_mod^i) are then used in step S55 to regenerate the head for the current video frame. In particular, the shape vector (x^i) and the shape-normalised grey level vector (g^i) are generated from equations 6 and 7 using the modified appearance parameters (c_mod^i), and then the shape-normalised grey level image generated by the grey level vector (g^i) is warped using the 10,656 stored scalar values for a, b and c and the shape vector (x^i), in the manner described above, to regenerate the head. In this embodiment, since the resolution of the video frame is 500 x 500 pixels, interpolation is used to determine the grey level values for pixels located between the 10,656 points. The regenerated head is then composited, in step S57, into the source video frame to generate a corresponding target video frame. A check is then made, in step S59, to determine whether or not there are any more source video frames. If there are, then the processing returns to step S51 where the procedure described above is repeated for the next source video frame. If there are no more source video frames, then the processing ends.
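The whole per-frame loop of steps S51 to S59 then reduces to a few lines. The sketch below assumes three hypothetical helpers that are not defined in the patent text: fit_appearance (the automatic parameter search of step S51, described later), regenerate_head (equations (6) and (7) followed by the triangle warp) and composite (step S57).

```python
def generate_target_sequence(source_frames, c_T, c_S,
                             fit_appearance, regenerate_head, composite):
    """Generate the target video sequence frame by frame."""
    c_dif = c_T - c_S                                 # equation (9)
    target_frames = []
    for frame in source_frames:
        c_i = fit_appearance(frame)                   # step S51
        c_mod = c_i + c_dif                           # equation (10), step S53
        head = regenerate_head(c_mod)                 # step S55
        target_frames.append(composite(frame, head))  # step S57
    return target_frames
```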
Figure 14 illustrates the results of this animation technique. In particular, Figure 14a shows three frames of the source video sequence, Figure 14b shows the target image (which in this embodiment is computer-generated) and Figure 14c shows the corresponding three frames of the target video sequence obtained in the manner described above. As can be seen, an animated sequence of the computer-generated character has been generated from a video clip of a real person and a single image of the computer-generated character.
AUTOMATIC GENERATION OF APPEARANCE PARAMETERS

In step S51, appearance parameters for the first actor's head in each video frame were automatically calculated. In this embodiment, this is achieved in a two-step process. In the first step, an initial set of appearance parameters for the head is found using a simple and rapid technique. For all but the first frame of the source
video sequence, this is achieved by simply using the appearance parameters (c^(i-1)) from the preceding video frame (before modification in step S53). As described above, the appearance parameters (c) effectively define the shape and grey level of the head, but they do not define the scale, position and orientation of the head within the video frame. For all but the first frame in the source video sequence, these also can be initially estimated to be the same as those for the head in the preceding frame.
For the first frame, if it is one of the training images input to the appearance model generation unit 31, then the scale, position and orientation of the head within the frame will be known from the manual placement of the landmark points and the appearance parameters can be generated from the shape parameters and the shape-normalised grey level parameters obtained during training. If the first frame is not one of the training images, as in the present embodiment, then the initial estimate of the appearance parameters is set to the mean set of appearance parameters (i.e. all the appearance parameters are zero) and the scale, position and orientation is initially estimated by the user manually placing the mean face over the head in the first frame.
In the second step, an iterative technique is used in order to make fine adjustments to the initial estimate of the appearance parameters. The adjustments are made in an attempt to minimise the difference between the head described by the appearance parameters (the model head) and the head in the current video frame (the image head). With 30 appearance parameters, this represents a difficult optimisation problem. However, since each attempt to match the model head to a new image head is actually a similar optimisation problem, it is possible to learn in advance how the parameters should be changed for a given difference. For example, if the largest differences between the model head and the image head occur at the sides of the head, then this implies that a parameter that adjusts the width of the model head should be adjusted.
In this embodiment, it is assumed that there is a linear relationship between the error (δc) in the appearance parameters (i.e. the change to be made) and the difference (δI) between the model head and the image head, i.e.
δc = A δI    (11)

In this embodiment, the relationship (A) was found by performing multiple multivariate linear regressions on a large sample of known model displacements (δc) and the corresponding difference images (δI). These large sets of random displacements were obtained by perturbing the true model parameters for the images in the training set by a known amount. As well as perturbations in the model parameters, small displacements in the scale, position and orientation were also modelled and included in the regression; for simplicity of notation, the parameters describing scale, position and orientation were regarded simply as extra elements within the vector δc. In this embodiment, during the training, the difference between the model head and the image head was determined from the difference between the corresponding shape-normalised grey level vectors. In particular, for the current location within the video frame, the actual shape-normalised grey level vector g_i was determined (in the manner described above with reference to Figure 7) which was then compared with the grey level vector g_m obtained from the current appearance parameters using equation 7 above, i.e.
δI = δg = g_i - g_m    (12)

After A has been determined from this training phase, an iterative method for solving the optimisation problem can be determined by calculating the grey level difference vector, δg, for the current estimate of the appearance parameters and then generating a new estimate for the appearance parameters from:
c' = c - A δg    (13)

(noting here that the vector c includes the appearance parameters and the parameters defining the current estimate of the scale, position and orientation of the head within the image).
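A sketch of both halves of this search follows: learning the update matrix A of equation (11) by least squares from the perturbed training examples, and then applying the update of equation (13) iteratively. The sampling helpers and the fixed iteration count are assumptions for illustration, not taken from the patent.

```python
import numpy as np

def learn_update_matrix(delta_c_samples, delta_g_samples):
    """Multivariate linear regression for A in delta_c = A delta_g.

    delta_c_samples: (N, n_params) known parameter displacements.
    delta_g_samples: (N, n_points) grey level differences they produced.
    """
    dC = np.asarray(delta_c_samples, dtype=float)
    dG = np.asarray(delta_g_samples, dtype=float)
    # Least squares solution of dG @ A.T = dC.
    A_T, *_ = np.linalg.lstsq(dG, dC, rcond=None)
    return A_T.T                                      # (n_params, n_points)

def refine_appearance(c0, frame, A, sample_image_grey, sample_model_grey,
                      n_iterations=10):
    """Iterative update of equation (13); c includes the appearance
    parameters plus the scale, position and orientation terms."""
    c = np.asarray(c0, dtype=float)
    for _ in range(n_iterations):
        delta_g = sample_image_grey(frame, c) - sample_model_grey(c)  # eq. (12)
        c = c - A @ delta_g                                           # eq. (13)
    return c
```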
ALTERNATIVE EMBODIMENTS

As those skilled in the art will appreciate, a number of modifications can be made to the above embodiment. A number of these modifications will now be described.
In the above embodiment, the target image frame illustrated a computer-generated head. This is not essential. For example, the target image might be a hand-drawn head or an image of a real person. Figures 14d and 14e illustrate how an embodiment with a hand-drawn character might be used in character animation.
In particular, Figure 14d shows a hand-drawn sketch of a character which, when combined with the frames from the source video sequence (some of which are shown in Figure 14a), generates a target video sequence, some frames of which are shown in Figure 14e. As can be seen from a comparison of the corresponding frames in the source and target video frames, the hand-drawn sketch has been animated automatically using this technique. As those skilled in the art will appreciate, this is a much quicker and simpler technique for achieving computer animation, as compared with existing systems which require the animator to manually create each frame of the animation. In particular, in this embodiment, all that is required is a video sequence of a real life actor acting out the scene to be animated, together with a single sketch of the character to be animated.
In the above embodiments, the head, neck and shoulders of the first actor in the video sequence were modified using the corresponding head, neck and shoulders from the target image. This is not essential. As those skilled in the art will appreciate, only those parts of the image in and around the landmark points will be modified.
Therefore, if the landmark points are only placed in and around the first actor's face, then only the face in the video sequence will be modified. This animation technique can be applied to any part of the body which is deformable and even to other animals and objects. For example, the technique may be applied to just the lips in the video sequence. Such an embodiment could be used in film dubbing applications in order to synchronise the lip movements with the dubbed sound. This animation technique might also be used to give animals and other objects human-like characteristics by combining images of them with a video sequence of an actor.
In the above embodiment, 86 landmark points were placed around the head, neck and shoulders of the test images. As those skilled in the art will appreciate, more or fewer landmark points may be used, depending upon the required accuracy of the system. Similarly, the number of points in the shape-normalised head for which a grey level value is sampled also depends upon the required accuracy of the system.
In the above embodiment, the shape and grey level of the heads in the source video sequence and in the target image were modelled using principal component analysis. As those skilled in the art will appreciate, by modelling the features of the heads in this way, it is possible to accurately model each head by just a small number of parameters. However, other modelling techniques, such as vector quantisation and wavelet techniques, can be used. Furthermore, it is not essential to model each of the heads; however, doing so results in fewer computations being required in order to modify each frame in the source video sequence. In an embodiment where no modelling is performed, the difference parameters could simply be the difference between the location of the landmark points in the target image and in the selected frame from the source video sequence. It may also include a set of difference signals indicative of the difference between the grey level values from the corresponding heads.
In the above embodiment, the shape parameters and the grey level parameters were combined to generate the appearance parameters. This is not essential. A separate set of shape difference parameters and grey level difference parameters could be calculated; however, this is not preferred, since it increases the number of parameters which have to be automatically generated for each source video frame in step S51 described above.
In the above embodiments, the source video sequence and the target image were both black and white. The present invention can also be applied to colour images. In particular, if each pixel in the source video frames and in the target image has a corresponding red, green and blue pixel value, then instead of sampling the grey level at each of the 10,656 points in the shape-normalised head, the colour embodiment would sample each of the red, green and blue values at those points. The remaining processing steps would essentially be the same, except that there would be a colour level model which would model the variations in the colour in the training images. Further, as those skilled in the art will appreciate, the way in which colour is represented in such an embodiment is not important. In particular, rather than each pixel having a red, green and blue value, they might be represented by a chrominance and a luminance component or by hue, saturation and value components. Both of these embodiments would be simpler than the red, green and blue embodiment, since the image search which is required during the automatic calculation of the appearance parameters in step S51 could be performed using only the luminance or value component. In contrast, in the red, green and blue colour embodiment, each of these terms would have to be considered in the image search.
In the above embodiment, during the automatic generation of the appearance parameters, and in particular during the iterative updating of these appearance parameters using equation 13 above, the grey level value at each of the 10,656 points within the grey level vector obtained for the current location within the video frame and within the corresponding grey level vector obtained from the model were considered at each iteration. In an alternative embodiment, the resolution employed at each iteration might be changed. For example, in the first iteration, the grey level value at 1000 points might be considered to generate the difference vector δg. Then, in the second iteration, the grey level value at 3000 points might be considered during the determination of the difference vector δg. Then, for subsequent iterations, the grey level value at each of the 10,656 points could be considered during the determination of the difference vector δg. By performing the search at different resolutions, the convergence of the automatically generated appearance parameters for the current head in the source video sequence can be achieved more quickly.
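One way to realise this coarse-to-fine variant is to train a separate update matrix and a separate pair of grey level samplers per resolution level and then run the same update of equation (13) once per level; the level interface and point counts below are illustrative assumptions.

```python
def refine_coarse_to_fine(c0, frame, levels):
    """levels: list of (A, sample_image_grey, sample_model_grey) tuples,
    one per resolution, e.g. for 1000, 3000 and then all 10,656 points."""
    c = c0
    for A, sample_image_grey, sample_model_grey in levels:
        delta_g = sample_image_grey(frame, c) - sample_model_grey(c)
        c = c - A @ delta_g          # same update as equation (13)
    return c
```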
In the above embodiment, a single target image was used to modify the source video sequence. As those skilled in the art will appreciate, two or more images of the second actor could be used during the training of the appearance model and during the generation of the difference parameters. In such an embodiment, during the determination of the difference parameters, each of the target images would be paired with a similar image from the source video sequence and the difference parameters determined from each would be averaged to determine a set of average difference parameters.
In the above embodiment, the difference parameters were determined by comparing the image of the first actor from one of the frames from the source video sequence with the image of the second actor in the target image. In an alternative embodiment, a separate image of the first actor may be provided which does not form part of the source video sequence.
In the above embodiments, each of the images in the source video sequence and the target image were two-dimensional images. The above technique could be adapted to work with 3D modelling and animations. In such an embodiment, the training data would comprise a set of 3D models instead of 2D images. Instead of the shape model being a two-dimensional triangular mesh, it would be a three-dimensional triangular mesh. The 3D models in the training set would have to be based on the same standardised mesh, i.e., like the 2D embodiment, they would each have the same number of landmark points with each landmark point being in the same corresponding position in each model. The grey level model would be sampled from the texture image mapped onto the three-dimensional triangles formed by the mesh of landmark points. The three-dimensional models may be obtained using a three-dimensional scanner, which typically works either by using laser range-finding over the object or by using one or more stereo pairs of cameras. The standardised 3D triangular mesh would then be fitted to the 3D model obtained from the scanner. Once a 3D appearance model has been created from the training models, new 3D models can be generated by adjusting the appearance parameters, and existing 3D models can be animated using the same differencing technique that was used in the two-dimensional embodiment described above.
In the above embodiment, the grey level vector was determined from the shape-normalised head of the first and second actors. Other types of grey level model might be used. For example, a profile of grey level values at each landmark point might be used instead of or in addition to the sampled grey level value across the object. The way in which such profiles might be generated and the way in which the appearance parameters would be automatically found during step S51 in such an embodiment can be found in the above paper by Cootes et al and in the paper entitled "Automatic Interpretation and Coding of Face Images using Flexible Models" by Andreas Lanitis, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 19, No. 7, July 1997, the contents of which are incorporated herein by reference.
During training of the above embodiment, the landmark points were manually placed on each of the training images by the user. In an alternative embodiment, an existing model might be used to automatically locate the landmark points on the training faces. Depending on the result of this automatic placement of the landmark points, the user may have to manually adjust the position of some of the landmark points. However, even in this case, the automatic placement of the landmark points would considerably reduce the time required to train the system.
In the above embodiment, during the automatic determination of the appearance parameters for the first frame in the source video sequence, they were initially set to be equal to the mean appearance parameters and with the scale, position and orientation set by the user. In an alternative embodiment, an initial estimate of the appearance parameters and of the scale, position and orientation of the head within the first frame can be determined from the nearest frame which was a training image (which, in the first embodiment, was frame f^S_3). However, this technique might not be accurate enough if the scale, position and/or orientation of the head has moved considerably between the first frame in the sequence and the first frame which was a training image. In this case, an initial estimate for the appearance parameters for the first frame can be the appearance parameters corresponding to the training head which is the most similar to the head in the first frame (determined from a visual inspection), and an initial estimate of the scale, position and orientation of the head within the first frame can be determined by matching the head which can be regenerated from those appearance parameters against the first frame, for various scales, positions and orientations, and choosing the scale, position and orientation which provides the best match.
In the above embodiments, a set of difference parameters were identified which describe the main differences between the actor in the video sequence and the actor in the target image, which difference parameters were used to modify the video sequence so as to generate a target video sequence showing the second actor. In the embodiment, the set of difference parameters were added to a set of appearance parameters for the current frame being processed. In an alternative embodiment, the difference parameters may be weighted so that, for example, the target video sequence shows an actor having characteristics from both the first and second actors.
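A weighted variant of equation (10) might look like the following sketch, in which a weight of 1 reproduces the second actor, a weight of 0 leaves the first actor unchanged, and intermediate values mix characteristics of the two; the function name is illustrative.

```python
def blend_appearance(c_source_frame, c_dif, weight=1.0):
    """Weighted version of c_mod = c_S + c_dif."""
    return c_source_frame + weight * c_dif
```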
In the above embodiment, a target image was used to modify each frame within a video sequence of frames. In an alternative embodiment, the target image might be used to modify a single source image. In this case, the difference parameters might be weighted in the manner described above so that the resulting object in the image is a cross between the object in the source image and the object in the target image. Alternatively, two source images might be provided, with the difference parameters being calculated with respect to one of the source images which are then applied to the second source image in order to generate the desired target image.

Claims (78)

  CLAIMS:
  1. An image processing apparatus comprising:
    means for receiving a source image of a first object; means for receiving a target image of a second object; means for comparing an image of the first object with the image of the second object to generate a difference signal; and means for modifying the source image of the first object using said difference signal to generate a target image having characteristics of the first and second objects.
  2. 2. An image processing apparatus comprising:
    means for receiving a source animated sequence of frames showing a first object; means for receiving a target image showing a second object; means for comparing an image of the first object with the image of the second object to generate a difference signal; and means for modifying the image of the first object in each frame of said sequence of frames using said difference signal to generate a target animated sequence of frames showing the second object.
  3. 3. An apparatus according to claim 2, wherein said first object moves within said animated sequence of frames, and wherein said modifying means is arranged so that the target animated sequence of frames shows the second object moving in a similar manner.
  4. 4. An apparatus according to claim 2 or 3, wherein said first object deforms over the sequence of frames and wherein said modifying means is arranged so that the target animated sequence of frames shows the second object deforming in a similar manner.
  5. 5. An apparatus according to claim 4, wherein said modifying means is operable for adding said difference signal to the image of said first object in each frame of said source animated sequence of frames to generate said target animated sequence of frames.
  6. 6. An apparatus according to any preceding claim, wherein said comparing means is operable to compare a first set of signals characteristic of the image of the first object with a second set of signals characteristic of the image of the second object to generate a set of difference signals.
  7. 7. An apparatus according to claim 6, wherein said modifying means is operable to use said set of difference 0 signals to generate said target animated sequence of frames.
  8. 8. An apparatus according to claim 6 or 7, comprising processing means for processing the image of the second object and the image of the first object in order to generate said first and second sets of signals.
  9. 9. An apparatus according to claim 8, further comprising model means for modelling the visual characteristics of the first and second objects, and wherein said processing means is arranged to generate said first and second sets of signals using said model means.
  10. 10. An apparatus according to claim 9, wherein said model means is operable for modelling the variation of the appearance of the f irst and second objects within the received frames of the source animated sequence of frames and the received target image.
  11. 11. An apparatus according to claim 9 or 10, wherein said modifying means is operable (i) for determining, for the current frame being modified, a set of signals characteristic of the appearance of the first object in the frame using said model; (ii) to combine said set of signals with said difference signal to generate a set of modified signals; and (iii) to regenerate a corresponding frame using the modified set of signals and the model.
  12. 12. An apparatus according to any of claims 9 to 11, wherein said model means is operable for modelling the shape and colour of said first and second objects in said images.
  13. 13. An apparatus according to claim 12, wherein said model means is operable for modelling the shape and grey level of said first and second objects in said images.
  14. 14. An apparatus according to claim 12 or 13, comprising normalisation means for normalising the shape of said first and second objects in said images and wherein said model means is operable for modelling the colour within the shape-normalised first and second objects.
15. An apparatus according to any of claims 9 to 14, further comprising training means, responsive to the identification of the location of a plurality of points over the first and second objects in a set of training images, for training said model means to model the variation of the position of said points within said set of training images.
16. An apparatus according to any of claims 9 to 15, wherein said training images include frames from the source animated sequence of frames and the target image.
17. An apparatus according to claim 14, 15 or 16, wherein said training means is operable to perform a principal component analysis modelling technique on the set of training images for training said model means.
18. An apparatus according to claim 17, wherein said training means is operable to perform a principal component analysis on a set of training data indicative of the shape of the objects within the training images for training said model means.
19. An apparatus according to claim 17 or 18, wherein said training means is operable to perform a principal component analysis on a set of data describing the colour over the objects within the training images for training said model means.
20. An apparatus according to claim 19 when dependent upon claim 18, wherein said training means is operable to perform a principal component analysis on a set of data obtained using a model obtained from the principal component analysis of the shape and the colour of the objects in the training images in order to train said model means to model both shape and colour variation within the objects of the training images.
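The chain of principal component analyses in claims 17 to 20 could be sketched as follows: a PCA over shape vectors, a PCA over colour (grey-level) vectors, and a further PCA over the concatenated shape and colour parameters so that correlated shape/colour variation is captured jointly. The SVD-based PCA helper, the variance-based weighting and the component counts are assumptions made for this example.

```python
# Sketch (assumptions: NumPy; rows of each array are training samples).
import numpy as np

def pca(data, n_components):
    """Return the mean and the first n_components principal axes of `data`."""
    mean = data.mean(axis=0)
    _, _, vt = np.linalg.svd(data - mean, full_matrices=False)
    return mean, vt[:n_components]

def train_combined_model(shapes, colours, n_shape=5, n_colour=5, n_combined=5):
    s_mean, s_axes = pca(shapes, n_shape)           # shape model (claim 18)
    c_mean, c_axes = pca(colours, n_colour)         # colour model (claim 19)
    b_s = (shapes - s_mean) @ s_axes.T              # per-sample shape parameters
    b_c = (colours - c_mean) @ c_axes.T             # per-sample colour parameters
    w = np.sqrt(b_c.var() / max(b_s.var(), 1e-12))  # crude balancing of units
    combined = np.hstack([w * b_s, b_c])
    m_mean, m_axes = pca(combined, n_combined)      # combined model (claim 20)
    return dict(shape=(s_mean, s_axes), colour=(c_mean, c_axes),
                weight=w, combined=(m_mean, m_axes))
```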
21. An apparatus according to any of claims 6 to 20, wherein said comparing means is operable to subtract the first set of signals characteristic of the image of the first object from the second set of signals characteristic of the image of the second object in order to generate said set of difference signals.
22. An apparatus according to any preceding claim, wherein said modifying means comprises means for processing each frame of the source animated sequence of frames in order to generate a set of signals characteristic of the first object in the frame being processed and wherein said modifying means is operable to modify the set of signals for the current frame being processed by combining them with said difference signal.
23. An apparatus according to any preceding claim, wherein said modifying means is arranged to modify each frame within the source animated sequence of frames in turn, in accordance with the position of the frame within the sequence of frames.
24. An apparatus according to any preceding claim, wherein said modifying means is arranged to automatically generate said target animated sequence from said source animated sequence and said difference signal.
25. An apparatus according to any preceding claim, wherein said image of the first object is obtained from a frame of said source animated sequence.
26. An apparatus according to any preceding claim, wherein said comparing means is arranged to compare a plurality of images of said first object with a plurality of images of said second object in order to generate a corresponding plurality of difference signals which are combined to generate said difference signal.
27. An apparatus according to claim 26, wherein said difference signal represents the average of said plurality of difference signals.
28. An apparatus according to any preceding claim, wherein the image of said first object is selected so as to generate a minimum difference signal.
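Claims 26 to 28 could be illustrated as below: several paired parameter vectors give several difference signals which are averaged, or alternatively the source image whose parameters lie closest to the target's is chosen so that the resulting difference signal is as small as possible. The encoding of the images as parameter vectors is again an assumption made for the example.

```python
# Sketch (assumption: K paired parameter vectors stacked as (K, d) arrays).
import numpy as np

def average_difference(source_params, target_params):
    """Combine a plurality of difference signals by averaging them (claims 26-27)."""
    return np.mean(np.asarray(target_params) - np.asarray(source_params), axis=0)

def select_source_image(source_params, target_param):
    """Index of the source image giving the minimum difference signal (claim 28)."""
    dists = np.linalg.norm(np.asarray(source_params) - np.asarray(target_param), axis=1)
    return int(np.argmin(dists))
```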
29. An apparatus according to any preceding claim, wherein at least one of said first and second objects comprises a face.
30. An apparatus according to any preceding claim, wherein said target image comprises an image of a hand-drawn or a computer-generated face.
31. A graphics processing apparatus comprising: means for receiving a source animated sequence of graphics data of a first object; means for receiving a target set of graphics data of a second object; means for comparing graphics data of the first object with graphics data of the second object to generate a difference signal; and means for modifying the graphics data in the animated sequence of graphics data using said difference signal to generate a target animated sequence of graphics data of the second object.
32. An apparatus according to claim 31, wherein said graphics data represents a 3D model or a 2D image.
33. A graphics processing apparatus comprising:
means for receiving a source animated sequence of 3D models of a first object; means for receiving a target 3D model of a second object; means for comparing a 3D model of the first object with the 3D model of the second object to generate a difference signal; and means for modifying each 3D model in the sequence of 3D models for the first object using said difference signal to generate a target animated sequence of 3D models for the second object.
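A sketch of the 3D counterpart of claim 33 follows, assuming each 3D model is stored as an (N, 3) array of vertex positions with the same vertex ordering throughout the sequence; this vertex representation is an assumption made for the example.

```python
# Sketch (assumption: consistent vertex ordering across all models in the sequence).
import numpy as np

def retarget_3d_sequence(source_models, source_reference, target_model):
    """Apply the per-vertex difference between one source model and the target
    model to every model in the source sequence."""
    diff = np.asarray(target_model) - np.asarray(source_reference)
    return [np.asarray(m) + diff for m in source_models]
```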
34. An image processing apparatus comprising: means for receiving a source sequence of frames recording a first animated object; means for receiving a target image recording a second object; means for comparing an image of the first object with the image of the second object to generate a set of difference signals; and means for modifying the image of the first object in each frame of said sequence of frames using said set of difference signals to generate a target sequence of frames recording the second object animated in a similar manner to the animation of the first object.
35. An image processing apparatus comprising: means for receiving a source sequence of frames showing a first object which deforms over the sequence of frames; means for receiving a target image showing a second object; means for comparing an image of the first object with the image of the second object to generate a difference signal; and means for modifying the image of the first object in each frame of said sequence of frames using said difference signal to generate a target sequence of frames showing the second object deforming in accordance with the deformations of the first object.
36. An image processing apparatus comprising: means for receiving a source sequence of images comprising a first object which deforms over the sequence of images; means for receiving a target image comprising a second object; means for comparing the second object in the target image with the first object in a selected one of said images from said sequence of images and for outputting a comparison result; means for modifying the first object in each image of said source sequence of images using said comparison result to generate a target sequence of images comprising said second object which deforms in a similar manner to the way in which said first object deforms in said source sequence of images.
37. An apparatus for performing computer animation, comprising:
means for receiving signals representative of a film of a person acting out a scene; means for receiving signals representative of a character to be animated; means for comparing signals indicative of the appearance of the person with signals indicative of the appearance of the character to generate a difference signal; and means for modifying the signals representative of the film using said difference signal to generate modified signals representative of an animated film of the character acting out said scene.
38. An image processing method comprising the steps of: receiving a source animated sequence of frames showing a first object; receiving a target image showing a second object; comparing an image of the first object with the image of the second object to generate a difference signal; and modifying the image of the first object in each frame of said sequence of frames using said difference signal to generate a target animated sequence of frames showing the second object.
39. A method according to claim 38, wherein said first object moves within said animated sequence of frames, and wherein said modifying step is such that the target animated sequence of frames shows the second object moving in a similar manner.
40. A method according to claim 38 or 39, wherein said first object deforms over the sequence of frames and wherein said modifying step is such that the target animated sequence of frames shows the second object deforming in a similar manner.
41. A method according to any of claims 38 to 40, wherein said modifying step combines said difference signal with the image of said first object in each frame of said source animated sequence of frames to generate said target animated sequence of frames.
42. A method according to claim 41, wherein said modifying step adds said difference signal to the image of said first object in each frame of said source animated sequence of frames to generate said target animated sequence of frames.
43. A method according to any of claims 38 to 42, wherein said comparing step compares a first set of signals characteristic of the image of the first object with a second set of signals characteristic of the image of the second object to generate a set of difference signals.
44. A method according to claim 43, wherein said modifying step uses said set of difference signals to generate said target animated sequence of frames.
45. A method according to claim 43 or 44, comprising the step of processing the image of the second object and the image of the first object in order to generate said first and second sets of signals.
46. A method according to claim 45, further comprising the step of modelling the visual characteristics of the first and second objects, and wherein said processing step generates said first and second sets of signals using the model generated by said modelling step.
47. A method according to claim 46, wherein said modelling step generates a model which models the variation of the appearance of the first and second objects within the received frames of the source animated sequence of frames and the received target image.
48. A method according to claim 46 or 47, wherein said modifying step (i) determines, for the current frame being modified, a set of signals characteristic of the appearance of the first object in the frame using said model; (ii) combines said set of signals with said difference signal to generate a set of modified signals; and (iii) regenerates a corresponding frame using the modified set of signals and the model.
49. A method according to any of claims 46 to 48, wherein said modelling step generates a model which models the shape and colour of said first and second objects in said images.
50. A method according to claim 49, wherein said modelling step generates a model which models the shape and grey level of said first and second objects in said images.
51. A method according to claim 49 or 50, comprising the step of normalising the shape of said first and second objects in said images and wherein said modelling step generates a model which models the colour within the shape-normalised first and second objects.
52. A method according to any of claims 46 to 51, further comprising the steps of (i) identifying the location of a plurality of points over the first and second objects in a set of training images; and (ii) training said model to model the variation of the position of said points within said set of training images.
53. A method according to any of claims 46 to 52, wherein said training images include frames from the source animated sequence of frames and the target image.
54. A method according to claim 51, 52 or 53, wherein said training step performs a principal component analysis modelling technique on the set of training images to train said model.
55. A method according to claim 54, wherein said training step performs a principal component analysis on a set of training data indicative of the shape of the objects within the training images to train said model.
56. A method according to claim 54 or 55, wherein said training step performs a principal component analysis on a set of data describing the colour over the objects within the training images to train said model.
57. A method according to claim 56 when dependent upon claim 55, wherein said training step performs a principal component analysis on a set of data obtained using the models obtained from the principal component analysis of the shape and the colour of the objects in the training images in order to train said model to model both shape and colour variation within the objects of the training images.
58. A method according to any of claims 43 to 57, wherein said comparing step subtracts the first set of signals characteristic of the image of the first object from the second set of signals characteristic of the image of the second object in order to generate said set of difference signals.
59. A method according to any of claims 38 to 58, wherein said modifying step comprises the step of processing each frame of the source animated sequence of frames in order to generate a set of signals characteristic of the first object in the frame being processed and wherein said modifying step modifies the set of signals for the current frame being processed by combining them with said difference signal.
60. A method according to any of claims 38 to 59, wherein said modifying step is arranged to modify each frame within the source animated sequence of frames in turn, in accordance with the position of the frame within the sequence of frames.
61. A method according to any of claims 38 to 60, wherein said modifying step automatically generates said target animated sequence from said source animated sequence and said difference signal.
62. A method according to any of claims 38 to 61, wherein said image of the first object is obtained from a frame of said source animated sequence.
63. A method according to any of claims 38 to 62, wherein said comparing step compares a plurality of images of said first object with a plurality of images of said second object in order to generate a corresponding plurality of difference signals which are combined to generate said difference signal.
64. A method according to claim 63, wherein said difference signal represents the average of said plurality of difference signals.
65. A method according to any of claims 38 to 64, wherein the image of said first object is selected so as to generate a minimum difference signal.
66. A method according to any of claims 38 to 65, wherein at least one of said first and second objects comprises a face.
67. A method according to any of claims 38 to 66, wherein said target image comprises an image of a hand-drawn or a computer-generated face.
68. A graphics processing method comprising the steps of:
    inputting a source animated sequence of graphics data for a first object; comparing graphics data for the first object with graphics data for a second object to generate a difference signal; and modifying the graphics data in the animated sequence of graphics data using said difference signal to generate a target animated sequence of graphics data for the second object.
69. A method according to claim 68, wherein said graphics data represents a 3D model or a 2D image.
70. A graphics processing method comprising the steps of: receiving a source animated sequence of 3D models of a first object; receiving a target 3D model of a second object; comparing a 3D model of the first object with the 3D model of the second object to generate a difference signal; and modifying each 3D model in the sequence of 3D models for the first object using said difference signal to generate a target animated sequence of 3D models for the second object.
71. An image processing method comprising the steps of:
    receiving a source sequence of frames showing a first animated object; receiving a target image showing a second object; comparing an image of the first object with the image of the second object to generate a set of difference signals; and modifying the image of the first object in each frame of said sequence of frames using said set of difference signals to generate a target sequence of frames showing the second object animated in a similar manner to the animation of the first object.
72. An image processing method comprising the steps of: receiving a source sequence of frames showing a first object which deforms over the sequence of frames; receiving a target image showing a second object; comparing an image of the first object with the image of the second object to generate a difference signal; and modifying the image of the first object in each frame of said sequence of frames using said difference signal to generate a target sequence of frames showing the second object deforming in accordance with the deformations of the first object.
73. An image processing method comprising the steps of:
receiving a source sequence of images comprising a first object which deforms over the sequence of images; receiving a target image comprising a second object; comparing the second object in the target image with the first object in a selected one of said images from said sequence of images and outputting a comparison result; modifying the first object in each image of said source sequence of images using said comparison result to generate a target sequence of images comprising said second object which deforms in a similar manner to the way in which said first object deforms in said source sequence of images.
74. A computer animation method, comprising the steps of: receiving signals representative of a film of a person acting out a scene; receiving signals representative of a character to be animated; comparing signals indicative of the appearance of the person with signals indicative of the appearance of the character to generate a difference signal; and modifying the signals representative of the film using said difference signal to generate modified signals representative of an animated film of the character acting out said scene.
75. An apparatus according to any of claims 1 to 37, wherein said modifying means is operable to apply a weighting to said difference signal and to generate said target image using said weighted difference signal.
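The weighting of claim 75 could be realised as in the sketch below, where a scalar weight scales the difference signal before it is combined with each frame's parameters: a weight of zero reproduces the source object, a weight of one gives the full target object, and intermediate or larger values blend or exaggerate the change. The parameter-vector representation and the scalar weight are assumptions made for illustration.

```python
# Sketch (assumption: frames and difference already expressed as parameter vectors).
import numpy as np

def modify_sequence_weighted(frame_params, diff, weight=1.0):
    """Apply a weighted difference signal to every frame of the source sequence."""
    d = weight * np.asarray(diff)
    return [np.asarray(p) + d for p in frame_params]
```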
76. A storage medium storing processor implementable instructions for controlling a processor to carry out the method of any one of claims 38 to 74.
77. An electromagnetic or acoustic signal carrying processor implementable instructions for controlling a processor to carry out the method of any one of claims 38 to 74.
78. A graphics processing method or apparatus substantially as hereinbefore described with reference to or as shown in any of Figures 1 to 14.
GB9820633A 1998-09-22 1998-09-22 Graphics and image processing system Expired - Fee Related GB2342026B (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
GB9820633A GB2342026B (en) 1998-09-22 1998-09-22 Graphics and image processing system
AU61041/99A AU6104199A (en) 1998-09-22 1999-09-22 Graphics and image processing system
EP99947661A EP1116189A1 (en) 1998-09-22 1999-09-22 Graphics and image processing system
JP2000571406A JP2002525764A (en) 1998-09-22 1999-09-22 Graphics and image processing system
PCT/GB1999/003161 WO2000017820A1 (en) 1998-09-22 1999-09-22 Graphics and image processing system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB9820633A GB2342026B (en) 1998-09-22 1998-09-22 Graphics and image processing system

Publications (3)

Publication Number Publication Date
GB9820633D0 GB9820633D0 (en) 1998-11-18
GB2342026A true GB2342026A (en) 2000-03-29
GB2342026B GB2342026B (en) 2003-06-11

Family

ID=10839275

Family Applications (1)

Application Number Title Priority Date Filing Date
GB9820633A Expired - Fee Related GB2342026B (en) 1998-09-22 1998-09-22 Graphics and image processing system

Country Status (5)

Country Link
EP (1) EP1116189A1 (en)
JP (1) JP2002525764A (en)
AU (1) AU6104199A (en)
GB (1) GB2342026B (en)
WO (1) WO2000017820A1 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6950104B1 (en) * 2000-08-30 2005-09-27 Microsoft Corporation Methods and systems for animating facial features, and methods and systems for expression transformation
JP2002230543A (en) 2000-11-28 2002-08-16 Monolith Co Ltd Method and device for interpolating image
JP2002232908A (en) 2000-11-28 2002-08-16 Monolith Co Ltd Image interpolation method and device
US20040114731A1 (en) * 2000-12-22 2004-06-17 Gillett Benjamin James Communication system
US20040135788A1 (en) * 2000-12-22 2004-07-15 Davidson Colin Bruce Image processing system
CN110428390B (en) * 2019-07-18 2022-08-26 北京达佳互联信息技术有限公司 Material display method and device, electronic equipment and storage medium
JP7579674B2 (en) 2019-11-07 2024-11-08 ハイパーコネクト リミテッド ライアビリティ カンパニー Image conversion device and method, and computer-readable recording medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0664526A3 (en) * 1994-01-19 1995-12-27 Eastman Kodak Co Method and apparatus for three-dimensional personalized video games using 3-D models and depth measuring apparatus.
GB9811695D0 (en) * 1998-06-01 1998-07-29 Tricorder Technology Plc Facial image processing method and apparatus

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4952051A (en) * 1988-09-27 1990-08-28 Lovell Douglas C Method and apparatus for producing animated drawings and in-between drawings
US5692117A (en) * 1990-11-30 1997-11-25 Cambridge Animation Systems Limited Method and apparatus for producing animated drawings and in-between drawings
US5353391A (en) * 1991-05-06 1994-10-04 Apple Computer, Inc. Method apparatus for transitioning between sequences of images
US5267334A (en) * 1991-05-24 1993-11-30 Apple Computer, Inc. Encoding/decoding moving images with forward and backward keyframes for forward and reverse display
EP0664527A1 (en) * 1993-12-30 1995-07-26 Eastman Kodak Company Method and apparatus for standardizing facial images for personalized video entertainment
US5619628A (en) * 1994-04-25 1997-04-08 Fujitsu Limited 3-Dimensional animation generating apparatus
WO1996017323A1 (en) * 1994-11-30 1996-06-06 California Institute Of Technology Method and apparatus for synthesizing realistic animations of a human speaking using a computer

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Pighin et al., 'Synthesizing realistic facial expressions from photographs', SIGGRAPH 98 Conference Proceedings, pp. 75-84, esp. Fig. 8. *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160063746A1 (en) * 2014-08-27 2016-03-03 Fujifilm Corporation Image combining apparatus, image combining method and non-transitory computer readable medium for storing image combining program
US9786076B2 (en) * 2014-08-27 2017-10-10 Fujifilm Corporation Image combining apparatus, image combining method and non-transitory computer readable medium for storing image combining program

Also Published As

Publication number Publication date
GB2342026B (en) 2003-06-11
JP2002525764A (en) 2002-08-13
EP1116189A1 (en) 2001-07-18
AU6104199A (en) 2000-04-10
WO2000017820A1 (en) 2000-03-30
GB9820633D0 (en) 1998-11-18


Legal Events

Date Code Title Description
PCNP Patent ceased through non-payment of renewal fee

Effective date: 20040922