
WO2010001109A2 - Method of generating motion capture data and/or animation data - Google Patents

Method of generating motion capture data and/or animation data

Info

Publication number
WO2010001109A2
Authority
WO
WIPO (PCT)
Prior art keywords
data
subject
motion capture
animation
collected
Prior art date
Application number
PCT/GB2009/001636
Other languages
French (fr)
Other versions
WO2010001109A3 (en)
Inventor
Ali Kord
Original Assignee
Berlin-Armstrong Locatives Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Berlin-Armstrong Locatives Ltd
Publication of WO2010001109A2
Publication of WO2010001109A3

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G06T13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings

Definitions

  • the present invention relates to a method of generating motion capture data and/or animation data. More particularly, the present invention relates to a method of generating motion capture data and/or animation data from data derived from an inertial motion measurement apparatus.
  • the data may be used as basis for computer animation, for example to be used in motion pictures or video games.
  • One known technique utilises inertial sensors to measure the movements of a subject.
  • the inertial sensors are typically provided on the limbs and torso of the subject and the measured data may be used to determine relative movement of the sensors.
  • Motion capture data relating to the movements of the subject may be generated from the measured data.
  • Inertial motion measurement systems or inertial motion capture (mocap) systems are heavily dependent on two resources, namely: a) the presence and accuracy of the hardware employed; and b) the presence and accuracy of the representation of the subject's limb measurements, which is used to predict when the subject's limbs will collide with real-world objects such as floors, steps, walls, tables, seats and other people. It is relatively straightforward to secure the best hardware performance possible.
  • the present invention at least in preferred embodiments attempts to overcome or ameliorate at least some of the problems associated with known motion capture techniques and systems.
  • the present invention relates to a method of generating motion capture data and/or animation data, the method comprising the steps of: providing subject data; collecting movement data relating to a subject's movements whilst the subject performs one or more actions; after the movement data has been collected modifying the subject data; and utilising the modified subject data to generate the motion capture data and/or animation data.
  • the ability to modify the subject data after the movement data has been collected is particularly advantageous since the subject data may be adapted or modified to match the subject after the movement data has been collected.
  • Prior art techniques require that the subject data must be matched to the subject before the movement data is collected and this may lead to unacceptable delays.
  • the present invention allows the subject data to be modified after the movement data has been collected, potentially avoiding delays which may otherwise occur as the subject data is matched to the subject.
  • the present invention may provide a dynamic array data collection method.
  • the method is preferably for use with inertial motion measurement apparatus.
  • the movement data is preferably collected from inertial motion measurement apparatus.
  • the inertial motion measurement apparatus may comprise at least one inertial sensor, such as a gyroscope, to collect said movement data.
  • inertial sensors are provided on the subject's limbs on either side of a joint.
  • a first inertial sensor may be provided on an arm between the shoulder and the elbow and a second inertial sensor may be provided between the elbow and the wrist.
  • the data collected from said first and second inertial sensors may be used to calculate the position and orientation of the wrist.
  • the subject data is preferably stored in a subject data file.
  • the subject data may comprise data relating to the size and/or dimensions of said subject.
  • the subject data may initially approximate the subject.
  • the subject data may comprise data relating to the length and/or thickness of each limb of the subject.
  • the subject data may comprise data relating to mass and/or density to allow improved physics modelling.
  • the subject data may comprise data relating to the location of a centre of gravity of the subject.
  • the location of the centre of gravity may be calculated from said subject data.
  • the subject data may be provided before the movement data is collected.
  • a default subject data file may be provided to allow an approximate rendering of the collected movement data to be displayed.
  • the subject data may be generated after the movement data has been collected.
  • a subject data file may be created specifically relating to the subject after the movement data has been collected.
  • the collected movement data is preferably stored in a movement data file which is separate from said subject data file.
  • the subject data file and/or the movement data file are preferably stored independently of the motion capture data and/or animation data.
  • the subject data and the collected movement data may be used to model a virtual character substantially in real time.
  • the modelled virtual character may be displayed on a screen.
  • a physics model may be used to model the virtual character based on the collected movement data.
  • the generated motion capture data and/or animation data may be suitable for animating a virtual character.
  • the virtual character may comprise one or more of the following: a torso, a head and at least one limb.
  • the collected movement data typically corresponds to the respective features of the virtual character.
  • the step of modifying the subject data may comprise tailoring the subject data to relate to or correspond to the subject.
  • the data may be modified to correspond to the respective length and/or thickness of the limbs of the subject.
  • the subject data may be modified to correspond to the subject's torso.
  • the subject may be measured using scanning apparatus, such as a laser scanner.
  • at least one photograph of the subject may be taken and the required dimensions determined from said at least one photograph.
  • the at least one photograph typically includes a scaling device, such as a frame having known dimensions, to allow accurate measurements to be determined.
  • the subject data is derived from two or more photographs.
  • the motion capture data and/or animation data may be generated directly from the collected movement data in combination with the modified subject data. However, in addition to modifying the subject data, it may be necessary to edit or modify the collected movement data.
  • the motion capture data and/or animation data may be generated from the modified subject data and the modified motion capture data and/or animation data.
  • an operator may modify the generated motion capture data and/or animation data and the collected movement data may be modified in response to the changes to the generated data.
  • the collected movement data may, for example, be modified by performing a rotational and/or translational transform on a core reference point. The rest of the movement data may then be modified based on the transform applied to the core reference point.
  • a modifier may be applied to a set of root or core data to modify related or subsidiary movement data.
  • a modifier may be applied to the movement data collected from one or more root sensors.
  • a plurality of sensors may form a chain or network and the root sensor may be provided at a base of the chain or at a branch in the network.
  • the modifier may relate to the position and/or orientation of the collected movement data.
  • the modified movement data for said root sensor(s) may alter the calculated relative position and/or orientation of subsidiary (linked) sensors.
  • Applying the modifier to data set(s) relating to the root sensor(s) may alter the position and/or orientation of a portion of the virtual character, for example one or more limbs, or the entire virtual character. For example, applying the modifier to the movement data collected from an upper arm sensor would alter the position and/or orientation of the forearm and hand.
  • a jump vector modifier may be applied to the movement data collected from a core reference sensor and the position and/or orientation of the movement data for the remaining sensors determined based on the modified data for the core reference sensor.
  • An artefact correction modifier may be applied to the core reference sensor to correct artefact errors.
  • the core reference sensor is preferably the hip or pelvis sensor.
  • the editing of the collected movement data may be performed to match collision timing.
  • the collected movement data may be edited to scale an angular rotational component of the collected movement data.
  • a component of the collected movement data relating to angular rotation may be modified to increase or decrease the angular rotation.
  • the collected movement data may comprise an angular rotation of 10° for a joint but this may be scaled to a smaller rotation of 9.5° or to a larger rotation of 10.5°.
  • the scaling may enable intermediate angular orientations to be adjusted to reflect the operator-specified changes.
  • the collected movement data may be edited to dictate the respective frames for lift-off and landing of a jump.
  • the collected movement data may be edited to match a measured start and/or end position(s) to a corresponding start and/or end position(s) of the subject for a particular action.
  • the motion capture data and/or animation data relating to the location and/or angular orientation of a limb may be edited to match the start and/or end position of a corresponding limb of the subject for a particular action.
  • the data collected from said at least one inertial sensor is preferably also used to calculate the location and/or orientation of a reference point.
  • the reference point may, for example, be a point on the torso.
  • the reference point corresponds to the subject's coccyx.
  • the location and/or orientation of the reference point may be calculated using a physics model, for example utilising a friction coefficient and/or a jump coefficient.
  • the reference point may correspond to the position of the root sensor.
  • the method preferably comprises the step of measuring the location and/or orientation of a reference point.
  • the reference point is preferably located on the coccyx of the subject.
  • the reference point may, for example, be tracked.
  • a transmitter or receiver may be provided on the subject to define said reference point.
  • the location of the transmitter or receiver may be tracked by a plurality of receivers or transmitters respectively.
  • a radio or ultrasound signal may be used to measure the location and/or orientation of the reference point.
  • a camera may film the subject and software may track a locator provided on the subject.
  • the method preferably comprises the step of comparing the calculated location of said reference point to the measured location of said reference point. The accuracy of the collected movement data may be determined using this technique.
  • a large discrepancy between the calculated and measured positions may be indicative of a high level of inaccuracy in the collected data.
  • a close correlation between the calculated and measured positions may be indicative of a high level of accuracy in the collected data.
  • the measured location and the calculated location are preferably simultaneously displayed on a screen.
  • the subject data and/or the movement data may be stored in at least one array.
  • one or more transforms may be performed on part or all of the data stored in said at least one array.
  • a translational or rotational transform may be applied to part or all of the stored data.
  • the at least one array may comprise operator-specified values to allow the generated motion capture data and/or animation data to be refined.
  • the operator-specified values may include one or more of the following set: jump prediction constants; static and/or kinetic friction constants; acceleration and/or deceleration constants; and jump vectors. Of course, default values may be pre-programmed.
  • the method may comprise the further step of calculating the centre of weight or the centre of gravity based on the subject data.
  • the present invention relates to a method of generating motion capture data and/or animation data for a character, the method comprising the steps of: collecting movement data from at least one inertial sensor provided on a subject while the subject performs an action; calculating the location of a reference point on the character based on the collected movement data; and measuring the location of a reference point on said subject.
  • the calculated location of the character reference point may then be compared to the measured location of the subject reference point. This comparison may provide an indication of the accuracy of the method.
  • the calculated and measured reference points are displayed on a screen simultaneously with a rendering of the character.
  • the position of the character reference point on the character preferably corresponds to the position of the subject reference point on the subject.
  • the location of the subject reference point is preferably measured directly.
  • the subject reference point may be tracked while the subject performs said action.
  • a tracking system may be used to track the position of a device located in a predetermined position on the body of the subject.
  • a transmitter may be provided on the subject and a signal transmitted by the transmitter received by a plurality of receivers and used to measure the location of the transmitter.
  • a receiver may be provided on the subject for receiving a signal transmitted from a plurality of transmitters. Triangulation may be used to determine the relative position of the reference point.
  • the method may comprise the further step of modifying the collected movement data to match the calculated location of the reference point to the measured location of the reference point.
  • the reference point may be fixed to the measured location for the animation data.
  • movement data may be calculated based on the data collected from said at least one inertial sensor in combination with the measured location of said reference point.
  • the core reference sensor may be provided at said reference point such that the position and/or orientation of the other sensors may be determined in relation to the measured position of the reference point.
  • the measured location and the calculated location of said reference point are preferably simultaneously displayed on a screen.
  • the step of modifying the collected movement data may comprise performing a rotational and/or translational transform on a selected portion of said data, for example corresponding to the character's limb, head, torso or pelvis.
  • the rotational and/or translational transform may be applied to the complete data set.
  • the motion capture data and/or the animation data is preferably generated from the collected movement data and subject data.
  • the subject data is preferably modified to relate to said subject after the movement data has been collected.
  • the animation data is preferably generated from the collected movement data and the modified subject data file.
  • the subject data is preferably stored in a subject data file and the movement data stored in a separate movement data file.
  • the handling of the subject data and the processing of the collected movement data are preferably performed by a processor.
  • the present invention also relates to a processor programmed to perform these processing steps.
  • the present invention relates to a computer programme for operating a processor in accordance with these processing steps; or a carrier having such a computer programme stored thereon.
  • the present invention relates to a system for generating motion capture data and/or animation data, the system comprising a motion capture system for collecting movement data relating to a subject performing one or more actions; a subject data modifier for modifying subject data after the movement data has been collected; and a motion capture data and/or animation data generator for generating said motion capture data and/or animation data utilising the modified subject data and the collected movement data.
  • the motion capture system is preferably an inertial motion capture system.
  • the present invention relates to a system for generating motion capture data and/or animation data, the system comprising an inertial motion capture system for collecting movement data from at least one inertial sensor provided on a subject while the subject performs an action; a processor for calculating the location of a reference point on the character based on the collected movement data; a tracking system for tracking a reference point on said subject.
  • the calculated location of the character reference point may be compared to the measured location of the subject reference point.
  • the system may be provided with comparison means for comparing the location of the measured reference point to the location of the calculated reference point. A notification may issue if the comparison means determines that the discrepancy between the measured and calculated reference point positions exceeds a predetermined value.
  • the present invention relates to a method of generating motion capture data and/or animation data, the method comprising the steps of: providing a plurality of sensors on a subject, the sensors being arranged in a chain; collecting movement data from said sensors whilst the subject performs one or more actions; applying a modifier to the movement data collected for one or more sensors in said chain and then calculating the position and/or orientation of the remaining sensors in said chain; and utilising the modified movement data to generate the motion capture data and/or animation data.
  • the processes and systems described herein may be used simultaneously to generate motion capture data and/or animation data for one or more subjects.
  • Figure 1 shows a virtual character in a first position animated using movement data collected from an inertial motion capture system
  • Figure 2 shows a workflow chart illustrating the process according to the present invention
  • Figure 3 shows the virtual character of Figure 1 animated using motion capture data and/or animation data generated using modified movement data in accordance with the present invention
  • Figure 4 shows the virtual character in a second position animated using movement data collected from the inertial motion capture system
  • Figure 5 shows the virtual character of Figure 3 animated using motion capture data and/or animation data generated using modified movement data in accordance with the present invention
  • Figure 6 shows the virtual character in a third position created from a subject data file and collected movement data
  • Figure 7 shows the virtual character of Figure 5 created from a modified data file and the collected movement data
  • Figure 8 shows the virtual character in a fourth position created from a subject data file and collected movement data
  • Figure 9 shows the virtual character of Figure 7 created from a modified data file and the collected movement data;
  • Figures 10, 11 and 12 show the effect of modifying a jump vector on the animation of the virtual character;
  • Figure 13 shows the virtual character in a fourth position with a calculated reference point and a tracked reference point displayed; and Figure 14 shows the virtual character in a fifth position again with the calculated reference point and the tracked reference points displayed.
  • the present invention relates to a method of generating motion capture data and/or animation data based on movement data collected from an inertial motion capture system.
  • the inertial motion capture system generates data relating to the movements of a subject, such as an actor. By combining the collected movement data with subject data, the motion capture data and/or animation data may be generated. The resultant data may be used, for example, to animate a virtual character 1 in three dimensions.
  • the subject data comprises dimensions of the virtual character 1, including the length and thickness of limbs.
  • the subject data is stored in a dedicated file and may initially only approximate the subject. However, the method according to the present invention allows the subject data to be refined more closely to represent the subject after the movement data has been collected.
  • the virtual character 1 is displayed in relation to a three-dimensional coordinate system having X, Y and Z axes, as illustrated in Figure 1.
  • the virtual character 1 is represented by polygons in the Figures and the ground is represented by a horizontal line 2.
  • the joints of the virtual character 1 are represented by a circle, the centre of the circle representing the pivot point of the joint.
  • the virtual character 1 has a torso 3; a head 5; a pelvis 7; left and right arms 9, 11; left and right hands 13, 15; left and right legs 17, 19; and left and right feet 21, 23. It will be understood that more complicated character models may be implemented.
  • the inertial motion capture system comprises a plurality of inertial sensors, such as gyroscopes, provided on the subject.
  • the inertial sensors are provided on the subject's limbs, typically on each side of a joint. At least one inertial sensor is also provided on each of the torso and the head of the subject.
  • a suitable inertial motion capture system is described in International Patent Application No. PCT/GB2007/001565, which is incorporated herein in its entirety by reference.
  • the inertial sensors generate movement data and the collected movement data is used to calculate the relative movement of each sensor and, therefore, the relative movement of the subject's limbs, torso, head and so on.
  • the movement data collected from the inertial sensors could be used directly to animate the virtual character 1. However, this approach may lead to inaccuracies which mean that the resultant animation is not acceptable.
  • the left and right hands 13, 15 of the virtual character 1 are below the level of the ground, represented by the line 2. If the data was used in this form, the hands 13, 15 of the virtual character 1 would seem to disappear into the ground 2 when the virtual character 1 reached this stage of the animation.
  • the inaccuracies in the animation sequence of the virtual character 1 can be due to a variety of reasons. For example, there may be errors in the collected movement data, for example due to skin, muscle and clothing artefacts or due to hardware data collection errors. Further errors may be introduced due to inaccuracies in the subject's measurements which are typically used to create a subject data file for rendering the virtual character 1.
  • the method according to the present invention creates a new file format before the resultant motion capture data and/or animation data is generated.
  • This technique, at least in preferred embodiments, enables the process of determining the optimum limb measurements of the subject to be postponed until after the data collection session has been completed. In this way the data collected at the session can accept edits at a later time, providing new, edited resultant data.
  • a method of simplifying the subject's limb measurement process could, for example, involve photographing the subject at the beginning of the data collection session but tailoring the subject data to match the subject in post-production, after the movement data has been collected, based on the photograph(s) of the subject.
  • the system can use another person's limb measurements at the time of the capture, ignore the resulting incorrect representations of collisions between the limbs and the material world (such as floors and steps) during the session, and remove the incorrect collisions in later edits.
  • a workflow chart illustrating the process according to the present invention is shown in Figure 2.
  • the movement data is collected from the inertial sensors provided on the subject, as represented by Step A.
  • one or more tracking devices may be provided on the subject and the position of these devices is tracked.
  • the collected movement data and the tracking data are stored in a dedicated file.
  • the collected movement data is combined with the stored subject data to generate keyframe data for animating the virtual character 1.
  • the keyframes typically represent specific events that affect the root position of the virtual character 1, such as jumping with a specific velocity vector or changing contact points at a specific time.
  • An operator may then review the resultant animation and check for inaccuracies or errors which may have resulted from errors in the subject data file or the collected movement data file.
  • the operator may modify the subject data file after the movement data has been collected and the modified subject data used to generate revised keyframe data. This process may be repeated until the keyframe data is acceptable.
  • the operator may also modify the collected movement data to correct any inaccuracies or errors.
  • the modified subject data may then be combined with the collected movement data to generate motion capture data and/or animation data for each frame, as illustrated by Step D.
  • the operator may edit the keyframes to refine the resultant motion capture data and/or animation data, as illustrated by Step E.
  • the edited keyframes are then used to generate modified motion capture data and/or animation data for each frame.
  • the step of editing the keyframes is an iterative process and may be repeated if required.
  • the generated data may be exported or recorded, as illustrated by Step F.
  • the movement data collected from the inertial motion capture system may lead to inaccuracies such as the incorrect positioning of the virtual character 1 in relation to other objects such as the ground 2, as illustrated in Figure 1.
  • the present invention allows the collected movement data to be modified post-production. Thus, any inaccuracies may be reduced or corrected after the motion capture session has finished.
  • the movement data may be modified to translate the virtual character 1 upwardly such that the hands 13, 15 contact the ground 2.
  • this error is likely to result from inaccuracies in the subject data and the operator should modify the subject data such that the virtual character 1 is displayed with the hands 13, 15 in contact with the ground 2, as shown in Figure 3.
  • the operator may rely on their own judgement to identify any inaccuracies or errors in the resultant animation of the virtual character 1.
  • the animation of the virtual character 1 can be compared to a video of the subject performing the actions to identify obvious defects.
  • the virtual character 1 is shown in Figure 4 in a second position with the right foot 23 lifted off the ground 2 but the operator recognises that both feet 21, 23 should be on the ground 2.
  • This inaccuracy in the collected movement data may be the result of a clothing artefact or faulty hardware data, for example.
  • the error may be corrected by rotating the virtual character 1 about the Y axis such that the left and right feet 21, 23 are both placed on the ground 2, as shown in Figure 5.
  • a rotational transform may be applied to the collected movement data to provide the desired rotation about the Y axis.
  • the animation may be corrected on a frame-by-frame basis, but preferably keyframes may be corrected and interpolation of the data performed to modify the intermediate frames.
  • the present invention allows the subject data to be modified after the movement data has been collected thereby enabling post-production modifications.
  • the virtual character 1 is shown in a third position in Figure 6. However, the measurements of the pelvis 7 and the lower spine used to generate the motion capture data and/or animation data are disproportionate.
  • the subject data file may be modified post-production to correct these errors.
  • the modified subject data is then used in combination with the collected movement data to generate revised motion capture data and/or animation data which may be used to animate the virtual character 1.
  • the corrected model of the virtual character 1 is shown in Figure 7.
  • A further example of the type of error that may arise due to an inaccuracy in the measurement of the subject is illustrated in Figure 8.
  • the virtual character 1 is shown in a crouched position with both hands 13, 15 on the ground but the feet 21, 23 are shown as being lifted off the ground 2. Again, the operator recognises that the subject's feet should be on the ground at this time. In the present case, the error is due at least in part to an inaccurate measurement of the subject's leg.
  • the operator can address this by modifying the subject data to alter the length of the legs 17, 19 to correspond more closely to the length of the subject's legs, thereby bringing the feet 21, 23 closer to the ground, as shown in Figure 9.
  • the operator may also modify the collected movement data to scale the measured angular rotation of the subject's lower legs. By modifying the data to increase the angular displacement between the subject's respective upper and lower legs the feet 21, 23 may be brought into contact with the ground.
  • the data collected from the inertial motion capture system is used to model the movements of the virtual character 1.
  • the resultant animation of the virtual character 1 may more closely reflect the subject's movements.
  • a jump vector parameter may be modified to alter the characteristics of a jump modelled by the virtual character 1.
  • the jump vector parameter may be applied to a set of root or core data to modify the position and/or orientation of the virtual character.
  • the jump vector parameter may be applied to the movement data collected from a core reference sensor and the position and/or orientation of the movement data for the remaining sensors determined based on the modified data for the core reference sensor.
  • the core reference sensor in the present embodiment is the hip sensor.
  • the jump vector is initially set at 47.406 to closely model the jump performed by the subject. However, by increasing the jump vector to 147.406 the height of the jump may be increased, as shown in Figure 11. A further increase in the jump vector to 247.406 yields a still further increase in the height of the jump, as shown in Figure 12.
  • the exact frame of a lift-off and landing of a jump may be specified by an operator. Conversely, the centre of weight of the virtual character 1 may be calculated and this used, optionally in combination with one or more other variables, to keep the virtual character's feet on the ground 2 if it is known that they should not perform a jump.
  • A tracking device is provided on the subject to allow a reference point R to be tracked.
  • the tracking device is provided on the back of the subject's pelvis.
  • the tracking device in the present embodiment comprises an ultrasonic transmitter.
  • a signal transmitted by the transmitter is detected by a plurality of receivers and the location of the transmitter, and hence the reference point, may be calculated by known triangulation techniques.
  • An alternative to ultrasound for the tracking system is to use Near-Field Electromagnetic Ranging (NFER).
  • the system calculates the position of a corresponding reference point R' on the virtual character 1 based on the collected movement data.
  • the calculated reference point R' indicates the expected position of the tracking device based on the movement data.
  • an indication of the accuracy of the system may be obtained.
  • a large discrepancy in the positions of the measured reference point R and the calculated reference point R' is indicative of inaccuracies in the subject data and/or the collected movement data.
  • a close correlation in the position of the measured reference point R and the calculated reference point R' suggests a high degree of accuracy in the subject data and the collected movement data.
  • the virtual character 1 is shown at different stages in a running cycle in Figures 13 and 14. Accumulated errors in the collected movement data combined with possible inaccuracies in the subject data have resulted in a large discrepancy between the measured reference point R and the calculated reference point R' in the position illustrated in Figure 13. In contrast, there is a good correlation between the measured reference point R and the calculated reference point R' in the position illustrated in Figure 14 and this suggests that the accumulated errors in the collected movement data and errors in the subject data are relatively small.
  • the invention allows the introduction of an independent ultrasonic tracking apparatus, which detects the subject's pelvis position in the area where the data collection session is performed, to be another element in the array that helps produce more accurate resultant data.
  • the data from a positional ultrasonic tracking system detecting the position of the subject's pelvis in space can guide the operator in setting better values to correct the pelvis position in the resultant jump.
  • the measured reference point R may be used as a known reference point for calculating the relative positions of the inertial sensors.
  • the calculated reference point R' may be locked on the position of the measured reference point R to provide increased accuracy.
  • In addition to allowing inertial motion capture systems to compete for expensive data collection sessions, the present invention, at least in preferred embodiments, with the ability to manipulate the data in post-production, can delineate and separate the factors affecting the resultant data in more detail, within an unlimited array of known factors that affect the resultant data and are editable in post-production. These factors include: i) Scaling the data to alter the angular rotation of elements of the hardware to compensate for skin, muscle or clothing artefacts that introduce errors in the resultant data, which might be predictable and correctable by scaling the subject values.
  • ii) In addition to limb measurements and the scalable rate of motion of limbs, the particular angular relationship between adjoining limbs at the start of each collected data file must match the subject's actual angular relationship between adjoining limbs.
  • Jump prediction constants such as threshold and sensitivity, which can be applied to values extracted from the rate of change of the knee and ankle angles, and which work to predict and calculate jumps in the collected data.
  • Static or kinetic friction of the floor (for example, to differentiate between walking on ice and walking on rubber).
  • iv) Outer measurements or thickness of limbs (determining side, inner or outer collisions of the limbs with the outside world).
  • Gravity and centre of weight can be adjusted for particular body shapes.
  • the generated motion capture data and/or animation data may then be used to animate a plurality of virtual characters.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present application relates to a method of generating motion capture data and/or animation data. The method includes the step of collecting movement data relating to a subject's movements whilst the subject performs one or more actions. A set of subject data is provided and, after the movement data has been collected, the subject data is modified. The modified subject data is then utilised to generate motion capture data and/or animation data. The step of modifying the subject data can allow the subject data to be tailored to relate to the subject. A modifier may be applied to a set of movement data collected from one or more core reference sensors. The present application also relates to a system for generating motion capture data and/or animation data.

Description

METHOD OF GENERATING MOTION CAPTURE DATA AND/OR ANIMATION DATA
FIELD OF THE INVENTION
The present invention relates to a method of generating motion capture data and/or animation data. More particularly, the present invention relates to a method of generating motion capture data and/or animation data from data derived from an inertial motion measurement apparatus.
BACKGROUND TO THE INVENTION
There is a range of motion measurement techniques suitable for generating motion capture data. The data may be used as a basis for computer animation, for example in motion pictures or video games. One known technique utilises inertial sensors to measure the movements of a subject. The inertial sensors are typically provided on the limbs and torso of the subject and the measured data may be used to determine relative movement of the sensors. Utilising an appropriate physics model, for example to simulate friction and gravity, motion capture data relating to the movements of the subject may be generated from the measured data. Inertial motion measurement systems or inertial motion capture (mocap) systems are heavily dependent on two resources, namely: a) the presence and accuracy of the hardware employed; and b) the presence and accuracy of the representation of the subject's limb measurements, which is used to predict when the subject's limbs will collide with real-world objects such as floors, steps, walls, tables, seats and other people. It is relatively straightforward to secure the best hardware performance possible.
However, the accuracy of the resultant data relies largely on correct limb measurements of the subject, which often require unduly long periods of testing and setup to determine the optimum values. This delay may disqualify inertial motion capture systems from more expensive projects where it is not possible for a production team to wait for long periods of time while motion capture operators use expertise as well as trial and error to determine the optimum limb measurements. Moreover, capturing movement data from two or more subjects makes the session significantly more difficult since the problems associated with obtaining correct limb measurements are multiplied.
With prior art systems the user produces resultant data at the time of the actual capture session and stores the data in various computer data file formats. The finality of the resultant data requires the operators to have the best performing hardware and the best subject limb representation before they begin a data collection session.
The present invention at least in preferred embodiments attempts to overcome or ameliorate at least some of the problems associated with known motion capture techniques and systems.
SUMMARY OF THE INVENTION
Viewed from a first aspect, the present invention relates to a method of generating motion capture data and/or animation data, the method comprising the steps of: providing subject data; collecting movement data relating to a subject's movements whilst the subject performs one or more actions; after the movement data has been collected modifying the subject data; and utilising the modified subject data to generate the motion capture data and/or animation data.
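As a high-level sketch of those four steps (the class, function and field names below are illustrative assumptions, not from the patent), the essential point is that generate() is simply re-run with a revised subject description while the collected movement data remains untouched:

```python
from dataclasses import dataclass, field

@dataclass
class SubjectData:
    """Approximate body dimensions; editable after the capture session."""
    limb_lengths_m: dict = field(default_factory=dict)
    limb_thicknesses_m: dict = field(default_factory=dict)

@dataclass
class MovementData:
    """Raw per-frame sensor readings collected during the session (never edited here)."""
    frames: list = field(default_factory=list)

def generate(subject: SubjectData, movement: MovementData) -> list:
    """Combine subject dimensions with collected movement data into output frames.
    A full implementation would run forward kinematics and a physics model."""
    return [{"frame": i, "subject": subject.limb_lengths_m, "sensors": f}
            for i, f in enumerate(movement.frames)]

# 1) provide (approximate) subject data, 2) collect movement data,
# 3) modify the subject data after collection, 4) regenerate the output.
subject = SubjectData(limb_lengths_m={"forearm_right": 0.27})
movement = MovementData(frames=[{"forearm_right": [1.0, 0.0, 0.0, 0.0]}])
first_pass = generate(subject, movement)
subject.limb_lengths_m["forearm_right"] = 0.25        # post-production correction
revised = generate(subject, movement)
```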
The ability to modify the subject data after the movement data has been collected is particularly advantageous since the subject data may be adapted or modified to match the subject after the movement data has been collected. Prior art techniques require that the subject data must be matched to the subject before the movement data is collected and this may lead to unacceptable delays. In contrast, the present invention allows the subject data to be modified after the movement data has been collected, potentially avoiding delays which may otherwise occur as the subject data is matched to the subject. Thus, the present invention may provide a dynamic array data collection method. The method is preferably for use with inertial motion measurement apparatus. In particular, the movement data is preferably collected from inertial motion measurement apparatus. The inertial motion measurement apparatus may comprise at least one inertial sensor, such as a gyroscope, to collect said movement data. Typically, inertial sensors are provided on the subject's limbs on either side of a joint. A first inertial sensor may be provided on an arm between the shoulder and the elbow and a second inertial sensor may be provided between the elbow and the wrist. The data collected from said first and second inertial sensors may be used to calculate the position and orientation of the wrist. The subject data is preferably stored in a subject data file. The subject data may comprise data relating to the size and/or dimensions of said subject. The subject data may initially approximate the subject. The subject data may comprise data relating to the length and/or thickness of each limb of the subject. Moreover, the subject data may comprise data relating to mass and/or density to allow improved physics modelling.
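A purely illustrative sketch of the wrist calculation mentioned above, assuming each inertial sensor reports its segment's world orientation as a rotation matrix and the subject data supplies the upper-arm and forearm lengths; the elbow and wrist positions follow by chaining the two segment vectors from the shoulder:

```python
import numpy as np

def rot_z(deg):
    """Rotation matrix about the Z axis; a stand-in for a sensor's measured orientation."""
    a = np.radians(deg)
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])

def wrist_position(shoulder, upper_arm_rot, forearm_rot, upper_arm_len, forearm_len):
    """Chain the two measured segment orientations with the subject's limb lengths."""
    bone = np.array([1.0, 0.0, 0.0])                      # segment rest direction
    elbow = shoulder + upper_arm_rot @ (bone * upper_arm_len)
    wrist = elbow + forearm_rot @ (bone * forearm_len)
    return elbow, wrist

# Upper arm rotated 30 degrees, forearm at 90 degrees, with assumed limb lengths.
elbow, wrist = wrist_position(np.zeros(3), rot_z(30.0), rot_z(90.0),
                              upper_arm_len=0.30, forearm_len=0.25)
print(elbow, wrist)
```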
The subject data may comprise data relating to the location of a centre of gravity of the subject. Alternatively, the location of the centre of gravity may be calculated from said subject data.
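A minimal sketch of the second option, assuming the subject data has been extended with an approximate mass and centre position for each body segment (the segment names and values below are hypothetical): the centre of gravity is the mass-weighted average of the segment centres.

```python
import numpy as np

# Hypothetical segment data: name -> (mass in kg, segment centre in metres)
segments = {
    "torso":     (30.0, np.array([0.0, 1.10, 0.0])),
    "head":      ( 5.0, np.array([0.0, 1.60, 0.0])),
    "left_leg":  (12.0, np.array([-0.1, 0.50, 0.0])),
    "right_leg": (12.0, np.array([ 0.1, 0.50, 0.0])),
}

def centre_of_gravity(segments):
    """Mass-weighted average of the segment centres."""
    total_mass = sum(mass for mass, _ in segments.values())
    weighted = sum(mass * centre for mass, centre in segments.values())
    return weighted / total_mass

print(centre_of_gravity(segments))
```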
The subject data may be provided before the movement data is collected. For example, a default subject data file may be provided to allow an approximate rendering of the collected movement data to be displayed. Alternatively, the subject data may be generated after the movement data has been collected. For example, a subject data file may be created specifically relating to the subject after the movement data has been collected.
The collected movement data is preferably stored in a movement data file which is separate from said subject data file. To facilitate future modifications, the subject data file and/or the movement data file are preferably stored independently of the motion capture data and/or animation data, as in the illustrative layout below.
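One possible file layout, purely illustrative (the field names are assumptions, not from the patent), keeping the subject data and the raw movement data in separate files so that either can be edited after the session without touching the other or the generated output:

```python
import json

# Subject data file: dimensions only; may start as an approximation and be
# refined in post-production.
subject_data = {
    "limbs": {
        "upper_arm_right": {"length_m": 0.30, "thickness_m": 0.09},
        "forearm_right":   {"length_m": 0.25, "thickness_m": 0.07},
    },
    "torso": {"length_m": 0.55, "thickness_m": 0.25},
}

# Movement data file: raw per-frame sensor readings only (orientation as a quaternion).
movement_data = {
    "frame_rate": 120,
    "frames": [
        {"frame": 0, "sensor": "upper_arm_right", "orientation": [1.0, 0.0, 0.0, 0.0]},
        {"frame": 0, "sensor": "forearm_right",   "orientation": [0.97, 0.0, 0.0, 0.26]},
    ],
}

with open("subject.json", "w") as f:
    json.dump(subject_data, f, indent=2)
with open("movement.json", "w") as f:
    json.dump(movement_data, f, indent=2)
```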
The subject data and the collected movement data may be used to model a virtual character substantially in real time. The modelled virtual character may be displayed on a screen. A physics model may be used to model the virtual character based on the collected movement data. The generated motion capture data and/or animation data may be suitable for animating a virtual character. The virtual character may comprise one or more of the following: a torso, a head and at least one limb. The collected movement data typically corresponds to the respective features of the virtual character.
The step of modifying the subject data may comprise tailoring the subject data to relate to or correspond to the subject. The data may be modified to correspond to the respective length and/or thickness of the limbs of the subject. Equally, the subject data may be modified to correspond to the subject's torso. The subject may be measured using scanning apparatus, such as a laser scanner. Preferably, however, at least one photograph of the subject may be taken and the required dimensions determined from said at least one photograph. The at least one photograph typically includes a scaling device, such as a frame having known dimensions, to allow accurate measurements to be determined. Preferably, the subject data is derived from two or more photographs.
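A sketch of the photographic measurement, under the assumption that the scaling frame's physical height and its height in pixels can both be identified in the image: the metres-per-pixel factor converts any measured pixel span into a limb dimension.

```python
def metres_per_pixel(frame_height_m, frame_height_px):
    """Scale factor derived from the scaling frame of known dimensions."""
    return frame_height_m / frame_height_px

def limb_length_from_photo(limb_span_px, frame_height_m, frame_height_px):
    """Estimate a limb length from its span in pixels in the same photograph."""
    return limb_span_px * metres_per_pixel(frame_height_m, frame_height_px)

# Example: a 2.0 m frame spans 800 px; a forearm spanning 100 px is ~0.25 m long.
print(limb_length_from_photo(100, 2.0, 800))
```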
The motion capture data and/or animation data may be generated directly from the collected movement data in combination with the modified subject data. However, in addition to modifying the subject data, it may be necessary to edit or modify the collected movement data. The motion capture data and/or animation data may be generated from the modified subject data and the modified motion capture data and/or animation data. Preferably, an operator may modify the generated motion capture data and/or animation data and the collected movement data may be modified in response to the changes to the generated data. The collected movement data may, for example, be modified by performing a rotational and/or translational transform on a core reference point. The rest of the movement data may then be modified based on the transform applied to the core reference point.
A modifier may be applied to a set of root or core data to modify related or subsidiary movement data. For example, a modifier may be applied to the movement data collected from one or more root sensors. A plurality of sensors may form a chain or network and the root sensor may be provided at a base of the chain or at a branch in the network. The modifier may relate to the position and/or orientation of the collected movement data. The modified movement data for said root sensor(s) may alter the calculated relative position and/or orientation of subsidiary (linked) sensors. Applying the modifier to data set(s) relating to the root sensor(s) may alter the position and/or orientation of a portion of the virtual character, for example one or more limbs, or the entire virtual character. For example, applying the modifier to the movement data collected from an upper arm sensor would alter the position and/or orientation of the forearm and hand.
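The following sketch (segment names, angles and lengths are hypothetical) illustrates the root-sensor idea: because orientations are composed down the chain, applying a modifier only to the root (upper-arm) rotation changes the computed positions of the forearm and hand as well.

```python
import numpy as np

def rot_z(deg):
    a = np.radians(deg)
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])

def chain_positions(root_pos, local_rotations, lengths):
    """World positions of each joint in a chain, from parent-relative rotations."""
    positions = [np.asarray(root_pos, dtype=float)]
    world = np.eye(3)
    for rot, length in zip(local_rotations, lengths):
        world = world @ rot                                   # compose down the chain
        positions.append(positions[-1] + world @ np.array([length, 0.0, 0.0]))
    return positions

rotations = [rot_z(10.0), rot_z(40.0), rot_z(30.0)]           # upper arm, forearm, hand
lengths = [0.30, 0.25, 0.18]
original = chain_positions([0.0, 1.4, 0.0], rotations, lengths)

# Modifier applied to the root sensor only; the subsidiary joints move with it.
modified_rots = [rot_z(20.0) @ rotations[0]] + rotations[1:]
modified = chain_positions([0.0, 1.4, 0.0], modified_rots, lengths)
```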
A jump vector modifier may be applied to the movement data collected from a core reference sensor and the position and/or orientation of the movement data for the remaining sensors determined based on the modified data for the core reference sensor. An artefact correction modifier may be applied to the core reference sensor to correct artefact errors.
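The specific jump vector values mentioned elsewhere in the application (47.406, 147.406, 247.406) are given without units, so the sketch below simply stands in a vertical scale factor for the modifier: it is applied to the core (hip) trajectory only, and every other sensor is rebuilt relative to the modified core, which is the propagation this paragraph describes. All names and numbers are illustrative assumptions.

```python
import numpy as np

def apply_jump_modifier(core_trajectory, sensor_offsets, vertical_scale):
    """Scale the vertical component of the core (hip) trajectory, then rebuild
    every other sensor's position relative to the modified core."""
    core = np.asarray(core_trajectory, dtype=float).copy()
    core[:, 1] *= vertical_scale                  # Y is up in this sketch
    others = {name: core + np.asarray(offset, dtype=float)
              for name, offset in sensor_offsets.items()}
    return core, others

# Hip height over a short jump (metres) and fixed offsets for two other sensors.
hip = [[0.0, 0.00, 0.0], [0.0, 0.25, 0.0], [0.0, 0.40, 0.0],
       [0.0, 0.25, 0.0], [0.0, 0.00, 0.0]]
offsets = {"head": [0.0, 0.70, 0.0], "left_foot": [-0.1, -0.90, 0.0]}
core, others = apply_jump_modifier(hip, offsets, vertical_scale=1.5)   # higher jump
```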
The core reference sensor is preferably the hip or pelvis sensor. The editing of the collected movement data may be performed to match collision timing. The collected movement data may be edited to scale an angular rotational component of the collected movement data. In other words, a component of the collected movement data relating to angular rotation may be modified to increase or decrease the angular rotation. For example, the collected movement data may comprise an angular rotation of 10° for a joint but this may be scaled to a smaller rotation of 9.5° or to a larger rotation of 10.5°. The scaling may enable intermediate angular orientations to be adjusted to reflect the operator-specified changes. Equally, the collected movement data may be edited to dictate the respective frames for lift-off and landing of a jump. The collected movement data may be edited to match a measured start and/or end position(s) to a corresponding start and/or end position(s) of the subject for a particular action. For example, the motion capture data and/or animation data relating to the location and/or angular orientation of a limb may be edited to match the start and/or end position of a corresponding limb of the subject for a particular action.
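A minimal sketch of the angular-scaling edit described above: the measured rotation track for a joint is scaled about its first frame, so the 10 degree example becomes 9.5 or 10.5 degrees and the intermediate frames follow proportionally (the linear scaling used here is an assumption).

```python
import numpy as np

def scale_rotation_track(angles_deg, scale):
    """Scale a joint's angular-rotation track relative to its first frame."""
    angles = np.asarray(angles_deg, dtype=float)
    return angles[0] + (angles - angles[0]) * scale

track = [0.0, 2.5, 5.0, 7.5, 10.0]            # measured rotation over five frames
print(scale_rotation_track(track, 0.95))      # final rotation becomes 9.5 degrees
print(scale_rotation_track(track, 1.05))      # final rotation becomes 10.5 degrees
```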
The data collected from said at least one inertial sensor is preferably also used to calculate the location and/or orientation of a reference point. The reference point may, for example, be a point on the torso. Preferably, the reference point corresponds to the subject's coccyx. The location and/or orientation of the reference point may be calculated using a physics model, for example utilising a friction coefficient and/or a jump coefficient. The reference point may correspond to the position of the root sensor.
The method preferably comprises the step of measuring the location and/or orientation of a reference point. The reference point is preferably located on the coccyx of the subject. The reference point may, for example, be tracked. A transmitter or receiver may be provided on the subject to define said reference point. The location of the transmitter or receiver may be tracked by a plurality of receivers or transmitters respectively. A radio or ultrasound signal may be used to measure the location and/or orientation of the reference point. Alternatively, a camera may film the subject and software may track a locator provided on the subject. The method preferably comprises the step of comparing the calculated location of said reference point to the measured location of said reference point. The accuracy of the collected movement data may be determined using this technique. For example, a large discrepancy between the calculated and measured positions may be indicative of a high level of inaccuracy in the collected data. Conversely, a close correlation between the calculated and measured positions may be indicative of a high level of accuracy in the collected data. The measured location and the calculated location are preferably simultaneously displayed on a screen.
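A sketch of the comparison step described above; the 5 cm threshold is an arbitrary assumption used only to show how a discrepancy check could flag frames with suspect collected data.

```python
import numpy as np

def reference_discrepancy(calculated, measured):
    """Euclidean distance between the calculated and measured reference points."""
    return float(np.linalg.norm(np.asarray(calculated, dtype=float)
                                - np.asarray(measured, dtype=float)))

def frame_looks_accurate(calculated, measured, threshold_m=0.05):
    """True when the two reference points correlate closely."""
    return reference_discrepancy(calculated, measured) <= threshold_m

print(frame_looks_accurate([0.00, 0.95, 0.0], [0.02, 0.96, 0.0]))   # close correlation
print(frame_looks_accurate([0.00, 0.95, 0.0], [0.30, 1.00, 0.1]))   # large discrepancy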
The subject data and/or the movement data may be stored in at least one array. To modify the data, one or more transforms may be performed on part or all of the data stored in said at least one array. For example, a translational or rotational transform may be applied to part or all of the stored data. Furthermore, the at least one array may comprise operator-specified values to allow the generated motion capture data and/or animation data to be refined. The operator-specified values may include one or more of the following set: jump prediction constants; static and/or kinetic friction constants; acceleration and/or deceleration constants; and jump vectors. Of course, default values may be pre-programmed. The method may comprise the further step of calculating the centre of weight or the centre of gravity based on the subject data. The position of the centre of weight may be approximated using the dimensions of the subject, for example the length and/or thickness of limbs. An operator may modify the centre of weight.

Viewed from a further aspect, the present invention relates to a method of generating motion capture data and/or animation data for a character, the method comprising the steps of: collecting movement data from at least one inertial sensor provided on a subject while the subject performs an action; calculating the location of a reference point on the character based on the collected movement data; and measuring the location of a reference point on said subject. The calculated location of the character reference point may then be compared to the measured location of the subject reference point. This comparison may provide an indication of the accuracy of the method. Preferably, the calculated and measured reference points are displayed on a screen simultaneously with a rendering of the character. The position of the character reference point on the character preferably corresponds to the position of the subject reference point on the subject.
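Returning briefly to the operator-specified refinement values listed above, one illustrative arrangement (the keys and default values are assumptions, not from the patent) is a table of pre-programmed defaults over which operator overrides are merged before the data is regenerated:

```python
# Pre-programmed defaults for the refinement parameters named in the text.
DEFAULTS = {
    "jump_prediction_threshold": 0.8,
    "jump_prediction_sensitivity": 0.5,
    "static_friction": 0.9,
    "kinetic_friction": 0.7,
    "acceleration_limit": 12.0,
    "deceleration_limit": 15.0,
    "jump_vector": [0.0, 1.0, 0.0],
}

def refinement_parameters(operator_overrides=None):
    """Merge operator-specified values over the pre-programmed defaults."""
    params = dict(DEFAULTS)
    params.update(operator_overrides or {})
    return params

# The operator lowers the friction constants, e.g. to model walking on ice.
params = refinement_parameters({"static_friction": 0.1, "kinetic_friction": 0.05})
```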
The location of the subject reference point is preferably measured directly. The subject reference point may be tracked while the subject performs said action. A tracking system may be used to track the position of a device located in a predetermined position on the body of the subject.
A transmitter may be provided on the subject and a signal transmitted by the transmitter received by a plurality of receivers and used to measure the location of the transmitter. Conversely, a receiver may be provided on the subject for receiving a signal transmitted from a plurality of transmitters. Triangulation may be used to determine the relative position of the reference point.
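A least-squares sketch of the triangulation step, assuming the tracking system yields a distance from the on-body transmitter to each fixed receiver (a two-dimensional example for brevity):

```python
import numpy as np

def locate_transmitter(receiver_positions, distances):
    """Estimate the transmitter position from its distances to known receivers.
    Subtracting the first receiver's range equation linearises the system."""
    p = np.asarray(receiver_positions, dtype=float)
    d = np.asarray(distances, dtype=float)
    A = 2.0 * (p[1:] - p[0])
    b = (d[0] ** 2 - d[1:] ** 2) + np.sum(p[1:] ** 2, axis=1) - np.sum(p[0] ** 2)
    estimate, *_ = np.linalg.lstsq(A, b, rcond=None)
    return estimate

receivers = [[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]]
true_position = np.array([1.0, 2.0])
distances = [float(np.linalg.norm(true_position - np.asarray(r))) for r in receivers]
print(locate_transmitter(receivers, distances))    # approximately [1.0, 2.0]
```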
The method may comprise the further step of modifying the collected movement data to match the calculated location of the reference point to the measured location of the reference point. The reference point may be fixed to the measured location for the animation data. Thus, movement data may be calculated based on the data collected from said at least one inertial sensor in combination with the measured location of said reference point. The core reference sensor may be provided at said reference point such that the position and/or orientation of the other sensors may be determined in relation to the measured position of the reference point. The measured location and the calculated location of said reference point are preferably simultaneously displayed on a screen. The step of modifying the collected movement data may comprise performing a rotational and/or translational transform on a selected portion of said data, for example corresponding to the character's limb, head, torso or pelvis. Alternatively, the rotational and/or translational transform may be applied to the complete data set. The motion capture data and/or the animation data is preferably generated from the collected movement data and subject data. The subject data is preferably modified to relate to said subject after the movement data has been collected. The animation data is preferably generated from the collected movement data and the modified subject data file. The subject data is preferably stored in a subject data file and the movement data stored in a separate movement data file.
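A sketch of fixing the reference point to the measured location: the whole frame of calculated sensor positions is translated so that the calculated reference point coincides with the measured one, leaving the relative positions derived from the inertial data unchanged. The sensor names and coordinates are hypothetical.

```python
import numpy as np

def lock_to_measured(sensor_positions, calculated_ref, measured_ref):
    """Translate every calculated sensor position by the reference-point error."""
    correction = (np.asarray(measured_ref, dtype=float)
                  - np.asarray(calculated_ref, dtype=float))
    return {name: np.asarray(pos, dtype=float) + correction
            for name, pos in sensor_positions.items()}

frame = {"pelvis": [0.00, 0.95, 0.0], "head": [0.00, 1.65, 0.0]}
corrected = lock_to_measured(frame, calculated_ref=[0.00, 0.95, 0.0],
                             measured_ref=[0.03, 0.90, 0.0])
```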
The handling of the subject data and the processing of the collected movement data are preferably performed by a processor. The present invention also relates to a processor programmed to perform these processing steps. Furthermore, the present invention relates to a computer programme for operating a processor in accordance with these processing steps; or a carrier having such a computer programme stored thereon.
Viewed from a still further aspect, the present invention relates to a system for generating motion capture data and/or animation data, the system comprising a motion capture system for collecting movement data relating to a subject performing one or more actions; a subject data modifier for modifying subject data after the movement data has been collected; and a motion capture data and/or animation data generator for generating said motion capture data and/or animation data utilising the modified subject data and the collected movement data. The motion capture system is preferably an inertial motion capture system.
Viewed from a yet further aspect, the present invention relates to a system for generating motion capture data and/or animation data, the system comprising an inertial motion capture system for collecting movement data from at least one inertial sensor provided on a subject while the subject performs an action; a processor for calculating the location of a reference point on the character based on the collected movement data; a tracking system for tracking a reference point on said subject. Advantageously, the calculated location of the character reference point may be compared to the measured location of the subject reference point. The system may be provided with comparison means for comparing the location of the measured reference point to the location of the calculated reference point. A notification may issue if the comparison means determines that the discrepancy between the measured and calculated reference point positions exceeds a predetermined value. Viewed from a yet further aspect, the present invention relates to a method of generating motion capture data and/or animation data, the method comprising the steps of: providing a plurality of sensors on a subject, the sensors being arranged in a chain; collecting movement data from said sensors whilst the subject performs one or more actions; applying a modifier to the movement data collected for one or more sensors in said chain and then calculating the position and/or orientation of the remaining sensors in said chain; and utilising the modified movement data to generate the motion capture data and/or animation data.
The processes and systems described herein may be used to generate motion capture data and/or animation data for one or more subjects simultaneously.
BRIEF DESCRIPTION OF THE DRAWINGS
A preferred embodiment of the present invention will now be described, by way of example only, with reference to the accompanying Figures, in which:
Figure 1 shows a virtual character in a first position animated using movement data collected from an inertial motion capture system;
Figure 2 shows a workflow chart illustrating the process according to the present invention;
Figure 3 shows the virtual character of Figure 1 animated using motion capture data and/or animation data generated using modified movement data in accordance with the present invention;
Figure 4 shows the virtual character in a second position animated using movement data collected from the inertial motion capture system;
Figure 5 shows the virtual character of Figure 3 animated using motion capture data and/or animation data generated using modified movement data in accordance with the present invention;
Figure 6 shows the virtual character in a third position created from a subject data file and collected movement data;
Figure 7 shows the virtual character of Figure 5 created from a modified data file and the collected movement data;
Figure 8 shows the virtual character in a fourth position created from a subject data file and collected movement data;
Figure 9 shows the virtual character of Figure 7 created from a modified data file and the collected movement data;
Figures 10, 11 and 12 show the effect of modifying a jump vector on the animation of the virtual character;
Figure 13 shows the virtual character in a fourth position with a calculated reference point and a tracked reference point displayed; and
Figure 14 shows the virtual character in a fifth position, again with the calculated reference point and the tracked reference point displayed.
DETAILED DESCRIPTION
The present invention relates to a method of generating motion capture data and/or animation data based on movement data collected from an inertial motion capture system.
The inertial motion capture system generates data relating to the movements of a subject, such as an actor. By combining the collected movement data with subject data, the motion capture data and/or animation data may be generated. The resultant data may be used, for example, to animate a virtual character 1 in three dimensions.
The subject data comprises dimensions of the virtual character 1, including the length and thickness of limbs. The subject data is stored in a dedicated file and may initially only approximate the subject. However, the method according to the present invention allows the subject data to be refined to represent the subject more closely after the movement data has been collected.
The virtual character 1 is displayed in relation to a three-dimensional coordinate system having X, Y and Z axes, as illustrated in Figure 1. For the sake of clarity, the virtual character 1 is represented by polygons in the Figures and the ground is represented by a horizontal line 2. The joints of the virtual character 1 are represented by circles, the centre of each circle representing the pivot point of the joint. The virtual character 1 has a torso 3; a head 5; a pelvis 7; left and right arms 9, 11; left and right hands 13, 15; left and right legs 17, 19; and left and right feet 21, 23. It will be understood that more complicated character models may be implemented.

The inertial motion capture system comprises a plurality of inertial sensors, such as gyroscopes, provided on the subject. The inertial sensors are provided on the subject's limbs, typically on each side of a joint. At least one inertial sensor is also provided on each of the torso and the head of the subject. A suitable inertial motion capture system is described in International Patent Application No. PCT/GB2007/001565, which is incorporated herein in its entirety by reference. The inertial sensors generate movement data and the collected movement data is used to calculate the relative movement of each sensor and, therefore, the relative movement of the subject's limbs, torso, head and so on.

The movement data collected from the inertial sensors could be used directly to animate the virtual character 1. However, this approach may lead to inaccuracies which mean that the resultant animation is not acceptable. As shown in Figure 1, the left and right hands 13, 15 of the virtual character 1 are below the level of the ground, represented by the line 2. If the data were used in this form, the hands 13, 15 of the virtual character 1 would appear to disappear into the ground 2 when the virtual character 1 reached this stage of the animation.

The inaccuracies in the animation sequence of the virtual character 1 can arise for a variety of reasons. For example, there may be errors in the collected movement data, for example due to skin, muscle and clothing artefacts or due to hardware data collection errors. Further errors may be introduced by inaccuracies in the subject's measurements, which are typically used to create a subject data file for rendering the virtual character 1.
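By way of illustration only, the following sketch shows how the relative orientations measured along a chain of inertial sensors combine with the limb lengths held in the subject data to place joints in world space; an error in a stored limb length therefore shifts the calculated position of a hand or foot, which is how the situation of Figure 1 can arise. The two-segment chain, the function names and the numerical values are assumptions for the example and are not taken from the actual system.

```python
import numpy as np

def rot_z(deg):
    """Rotation matrix about the Z axis, angle in degrees."""
    r = np.radians(deg)
    c, s = np.cos(r), np.sin(r)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def chain_positions(root, segment_angles_deg, segment_lengths):
    """Walk a sensor chain: each segment's orientation is applied on top of its
    parent's, and the next joint lies one segment length along that direction."""
    positions = [np.asarray(root, dtype=float)]
    orientation = np.eye(3)
    for angle, length in zip(segment_angles_deg, segment_lengths):
        orientation = orientation @ rot_z(angle)
        positions.append(positions[-1] + orientation @ np.array([length, 0.0, 0.0]))
    return positions

# Shoulder position, sensor angles for upper arm and forearm, and limb lengths
# taken from the subject data file (all values invented for the example).
joints = chain_positions([0.0, 1.4, 0.0],
                         segment_angles_deg=[-50.0, -40.0],
                         segment_lengths=[0.30, 0.28])
print("hand height above ground:", round(float(joints[-1][1]), 3))
```

Because the hand position depends directly on the stored segment lengths, revising those lengths after the session moves the calculated hand without touching the captured sensor data.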
In prior art systems, these inaccuracies and errors must be identified before the movement data is collected. The method according to the present invention creates a new file format before the resultant motion capture data and/or animation data is generated. This technique, at least in preferred embodiments, enables the process of determining the optimum limb measurements of the subject to be postponed until after the data collection session has been completed. In this way, the data collected at the session can accept edits at a later time, providing new edited resultant data.
A method of simplifying the subject's limb measurement process could, for example, involve photographing the subject at the beginning of the data collection session but tailoring the subject data to match the subject in post-production, after the movement data has been collected, based on the photograph(s) of the subject. The system can use anyone else's limb measurements for the duration of the capture and ignore incorrect representations of collisions between the limbs and the material world, such as floors and steps, during the session, removing the incorrect collisions in later edits.

A workflow chart illustrating the process according to the present invention is shown in Figure 2. The movement data is collected from the inertial sensors provided on the subject, as represented by Step A. As described in more detail below, one or more tracking devices may be provided on the subject and the position of these devices is tracked. The collected movement data and the tracking data are stored in a dedicated file. As represented by Step B, the collected movement data is combined with the stored subject data to generate keyframe data for animating the virtual character 1. The keyframes typically represent specific events that affect the root position of the virtual character 1, such as jumping with a specific velocity vector or changing contact points at a specific time.
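The workflow of Figure 2 can be summarised, purely as an illustrative sketch and not as the actual implementation, by the following outline, in which the movement data is captured once and the keyframe and per-frame data are regenerated whenever the subject data is revised. All function and field names are assumptions.

```python
def generate_keyframes(movement_data, subject_data):
    # Step B: combine the captured sensor samples with the skeleton description
    # to identify events affecting the root (jumps, contact changes, ...).
    return [{"frame": f, "root": sample, "leg_length": subject_data["leg_length"]}
            for f, sample in enumerate(movement_data)]

def generate_frames(keyframes):
    # Step D: expand the keyframes into per-frame motion capture / animation data.
    return [kf["root"] * kf["leg_length"] for kf in keyframes]

movement_data = [0.0, 0.1, 0.25, 0.4]     # Step A: collected once at the session
subject_data = {"leg_length": 0.95}       # initial, approximate measurements

frames = generate_frames(generate_keyframes(movement_data, subject_data))

# Steps C/E: the operator refines the subject data (or edits keyframes) after the
# session and regenerates; the collected movement data itself is never touched.
subject_data["leg_length"] = 0.88
frames = generate_frames(generate_keyframes(movement_data, subject_data))
# Step F: export or record the accepted result.
```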
An operator may then review the resultant animation and check for inaccuracies or errors which may have resulted from errors in the subject data file or the collected movement data file. As illustrated by Step C, the operator may modify the subject data file after the movement data has been collected, and the modified subject data is used to generate revised keyframe data. This process may be repeated until the keyframe data is acceptable. The operator may also modify the collected movement data to correct any inaccuracies or errors.
The modified subject data may then be combined with the collected movement data to generate motion capture data and/or animation data for each frame, as illustrated by Step D. The operator may edit the keyframes to refine the resultant motion capture data and/or animation data, as illustrated by Step E. The edited keyframes are then used to generate modified motion capture data and/or animation data for each frame. The step of editing the keyframes is an iterative process and may be repeated if required.
Once the operator is satisfied with the motion capture data and/or animation data, the generated data may be exported or recorded, as illustrated by Step F.
The method according to the present invention will now be described with reference to several examples.
As outlined above, the movement data collected from the inertial motion capture system may lead to inaccuracies such as the incorrect positioning of the virtual character 1 in relation to other objects such as the ground 2, as illustrated in Figure 1. The present invention allows the collected movement data to be modified post-production. Thus, any inaccuracies may be reduced or corrected after the motion capture session has finished. In the example shown in Figure 1, the movement data may be modified to translate the virtual character 1 upwardly such that the hands 13, 15 contact the ground 2. However, this error is likely to result from inaccuracies in the subject data, and the operator should instead modify the subject data such that the virtual character 1 is displayed with the hands 13, 15 in contact with the ground 2, as shown in Figure 3.
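The translational option mentioned above can be pictured with the following hedged sketch, which lifts the whole pose so that its lowest contact point rests on the ground line. The joint-name-to-height layout is assumed for the example only, and, as noted, the preferred remedy in this case is to correct the subject data rather than to translate the pose.

```python
def lift_to_ground(joint_heights, ground_y=0.0):
    """Translate the pose upwards so its lowest point rests on the ground line."""
    lowest = min(joint_heights.values())
    offset = ground_y - lowest if lowest < ground_y else 0.0
    return {name: y + offset for name, y in joint_heights.items()}

pose = {"left_hand": -0.04, "right_hand": -0.03, "left_foot": 0.31, "right_foot": 0.30}
print(lift_to_ground(pose)["left_hand"])   # 0.0 -- the hands now rest on the ground
```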
The operator may rely on their own judgement to identify any inaccuracies or errors in the resultant animation of the virtual character 1. Alternatively, the animation of the virtual character 1 can be compared to a video of the subject performing the actions to identify obvious defects. The virtual character 1 is shown in Figure 4 in a second position with the right foot 23 lifted off the ground 2, but the operator recognises that both feet 21, 23 should be on the ground 2. This inaccuracy in the collected movement data may be the result of a clothing artefact or faulty hardware data, for example. In the present case, the error may be corrected by rotating the virtual character 1 about the Y axis such that the left and right feet 21, 23 are both placed on the ground 2, as shown in Figure 5. A rotational transform may be applied to the collected movement data to provide the desired rotation about the Y axis. The animation may be corrected on a frame-by-frame basis, but preferably keyframes may be corrected and interpolation of the data performed to modify the intermediate frames.
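The rotational correction and the keyframe interpolation described above might, for example, be carried out along the following lines. The function names, the use of degrees and the linear interpolation between keyframes are assumptions made for the purpose of the sketch.

```python
import numpy as np

def rot_y(deg):
    """Rotation matrix about the Y axis, angle in degrees."""
    r = np.radians(deg)
    c, s = np.cos(r), np.sin(r)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def correct_frames(root_positions, keyframe_angles):
    """keyframe_angles maps frame index -> corrective Y rotation chosen by the
    operator; intermediate frames receive a linearly interpolated angle."""
    frames = sorted(keyframe_angles)
    angles = np.interp(range(len(root_positions)), frames,
                       [keyframe_angles[f] for f in frames])
    return [rot_y(a) @ p for a, p in zip(angles, root_positions)]

# Five frames of root data; the operator fixes only the first and last keyframes.
roots = [np.array([0.1 * i, 0.9, 0.0]) for i in range(5)]
fixed = correct_frames(roots, keyframe_angles={0: 0.0, 4: 12.0})
```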
If the measurements for the subject are not accurate this is likely to lead to inaccuracies in the motion capture data and/or animation data generated. The present invention allows the subject data to be modified after the movement data has been collected thereby enabling post-production modifications. The virtual character 1 is shown in a third position in Figure 6. However, the measurements of the pelvis 7 and the lower spine used to generate the motion capture data and/or animation data are disproportionate. The subject data file may be modified post-production to correct these errors. The modified subject data is then used in combination with the collected movement data to generate revised motion capture data and/or animation data which may be used to animate the virtual character 1. The corrected model of the virtual character 1 is shown in Figure 7.
A further example of the type of error that may arise due to an inaccuracy in the measurement of the subject is illustrated in Figure 8. The virtual character 1 is shown in a crouched position with both hands 13, 15 on the ground, but the feet 21, 23 are shown as being lifted off the ground 2. Again, the operator recognises that the subject's feet should be on the ground at this time. In the present case, the error is due at least in part to an inaccurate measurement of the subject's legs. The operator can address this by modifying the subject data to alter the length of the legs 17, 19 to correspond more closely to the length of the subject's legs, thereby bringing the feet 21, 23 closer to the ground, as shown in Figure 9. The operator may also modify the collected movement data to scale the measured angular rotation of the subject's lower legs. By modifying the data to increase the angular displacement between the subject's respective upper and lower legs, the feet 21, 23 may be brought into contact with the ground.
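The two edits described for Figures 8 and 9, overriding a limb length in the subject data and scaling the measured angular rotation of a joint in the collected movement data, can be pictured with the following minimal sketch. The record layout and field names are assumptions for illustration.

```python
def edit_leg(subject_data, movement_frame, new_lower_leg_length=None, knee_scale=1.0):
    """Return copies of the subject data and of one frame of movement data with a
    corrected lower-leg length and a scaled knee rotation."""
    subject = dict(subject_data)
    frame = dict(movement_frame)
    if new_lower_leg_length is not None:
        subject["lower_leg_length"] = new_lower_leg_length      # post-capture measurement fix
    frame["knee_angle_deg"] = frame["knee_angle_deg"] * knee_scale  # scale the measured flexion
    return subject, frame

subject = {"upper_leg_length": 0.46, "lower_leg_length": 0.52}
frame = {"knee_angle_deg": 38.0}
subject, frame = edit_leg(subject, frame, new_lower_leg_length=0.44, knee_scale=1.15)
```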
The data collected from the inertial motion capture system is used to model the movements of the virtual character 1. By altering parameters, the resultant animation of the virtual character 1 may more closely reflect the subject's movements. In certain cases, it may be desirable to alter parameters to reduce or exaggerate certain movements. For example, a jump vector parameter may be modified to alter the characteristics of a jump modelled by the virtual character 1. The jump vector parameter may be applied to a set of root or core data to modify the position and/or orientation of the virtual character. For example, the jump vector parameter may be applied to the movement data collected from a core reference sensor, and the position and/or orientation of the movement data for the remaining sensors determined based on the modified data for the core reference sensor. The core reference sensor in the present embodiment is the hip sensor. A similar technique may be employed to correct artefact errors.

As shown in Figure 10, the jump vector is initially set at 47.406 to closely model the jump performed by the subject. However, by increasing the jump vector to 147.406 the height of the jump may be increased, as shown in Figure 11. A further increase in the jump vector to 247.406 yields a still further increase in the height of the jump, as shown in Figure 12. To further refine the animation of the virtual character 1, the exact frames of the lift-off and landing of a jump may be specified by an operator. Conversely, the centre of weight of the virtual character 1 may be calculated and used, optionally in combination with one or more other variables, to keep the virtual character's feet on the ground 2 if it is known that no jump was performed.

The examples described above have outlined how an operator may compare the animation of the virtual character 1 to a video image of the subject performing the actions when the movement data was collected. Although the operator could equally rely on their own judgement when modifying the movement data, the present system offers a further technique for identifying inaccuracies in the collected data. A tracking device is provided on the subject to allow a reference point R to be tracked. In the present embodiment, the tracking device is provided on the back of the subject's pelvis and comprises an ultrasonic transmitter. A signal transmitted by the transmitter is detected by a plurality of receivers, and the location of the transmitter, and hence of the reference point, may be calculated by known triangulation techniques. An alternative to ultrasound for the tracking system is to use Near-Field Electromagnetic Ranging (NFER). A suitable NFER tracking system is disclosed in US 2004/0032363 (Schantz et al.), which is incorporated herein in its entirety by reference.
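The triangulation of the transmitter position can be illustrated by the following sketch, which solves a least-squares system built from the distances to receivers at known positions. This is one standard approach offered only as an example; the receiver positions and distances are invented for the illustration and are not taken from the described apparatus.

```python
import numpy as np

def locate(receivers, distances):
    """Estimate the transmitter position from receiver positions and measured
    ranges by linearising the sphere equations and solving in least squares."""
    receivers = np.asarray(receivers, dtype=float)
    d = np.asarray(distances, dtype=float)
    ref, d0 = receivers[0], d[0]
    # Subtracting the first sphere equation from the others gives linear rows:
    # 2 (r_i - r_0) . x = |r_i|^2 - |r_0|^2 - d_i^2 + d_0^2
    A = 2.0 * (receivers[1:] - ref)
    b = (np.sum(receivers[1:] ** 2, axis=1) - np.sum(ref ** 2)
         - d[1:] ** 2 + d0 ** 2)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

receivers = [[0, 0, 0], [4, 0, 0], [0, 4, 0], [0, 0, 3]]
true_pos = np.array([1.0, 2.0, 1.0])
dists = [np.linalg.norm(true_pos - np.array(r)) for r in receivers]
print(locate(receivers, dists))   # approximately [1. 2. 1.]
```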
In addition to measuring the position of the reference point on the subject using the tracking device, the system calculates the position of a corresponding reference point R' on the virtual character 1 based on the collected movement data. The calculated reference point R' indicates the expected position of the tracking device based on the movement data.
By comparing the position of the measured reference point R and the calculated reference point R', an indication of the accuracy of the system may be obtained. A large discrepancy between the positions of the measured reference point R and the calculated reference point R' is indicative of inaccuracies in the subject data and/or the collected movement data. Conversely, a close correlation between the positions of the measured reference point R and the calculated reference point R' suggests a high degree of accuracy in the subject data and the collected movement data. The virtual character 1 is shown at different stages in a running cycle in Figures 13 and 14. Accumulated errors in the collected movement data, combined with possible inaccuracies in the subject data, have resulted in a large discrepancy between the measured reference point R and the calculated reference point R' in the position illustrated in Figure 13. In contrast, there is a good correlation between the measured reference point R and the calculated reference point R' in the position illustrated in Figure 14, which suggests that the accumulated errors in the collected movement data and the errors in the subject data are relatively small.
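Such a comparison might, for example, flag the frames in which the distance between the measured reference point R and the calculated reference point R' exceeds a tolerance, as in the following sketch. The tolerance value and the data layout are assumptions for illustration.

```python
import math

def flag_discrepancies(measured, calculated, tolerance=0.05):
    """measured/calculated: one (x, y, z) tuple per frame. Returns the frames in
    which the R-to-R' distance exceeds the tolerance, with that distance."""
    flagged = []
    for frame, (r, r_calc) in enumerate(zip(measured, calculated)):
        error = math.dist(r, r_calc)
        if error > tolerance:
            flagged.append((frame, round(error, 3)))
    return flagged

measured   = [(0.00, 0.95, 0.0), (0.10, 0.97, 0.0), (0.20, 0.99, 0.0)]
calculated = [(0.00, 0.95, 0.0), (0.11, 0.96, 0.0), (0.32, 1.05, 0.0)]
print(flag_discrepancies(measured, calculated))   # only the last frame is flagged
```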
In addition to the array of software values that can be edited when post-processing the collected data into resultant data, the invention allows the introduction of an independent ultrasonic tracking apparatus, which detects the subject's pelvis position in the area in which the data collection session is performed, as a further element that helps produce more accurate resultant data. In post-production, where the operator has a choice of various values to correct, for example when correcting a jump in the primary collected data, the data from a positional ultrasonic tracking system detecting the position of the subject's pelvis in space can guide the operator in setting better values to correct the pelvis position in the resultant jump.
The measured reference point R may be used as a known reference point for calculating the relative positions of the inertial sensors. In other words, the calculated reference point R' may be locked onto the position of the measured reference point R to provide increased accuracy.
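Locking the calculated reference point onto the measured reference point might, by way of example, amount to applying the R-to-R' offset to the whole pose for each frame, so that the inertial data supplies the posture while the tracking system supplies the absolute position. The data layout below is assumed for the sketch.

```python
def lock_to_measured(pose, calculated_ref, measured_ref):
    """pose: joint name -> (x, y, z) derived from the inertial data. The offset
    between the measured and calculated reference points is applied to every joint."""
    offset = tuple(m - c for m, c in zip(measured_ref, calculated_ref))
    return {name: tuple(p + o for p, o in zip(point, offset))
            for name, point in pose.items()}

pose = {"pelvis": (0.32, 1.05, 0.0), "head": (0.30, 1.70, 0.0)}
locked = lock_to_measured(pose, calculated_ref=(0.32, 1.05, 0.0),
                          measured_ref=(0.20, 0.99, 0.0))
print(locked["pelvis"])   # coincides with the measured reference point R
```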
In addition to allowing inertial motion capture systems to compete in the business of expensive data collection sessions, the present invention, at least in preferred embodiments, with its ability to manipulate the data post-production, can delineate and separate the factors affecting the resultant data in more detail within an unlimited array of known factors that affect the resultant data and are editable in post-production. These factors, which could for example be grouped into a single editable parameter set as sketched after this list, include:
i) Scaling the data to alter the angular rotation of elements of the hardware, to compensate for skin, muscle or clothing artefacts that introduce errors into the resultant data and that may be predictable and correctable by introducing scaling of subject values.
ii) In addition to limb measurements and the scalable rate of motion of limbs, the particular angular relationship between adjoining limbs at the start of each set of collected data, which must match the subject's actual angular relationship between adjoining limbs.
iii) Jump prediction constants, such as threshold and sensitivity, which can be applied to values extracted from the rate of change in the angle of the knees and ankles and which work to predict and calculate jumps in the collected data.
iv) Static or kinetic friction of the floor (for example to differentiate between walking on ice and on rubber).
v) Outer measurements or thickness of limbs (which determine side, inner or outer collisions of the limbs with the outside world).
vi) Gravity and centre of weight, which can be adjusted for particular body shapes.
vii) Lift-off or landing frames of jumps that may be misjudged by the automatic software calculations and can be corrected in post-production.
viii) Rotation of particular hardware data, which can correct the resultant data where skin, muscle or clothing artefacts, as well as undetected malfunctioning hardware data, have introduced errors during the data collection session.
ix) New jump vectors, which can be introduced at the centre of weight of the subject at the frame of the operator's choosing to produce resultant data more closely resembling the actual action performed at the data collection session.
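By way of example only, the factors listed above could be grouped into a single editable parameter set along the following lines; every field name and default value is an assumption rather than part of the actual file format.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class PostProductionSettings:
    angular_scale: Dict[str, float] = field(default_factory=dict)      # (i) per-sensor artefact scaling
    start_pose_angles: Dict[str, float] = field(default_factory=dict)  # (ii) adjoining-limb angles at the start
    jump_threshold: float = 0.5                                        # (iii) jump prediction constants
    jump_sensitivity: float = 1.0
    floor_friction: float = 0.8                                        # (iv) static/kinetic floor friction
    limb_thickness: Dict[str, float] = field(default_factory=dict)     # (v) outer measurements of limbs
    gravity: float = 9.81                                              # (vi) gravity / centre of weight
    lift_off_frame: Optional[int] = None                               # (vii) operator-set jump frames
    landing_frame: Optional[int] = None
    sensor_rotation_fix: Dict[str, float] = field(default_factory=dict)  # (viii) rotation of particular hardware data
    jump_vectors: Dict[int, float] = field(default_factory=dict)         # (ix) frame -> new jump vector

# One captured session can then be regenerated under different settings.
settings = PostProductionSettings(jump_vectors={120: 147.406}, lift_off_frame=118)
```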
It will be appreciated that various changes and modifications may be made to the system described herein without departing from the spirit and scope of the present invention. For example, although the system has been illustrated with movement data collected from one subject, it may be collected simultaneously from two or more subjects.
The generated motion capture data and/or animation data may then be used to animate a plurality of virtual characters.

Claims

CLAIMS:
1. A method of generating motion capture data and/or animation data, the method comprising the steps of: providing subject data; collecting movement data relating to a subject's movements whilst the subject performs one or more actions; after the movement data has been collected modifying the subject data; and utilising the modified subject data to generate the motion capture data and/or animation data.
2. A method of generating motion capture data and/or animation data as claimed in claim 1, wherein the step of modifying the subject data comprises tailoring the subject data to relate to the subject.
3. A method of generating motion capture data and/or animation data as claimed in claim 1 or claim 2, wherein the subject data comprises data relating to the dimensions of the subject.
4. A method of generating motion capture data and/or animation data as claimed in claim 3, wherein the subject data comprises data relating to the length and/or thickness of at least one limb of the subject.
5. A method of generating motion capture data and/or animation data as claimed in any one of the preceding claims, wherein a modifier is applied to a set of core data to modify related data.
6. A method of generating motion capture data and/or animation data as claimed in claim 5, wherein the core data is movement data collected from one or more core reference sensors.
7. A method of generating motion capture data and/or animation data as claimed in any one of the preceding claims further comprising the step of scaling an angular rotation component of the collected movement data.
8. A method of generating motion capture data and/or animation data as claimed in any one of the preceding claims, wherein at least one inertial sensor is provided on the subject to collect said movement data relating to the subject's movements.
9. A method of generating motion capture data and/or animation data as claimed in claim 8 further comprising the step of calculating the location of a reference point based on the movement data collected from said at least one inertial sensor.
10. A method of generating motion capture data and/or animation data as claimed in any one of the preceding claims further comprising the step of measuring the location of a reference point on the subject.
11. A method of generating motion capture data and/or animation data as claimed in claim 9 and claim 10 further comprising the step of comparing the calculated location of said reference point to the measured location of said reference point.
12. A method of generating motion capture data and/or animation data as claimed in claim 11, wherein the measured location and the calculated location are simultaneously displayed on a screen.
13. A method of generating motion capture data and/or animation data as claimed in any one of the preceding claims further comprising the step of calculating a centre of weight based on the subject data.
14. A method of generating motion capture data and/or animation data for a character, the method comprising the steps of: collecting movement data from at least one inertial sensor provided on a subject while the subject performs an action; calculating the location of a reference point on the character based on the collected movement data; and measuring the location of said reference point on said subject.
15. A method of generating motion capture data and/or animation data as claimed in claim 14, further comprising the step of modifying the collected movement data to match the calculated location of the reference point to the measured location of the reference point.
16. A method of generating motion capture data and/or animation data as claimed in claim 15, wherein the step of modifying the collected movement data comprises performing a rotational and/or translational transform on said data.
17. A method of generating motion capture data and/or animation data as claimed in any one of claims 14, 15 or 16, wherein the measured location and the calculated location of said reference point are simultaneously displayed on a screen.
18. A method of generating motion capture data and/or animation data as claimed in any one of claims 14 to 17, wherein the motion capture data and/or animation data is generated from the collected movement data and a subject data file.
19. A method of generating motion capture data and/or animation data as claimed in claim 18, wherein the subject data file is modified to relate to said subject after the movement data has been collected, the animation data being generated from the collected movement data and the modified subject data file.
20. A system for generating motion capture data and/or animation data, the system comprising a motion capture system for collecting movement data relating to a subject performing one or more actions; a subject data modifier for modifying subject data after the movement data has been collected; and a motion capture data and/or animation data generator for generating said motion capture data and/or animation data utilising the modified subject data and the collected movement data.
21. A system for generating motion capture data and/or animation data, the system comprising an inertial motion capture system for collecting movement data from at least one inertial sensor provided on a subject while the subject performs an action; a processor for calculating the location of a reference point on the character based on the collected movement data; and a tracking system for tracking a reference point on said subject.