
CN101441776B - Three-dimensional human body motion editing method driven by demonstration performance based on acceleration sensor - Google Patents

Three-dimensional human body motion editing method driven by demonstration performance based on acceleration sensor

Info

Publication number
CN101441776B
CN101441776B · CN2008101625955A · CN200810162595A
Authority
CN
China
Prior art keywords
action
training sample
sample
acceleration
demonstration performance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN2008101625955A
Other languages
Chinese (zh)
Other versions
CN101441776A (en)
Inventor
耿卫东 (Geng Weidong)
梁秀波 (Liang Xiubo)
李启雷 (Li Qilei)
张翔 (Zhang Xiang)
张顺 (Zhang Shun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU
Priority to CN2008101625955A
Publication of CN101441776A
Application granted
Publication of CN101441776B
Current legal status: Expired - Fee Related

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a demonstration-performance-driven method for editing three-dimensional human body motion based on an acceleration sensor. The method comprises the following steps: 1) segmenting motion-capture data and computing, from each segment, the acceleration of a designated body part; 2) generating a training sample set automatically by adding noise; 3) preprocessing the training samples; 4) training a hidden Markov model; 5) connecting the acceleration sensor and attaching it to the designated part of the human body; 6) having the user perform demonstration actions; 7) preprocessing the demonstration-performance sample; 8) recognizing the demonstration-performance action with the hidden Markov model; and 9) editing and modifying the recognized action to obtain the resulting motion sequence. The method performs motion recognition and motion choreography with an acceleration sensor, removes the need of prior recognition methods to collect large numbers of training samples from users, and provides a demonstration-performance-driven way to recognize and arrange motions in acceleration space.

Description

Demonstration-performance-driven three-dimensional human body choreography method based on an acceleration sensor
Technical field
The present invention relates to three-dimensional human motion recognition and choreography, and in particular to a demonstration-performance-driven three-dimensional human body choreography method based on an acceleration sensor.
Background technology
Demonstration-performance-driven choreography builds on existing work in several fields, including puppet-driven animation, motion capture with a minimal set of markers or sensors, and user-independent action recognition.
Puppet-driven animation is an effective form of performance-driven animation that converts a performer's actions into the actions of a computer virtual character in real time. It gives traditional film animation a live demonstration and visualization tool; its key technical question is how to map the performer's motion onto a target character whose size and proportions differ from the performer's. The method requires special input equipment, however, and because this natural control style transfers the performer's idiosyncrasies directly onto the virtual character, controlling a computer puppet usually demands long practice and professional knowledge, so only a few experts have mastered it.
Capturing the motion of real actors is an alternative performance-driven animation method, but motion-capture equipment is bulky, expensive, and cumbersome to use. Many researchers have therefore explored fast and effective motion estimation using only a few markers or sensors; by input device, these approaches divide into vision-based and sensor-based methods. Their common pipeline is: acquire incomplete motion information with a few cameras, markers, or sensors; run motion retrieval or action recognition against a large motion-capture database to obtain high-quality motion clips; and produce the final continuous human motion data with motion transition and motion blending algorithms.
Many machine learning methods can be used for action recognition. The hidden Markov model (HMM) is a widely used pattern recognition method, particularly well suited to modeling time series with spatio-temporal variation. From a practical standpoint, a drawback of conventional machine learning is the need to collect large numbers of samples from many different users for training. In fields such as human-computer interaction, user-independent recognizers have therefore attracted broad research interest; the key question is how a computer can automatically generate a training set of sufficient coverage.
Motion sensors are a new class of device that capture many aspects of movement. With the development of micro-electro-mechanical systems (MEMS), their size and price have dropped sharply, making them usable for human-computer interaction and motion capture. The acceleration sensor is a motion sensor that measures acceleration; it is inexpensive and well suited to everyday interactive applications and performance-driven animation. Building on the acceleration sensor, we propose a demonstration-performance-driven method for three-dimensional human body choreography.
Summary of the invention
The purpose of this invention is to provide a demonstration-performance-driven three-dimensional human body choreography method based on an acceleration sensor.
The method comprises the following steps:
1) Segment the motion-capture data so that each segment contains a single action. Read the positions of the skeleton nodes from the segmented motion-capture data and obtain the position of the virtual body part corresponding to the site where the acceleration sensor is worn on the human body. Take the second derivative of this position to obtain the acceleration in world coordinates; add gravitational acceleration to its vertical component and multiply by the inverse of the global rotation of the corresponding node to obtain the acceleration in the sensor's local coordinate frame.
2) Using the above acceleration data as original samples, generate a series of training samples by adding noise.
3) Preprocess the training samples to obtain standardized training samples.
4) Read in the initial parameters of the hidden Markov model and train it with the standardized training samples.
5) Connect the acceleration sensor and attach it to the designated part of the user's body.
6) The user, wearing the acceleration sensor, performs the demonstration action; a button controls the start and end of the demonstration.
7) Preprocess the demonstration-performance sample to obtain a standardized demonstration-performance sample.
8) Recognize the demonstration-performance sample output by step 7) with the hidden Markov model trained in step 4).
9) Edit and modify the recognized action to obtain the resulting motion sequence; the editing comprises time adjustment of the action, exaggeration editing, and transition blending of the actions.
Generating a series of training samples from the original acceleration samples by adding noise: two noise generators are used, uniformly distributed noise and Gaussian noise. The control parameter is the ratio of the signal variance to the noise variance. Training samples were generated with different signal-to-noise variance ratios and a series of quantitative experiments was carried out; the action recognition rate is highest when the ratio lies between 4 and 5.
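In symbols (our notation, restating the paragraph above): if the original sample has variance $\sigma_s^2$ and the added noise has variance $\sigma_n^2$, the control parameter is

$$ r = \frac{\sigma_s^2}{\sigma_n^2}, \qquad \text{so the noise is scaled to } \sigma_n = \frac{\sigma_s}{\sqrt{r}}, \qquad r \in [4, 5] \text{ at the reported optimum.} $$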
Preprocessing the training samples to obtain standardized training samples: the method of "spline interpolation and resampling" normalizes the training sample length; the random-variable standardization formula normalizes the sample amplitude; and linear principal component analysis extracts features from the training samples.
Editing and modifying the recognized action to obtain the resulting motion sequence, the editing comprising time adjustment, exaggeration editing, and transition blending: in acceleration space, the dynamic time warping algorithm finds the optimal alignment path between the demonstration-performance sample and the recognized action sample, and the timing of the recognized action is then adjusted by inserting or deleting action frames; an intensity parameter is extracted from the demonstration performance, the first derivative of the position of the designated body part gives its velocity, multiplying the velocity by the intensity parameter gives the exaggerated velocity, and integrating again gives the exaggerated action; finally, the recognized action is transition-blended with the previously recognized action, and the resulting motion sequence is generated and output.
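The exaggeration edit can be stated compactly (our notation: $p(t)$ is the position of the designated part, $\alpha$ the intensity parameter extracted from the demonstration performance):

$$ v(t) = \dot{p}(t), \qquad \tilde{v}(t) = \alpha\, v(t), \qquad \tilde{p}(t) = p(t_0) + \int_{t_0}^{t} \tilde{v}(\tau)\,\mathrm{d}\tau. $$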
The aim of the present invention is to give animators an intuitive, natural way to express their creative intent and to arrange three-dimensional human motion data efficiently. Where the equipment for traditional performance-driven animation is expensive and hard to use, we use a cheap acceleration sensor and propose a simple, practical choreography method: the user only needs to attach the acceleration sensor to the designated position on the body and perform the action. Where traditional machine learning methods require the user to collect large numbers of training samples, our method generates all training samples automatically from motion-capture data, and the recognition repertoire of the system can be extended by adding new motion-capture data to the action database. After recognizing the user's demonstration performance, the system can further adjust the retrieved motion-capture data according to the timing and intensity of the performance. The method places no particular demands on the user; even an inexperienced beginner can become familiar and proficient with the system in a short time. The invention thus solves the problem that performance-driven animation could be used only by a few experienced experts, saving the animation industry manpower and money.
Description of drawings
Fig. 1 shows the placement of the acceleration sensor on the human body;
Fig. 2 shows the framework and workflow of the choreography system;
Fig. 3(a) is an example motion-capture data fragment;
Fig. 3(b) is the acceleration sample computed from the motion-capture data;
Fig. 3(c) is a sample generated by adding uniformly distributed noise;
Fig. 3(d) is a sample generated by adding Gaussian noise;
Fig. 3(e) is a demonstration-performance action;
Fig. 3(f) is the sample obtained by sampling the demonstration performance;
Fig. 4(a) is a training sample after preprocessing;
Fig. 4(b) is a demonstration-performance sample after preprocessing;
Fig. 5 illustrates training and recognition with the hidden Markov model;
Fig. 6 illustrates dynamic time warping;
Fig. 7(a) is a punch action;
Fig. 7(b) is the reaction of the character being hit;
Fig. 7(c) is the punch action after exaggeration;
Fig. 7(d) is the reaction action after exaggeration;
Fig. 8(a) is an upper-body choreography example;
Fig. 8(b) is a lower-body choreography example.
Embodiment
The demonstration-performance-driven three-dimensional human body choreography method based on an acceleration sensor (see Fig. 2) comprises the following steps:
1) Segment the motion-capture data so that each segment contains a single action. Read the positions of the skeleton nodes from the segmented motion-capture data and obtain the position of the virtual body part corresponding to the site where the acceleration sensor is worn on the human body. Take the second derivative of this position to obtain the acceleration in world coordinates; add gravitational acceleration to its vertical component and multiply by the inverse of the global rotation of the corresponding node to obtain the acceleration in the sensor's local coordinate frame (a numerical sketch of this computation follows the step list).
2) Using the above acceleration data as original samples, generate a series of training samples by adding noise.
3) Preprocess the training samples to obtain standardized training samples.
4) Read in the initial parameters of the hidden Markov model and train it with the standardized training samples.
5) Connect the acceleration sensor and attach it to the designated part of the user's body (see Fig. 1).
6) The user, wearing the acceleration sensor, performs the demonstration action; a button controls the start and end of the demonstration.
7) Preprocess the demonstration-performance sample to obtain a standardized demonstration-performance sample.
8) Recognize the demonstration-performance sample output by step 7) with the hidden Markov model trained in step 4).
9) Edit and modify the recognized action to obtain the resulting motion sequence; the editing comprises time adjustment of the action, exaggeration editing, and transition blending of the actions.
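Step 1 admits a compact numerical sketch (our illustration, not code from the patent; the array names, the Y-up world convention, and the use of NumPy are assumptions):

import numpy as np

def sensor_frame_acceleration(positions, node_rotations, dt, g=9.81):
    """Step 1 sketch: virtual-sensor acceleration from motion-capture data.

    positions      -- (T, 3) world positions of the virtual sensor site
    node_rotations -- (T, 3, 3) global rotation matrices of the attached node
    dt             -- motion-capture sampling interval in seconds
    """
    # Second derivative of position gives acceleration in world coordinates.
    accel_world = np.gradient(np.gradient(positions, dt, axis=0), dt, axis=0)
    # Add gravitational acceleration on the vertical component; a real
    # accelerometer measures gravity in addition to motion.
    accel_world[:, 1] += g  # assumes a Y-up world frame
    # Multiply by the inverse (transpose) of each node's global rotation to
    # express the vector in the sensor's local coordinate frame.
    return np.einsum('tij,tj->ti', node_rotations.transpose(0, 2, 1), accel_world)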
Generating a series of training samples from the original acceleration samples by adding noise: two noise generators are used, uniformly distributed noise and Gaussian noise. The control parameter is the ratio of the signal variance to the noise variance. Training samples were generated with different signal-to-noise variance ratios and a series of quantitative experiments was carried out; the action recognition rate is highest when the ratio lies between 4 and 5.
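A minimal sketch of this sample generator, assuming the ratio is defined as signal variance divided by noise variance and that noise is drawn independently per channel (function and parameter names are ours):

import numpy as np

def make_training_samples(original, n_samples, ratio=4.5, kind='gaussian', seed=None):
    """Generate noisy training samples from one original acceleration sample.

    ratio -- signal variance divided by noise variance; the patent reports
             the best recognition rate for ratios between 4 and 5.
    """
    rng = np.random.default_rng(seed)
    noise_std = np.sqrt(original.var(axis=0) / ratio)  # per-channel noise scale
    samples = []
    for _ in range(n_samples):
        if kind == 'gaussian':
            noise = rng.normal(0.0, noise_std, size=original.shape)
        else:  # uniformly distributed noise with the same variance
            half_width = noise_std * np.sqrt(3.0)  # Var(U[-a, a]) = a**2 / 3
            noise = rng.uniform(-half_width, half_width, size=original.shape)
        samples.append(original + noise)
    return samples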
Preprocessing the training samples to obtain standardized training samples: different users perform the same action at very different speeds, so the sampled acceleration sequences differ greatly in length, and a "cubic spline interpolation and resampling" scheme normalizes the training sample length; different users also apply very different force, so the sampled amplitudes differ greatly, and the random-variable standardization formula from probability theory normalizes the sample amplitude; the principal features implied by an action sample are buried in irrelevant noise, so linear principal component analysis extracts the features, and only the preprocessed training samples are used for training and recognition.
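The three preprocessing stages might look as follows (a sketch under our assumptions: a fixed resampled length of 64 frames and an 8-dimensional PCA feature space, neither of which the patent specifies):

import numpy as np
from scipy.interpolate import CubicSpline

def resample(sample, target_len=64):
    """Cubic spline interpolation + resampling to a common length."""
    t = np.linspace(0.0, 1.0, len(sample))
    return CubicSpline(t, sample, axis=0)(np.linspace(0.0, 1.0, target_len))

def standardize(sample):
    """Random-variable standardization: zero mean, unit variance per channel."""
    return (sample - sample.mean(axis=0)) / (sample.std(axis=0) + 1e-8)

def pca_features(samples, n_components=8):
    """Linear principal component analysis over the standardized samples."""
    X = np.stack([standardize(resample(s)).ravel() for s in samples])
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # principal directions
    return Xc @ Vt[:n_components].T  # (n_samples, n_components) feature matrix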
Editing and modifying the recognized action to obtain the resulting motion sequence, the editing comprising time adjustment, exaggeration editing, and transition blending: in acceleration space, the dynamic time warping algorithm finds the optimal alignment path between the demonstration-performance sample and the recognized action sample, and the timing of the recognized action is then adjusted by inserting or deleting action frames; an intensity parameter is extracted from the demonstration performance, the first derivative of the position of the designated body part gives its velocity, multiplying the velocity by the intensity parameter gives the exaggerated velocity, and integrating again gives the exaggerated action; finally, the recognized action is transition-blended with the previously recognized action, and the resulting motion sequence is generated and output.
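A sketch of the two edits described above: a textbook dynamic-time-warping alignment and the derivative-scale-integrate exaggeration edit (our implementation; the patent gives no code):

import numpy as np

def dtw_path(a, b):
    """Optimal alignment path between acceleration sequences a (m, d), b (n, d)."""
    m, n = len(a), len(b)
    D = np.full((m + 1, n + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    path, i, j = [], m, n  # backtrack from the end of both sequences
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]]))
        i, j = (i - 1, j - 1) if step == 0 else (i - 1, j) if step == 1 else (i, j - 1)
    return path[::-1]

def exaggerate(positions, intensity, dt):
    """Differentiate position, scale the velocity, integrate back to position."""
    velocity = np.gradient(positions, dt, axis=0)  # first derivative
    v_exag = intensity * velocity                  # exaggerated velocity
    disp = np.cumsum(v_exag * dt, axis=0)          # numerical integration
    return positions[0] + disp - disp[0]           # anchor first frame at p(t0)

In a full system the path returned by dtw_path would drive the insertion or deletion of frames in the recognized clip.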
The embodiment is illustrated below with an example:
First, for each action fragment in the motion-capture database, the acceleration of the corresponding body part is computed as the initial sample. Fig. 3(a) is an example motion-capture fragment, and Fig. 3(b) is the acceleration sample computed from it. The system then automatically generates a series of training samples by adding noise; two noise generators are available: uniformly distributed noise, Fig. 3(c), and Gaussian noise, Fig. 3(d). The training sample set is standardized and features are extracted; Fig. 4(a) is a training sample after standardization. Next, the hidden Markov model is trained with the preprocessed sample set; the training process is shown in Fig. 5. The acceleration sensor is connected and attached to the designated part of the body; Fig. 1 shows its placement on the human body. The user then performs the demonstration action while wearing the sensor, using a button to control the start and end of the performance, and the acceleration is sampled; Fig. 3(e) and Fig. 3(f) show a demonstration action and the sample obtained from the sensor. Before recognition, the demonstration performance is likewise standardized and its features extracted; Fig. 4(b) is a demonstration-performance sample after standardization. The trained hidden Markov model then recognizes the demonstration action; the process is shown in Fig. 5. Finally, the recognized action is time-adjusted with dynamic time warping according to the timing of the demonstration; Fig. 6 is a warping example, in which the motion data are drawn as one-dimensional curves. The recognized action is exaggerated according to the intensity of the demonstration; Figs. 7(a)-(d) show the effect of exaggeration on a punch and on the reaction to being hit. Figs. 8(a) and 8(b) give upper-body and lower-body examples of acceleration-sensor-based, demonstration-performance-driven three-dimensional human choreography.
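For completeness, a sketch of per-class HMM training and maximum-likelihood recognition over the preprocessed feature sequences. The third-party hmmlearn package and the 5-state, diagonal-covariance configuration are our assumptions; the patent only specifies that a hidden Markov model is trained and used for recognition:

import numpy as np
from hmmlearn import hmm  # third-party HMM package; our choice, not the patent's

def train_models(class_samples, n_states=5):
    """class_samples maps an action name to a list of (T, d) feature arrays."""
    models = {}
    for name, samples in class_samples.items():
        X = np.concatenate(samples)          # observations stacked along time
        lengths = [len(s) for s in samples]  # boundaries of individual samples
        model = hmm.GaussianHMM(n_components=n_states,
                                covariance_type='diag', n_iter=50)
        model.fit(X, lengths)
        models[name] = model
    return models

def recognize(models, sample):
    """Return the action class whose HMM gives the highest log-likelihood."""
    return max(models, key=lambda name: models[name].score(sample))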

Claims (1)

1. A demonstration-performance-driven three-dimensional human body choreography method based on an acceleration sensor, characterized in that it comprises the steps of:
1) segmenting the motion-capture data in the motion-capture database so that each segment contains a single action; reading the positions of the skeleton nodes from the segmented motion-capture data and obtaining the position of the virtual body part corresponding to the site where the acceleration sensor is worn on the human body; taking the second derivative of this position to obtain the acceleration in world coordinates; adding gravitational acceleration to the vertical component of the world-coordinate acceleration and multiplying by the inverse of the global rotation of the corresponding node to obtain the acceleration in the acceleration sensor's local coordinate frame;
2) using the acceleration values in the above local coordinate frame as original samples, generating a series of training samples by adding noise;
3) preprocessing the above training samples to obtain standardized training samples;
4) reading in the initial parameters of the hidden Markov model and training it with the standardized training samples;
5) connecting the acceleration sensor and attaching it to the designated part of the user's body;
6) the user, wearing the acceleration sensor, performing the demonstration action, with a button controlling the start and end of the demonstration;
7) preprocessing the sample of the above demonstration action to obtain a standardized demonstration-performance sample;
8) recognizing, with the hidden Markov model trained in step 4), the standardized demonstration-performance sample output by step 7);
9) editing and modifying the recognized action to obtain the resulting motion sequence, the editing comprising time adjustment of the action, exaggeration editing, and transition blending of the actions;
wherein said generating of a series of training samples from the original samples by adding noise comprises: using either of two noise generators, uniformly distributed noise or Gaussian noise; the control parameter is the ratio of the signal variance to the noise variance; training samples are generated with different values of said control parameter and a series of quantitative experiments is carried out; the action recognition rate is highest when the signal-to-noise variance ratio lies between 4 and 5;
and wherein said preprocessing of the training samples to obtain standardized training samples comprises: normalizing the training sample length by the method of "spline interpolation and resampling", normalizing the training sample amplitude by the random-variable standardization formula, and extracting features from the training samples by linear principal component analysis.
CN2008101625955A 2008-12-04 2008-12-04 Three-dimensional human body motion editing method driven by demonstration performance based on acceleration sensor Expired - Fee Related CN101441776B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2008101625955A CN101441776B (en) 2008-12-04 2008-12-04 Three-dimensional human body motion editing method driven by demonstration performance based on acceleration sensor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2008101625955A CN101441776B (en) 2008-12-04 2008-12-04 Three-dimensional human body motion editing method driven by demonstration performance based on acceleration sensor

Publications (2)

Publication Number Publication Date
CN101441776A CN101441776A (en) 2009-05-27
CN101441776B (granted) 2010-12-29

Family

ID=40726195

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2008101625955A Expired - Fee Related CN101441776B (en) 2008-12-04 2008-12-04 Three-dimensional human body motion editing method driven by demonstration performance based on acceleration sensor

Country Status (1)

Country Link
CN (1) CN101441776B (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8947441B2 (en) 2009-06-05 2015-02-03 Disney Enterprises, Inc. System and method for database driven action capture
CN101615301B (en) * 2009-07-29 2013-03-27 腾讯科技(深圳)有限公司 Path control method and system for target in computer virtual environment
CN101989076A (en) * 2010-08-24 2011-03-23 北京水晶石数字科技有限公司 Method for controlling shooting by three-dimensional software
CN101995835B (en) * 2010-08-24 2012-06-27 北京水晶石数字科技股份有限公司 System for controlling performance by three-dimensional software
CN101989079A (en) * 2010-08-24 2011-03-23 北京水晶石数字科技有限公司 System for controlling photography by three-dimensional software
CN101976330B (en) * 2010-09-26 2013-08-07 中国科学院深圳先进技术研究院 Gesture recognition method and system
CN101976451B (en) * 2010-11-03 2012-10-03 北京航空航天大学 Motion control and animation generation method based on acceleration transducer
CN102722312B (en) * 2011-12-16 2015-12-16 江南大学 A kind of action trend prediction method of interaction experience based on pressure transducer and system
CN104463090A (en) * 2013-11-25 2015-03-25 安徽寰智信息科技股份有限公司 Method for recognizing actions of human body skeleton of man-machine interactive system
WO2016123648A1 (en) * 2015-02-02 2016-08-11 Guided Knowledge Ip Pty Ltd Frameworks, devices and methodologies configured to enable delivery of interactive skills training content, including content with multiple selectable expert knowledge variations
US10918924B2 (en) 2015-02-02 2021-02-16 RLT IP Ltd. Frameworks, devices and methodologies configured to enable delivery of interactive skills training content, including content with multiple selectable expert knowledge variations
JP6836582B2 (en) 2015-05-08 2021-03-03 ジーエヌ アイピー ピーティーワイ リミテッド Frameworks, devices and methodologies configured to enable automated classification and / or retrieval of media data based on user performance attributes obtained from the performance sensor unit.
CN109074752A (en) 2015-12-10 2018-12-21 Gn股份有限公司 It is configured as realizing the frame and method of the real-time adaptive transmission of skill training data based on user's performance is monitored by performance monitoring hardware
CN108462707B (en) * 2018-03-13 2020-08-28 中山大学 Mobile application identification method based on deep learning sequence analysis
CN109269483B (en) * 2018-09-20 2020-12-15 国家体育总局体育科学研究所 Calibration method, calibration system and calibration base station for motion capture node
CN110516389B (en) * 2019-08-29 2021-04-13 腾讯科技(深圳)有限公司 Behavior control strategy learning method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN101441776A (en) 2009-05-27

Similar Documents

Publication Publication Date Title
CN101441776B (en) Three-dimensional human body motion editing method driven by demonstration performance based on acceleration sensor
CN109685848A (en) A kind of neural network coordinate transformation method of three-dimensional point cloud and three-dimension sensor
CN101751692A (en) Method for voice-driven lip animation
CN110065068A (en) A kind of robotic asssembly operation programming by demonstration method and device based on reverse-engineering
CN103473801A (en) Facial expression editing method based on single camera and motion capturing data
CN103778661B (en) A kind of method, system and computer for generating speaker's three-dimensional motion model
CN103823554A (en) Digital virtual-real interaction system and digital virtual-real interaction method
CN107024989A (en) A kind of husky method for making picture based on Leap Motion gesture identifications
Nandy et al. Recognizing & interpreting Indian sign language gesture for human robot interaction
CN105243375A (en) Motion characteristics extraction method and device
CN105500370A (en) Robot offline teaching programming system and method based on somatosensory technology
CN111079547B (en) Pedestrian moving direction identification method based on mobile phone inertial sensor
CN117032453A (en) Virtual reality interaction system for realizing mutual recognition function
CN111159872A (en) Three-dimensional assembly process teaching method and system based on human-machine engineering simulation analysis
CN109910004A (en) User interaction approach, control equipment and storage medium
CN112365580A (en) Virtual operation demonstration system for human-computer skill teaching
CN1466104A (en) Statistics and rule combination based phonetic driving human face carton method
CN104463968B (en) The matching of remote sensing image binocular stereo vision and three-dimensional rebuilding method based on power grid GIS three-dimensional platform
CN106022466A (en) Personalized robot and method for realizing the personalization of the robot
CN110000753A (en) User interaction approach, control equipment and storage medium
CN109886109B (en) Behavior identification method based on deep learning
CN109807898A (en) Motion control method, control equipment and storage medium
CN1952850A (en) Three-dimensional face cartoon method driven by voice based on dynamic elementary access
Wang et al. Robot gaining robust pouring skills through fusing vision and audio
CN107692669A (en) A kind of artificial intelligence baby accompanies and attends to robot and its method of accompanying and attending to

Legal Events

Code — Description
C06 / PB01 — Publication
C10 / SE01 — Entry into substantive examination (entry into force of the request for substantive examination)
C14 / GR01 — Grant of patent or utility model
C17 / CF01 — Cessation of patent right: termination due to non-payment of the annual fee

Granted publication date: 2010-12-29
Termination date: 2012-12-04