
CN103440277A - Action model feature library and construction method thereof - Google Patents


Info

Publication number
CN103440277A
Authority
CN
China
Prior art keywords
action
sample
feature vector
action sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2013103476424A
Other languages
Chinese (zh)
Inventor
陈拥权
张羽
李梁
胡翀豪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei Huanjing Information Technology Co Ltd
Original Assignee
Hefei Huanjing Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hefei Huanjing Information Technology Co Ltd filed Critical Hefei Huanjing Information Technology Co Ltd
Priority to CN2013103476424A priority Critical patent/CN103440277A/en
Publication of CN103440277A publication Critical patent/CN103440277A/en
Pending legal-status Critical Current

Links

Images

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses an action model feature library and a construction method thereof. The action model feature library comprises an action sample feature vector set and action pattern classifiers. The action sample feature vector set consists of action sample feature vectors, each of the form {v1, a1, ..., vj, aj, ..., vK-2, aK-2}, where Pk is the position of the object, vj the velocity of the object, aj the acceleration of the object, K is an integer greater than 2, k is a positive integer not greater than K, and j is a positive integer not greater than K-2. The construction method comprises the steps of: for every action pattern, collecting corresponding sample trajectory data and extracting action sample feature vectors from the trajectory data to obtain the feature vector set of that pattern; and using the vector sets of all the action patterns to train the classifiers, completing the mapping between the feature vectors of the action patterns and the patterns themselves. The difficulty and complexity of action recognition and simulation are reduced, as is the application cost.

Description

An action model feature library and a construction method thereof
[technical field]
The present invention relates to an action model feature library, and a construction method thereof, applied in fields such as digital virtual motion, industrial simulation, interactive science popularization, interactive teaching, and athletic rehabilitation.
[background technology]
At present, vision-based motion analysis is a very active research field: it performs motion detection, target classification, tracking, and the understanding and recognition of actions on image sequences. Understanding and recognizing actions belongs to the advanced processing stage of motion analysis and has attracted growing attention in recent years. Action recognition can simply be regarded as a classification problem over time-varying data, and comprises two parts: the representation of actions and their recognition. Vision-based action recognition research involves knowledge of image processing and computer vision as well as the theory of pattern recognition and artificial intelligence, making it a multidisciplinary research direction. However, the diversity of real environments and the complexity of actions make action recognition very difficult.
[summary of the invention]
The technical problem to be solved by the present invention is to provide an action model feature library based on computer vision which, using a computer and a visual perception processing system, realizes the extraction and classification of action model features, the rapid recognition of actions, and the instant restoration of digital virtual actions.
Another technical problem the present invention solves is to provide a construction method for the above action model feature library.
For the action model feature library, the technical solution adopted by the present invention is as follows: an action model feature library comprises an action sample feature vector set composed of action sample feature vectors, each of the form {v1, a1, ..., vj, aj, ..., vK-2, aK-2}, where Pk is the position of the target, vj the velocity of the target, aj the acceleration of the target; K is an integer greater than 2, k is a positive integer not greater than K, and j is a positive integer not greater than K-2.
Preferably, when Pk is a two-dimensional point with coordinates (xk, yk), the feature vector is set to {(vx_1, vy_1), (ax_1, ay_1), ..., (vx_j, vy_j), (ax_j, ay_j), ..., (vx_(K-2), vy_(K-2)), (ax_(K-2), ay_(K-2))}, where (vx_j, vy_j) = (x_(j+1) - x_j, y_(j+1) - y_j) and (ax_j, ay_j) = (vx_(j+1) - vx_j, vy_(j+1) - vy_j).
Preferably, when Pk is a three-dimensional point with coordinates (xk, yk, zk), the feature vector is set to {(vx_1, vy_1, vz_1), (ax_1, ay_1, az_1), ..., (vx_(K-2), vy_(K-2), vz_(K-2)), (ax_(K-2), ay_(K-2), az_(K-2))}, where (vx_j, vy_j, vz_j) = (x_(j+1) - x_j, y_(j+1) - y_j, z_(j+1) - z_j) and (ax_j, ay_j, az_j) = (vx_(j+1) - vx_j, vy_(j+1) - vy_j, vz_(j+1) - vz_j).
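As an illustration only (this code is not part of the patent text; function and variable names are ours), the first- and second-order differences above can be computed as follows, covering both the two- and three-dimensional cases:

```python
import numpy as np

def action_feature_vector(points):
    """Build an action sample feature vector from a trajectory.

    points: array of shape (K, D) holding positions P1..PK, K > 2, D = 2 or 3.
    Returns the flattened sequence {v1, a1, ..., v_(K-2), a_(K-2)}.
    """
    points = np.asarray(points, dtype=float)
    if points.shape[0] <= 2:
        raise ValueError("K must be greater than 2")
    v = np.diff(points, axis=0)   # first difference:  v_j = P_(j+1) - P_j
    a = np.diff(v, axis=0)        # second difference: a_j = v_(j+1) - v_j
    # interleave (v_j, a_j) pairs for j = 1..K-2 and flatten
    return np.concatenate([np.hstack((v[j], a[j])) for j in range(len(a))])
```

For example, action_feature_vector([(0, 0), (1, 0), (3, 1), (6, 3)]) yields the flattened vector {v1, a1, v2, a2} of a four-point planar trajectory.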
For the construction method of the action model feature library, the technical solution adopted by the present invention comprises the following steps:
(1) For the trajectory data of each collected action sample, compute first-order and second-order differences point by point to obtain velocity and acceleration data and form the action sample feature vector. These feature vectors together constitute the action sample feature vector set;
(2) Suppose the patterns are of C types. Using the action sample feature vector set, train the action pattern classifier with a machine learning method, partitioning the set into C+1 categories so that the feature vectors representing each pattern fall into distinct categories (a training sketch follows below). The category regions are divided according to the numerical values of the feature vectors, establishing a mapping from the feature vector space to the categories.
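The description later notes that any known machine learning method can be used to train the classifier. A minimal sketch under that assumption, using scikit-learn's SVC and treating class id 0 as the extra "no pattern" category that completes the C+1 partition (our reading; the patent does not say what the extra category holds):

```python
import numpy as np
from sklearn.svm import SVC

def train_action_classifier(feature_sets):
    """feature_sets: dict mapping class id -> list of equal-length feature
    vectors; ids 1..C are the action patterns, id 0 the assumed
    'no pattern' class that completes the C+1 partition."""
    X = np.vstack([np.vstack(vecs) for vecs in feature_sets.values()])
    y = np.concatenate([[cid] * len(vecs)
                        for cid, vecs in feature_sets.items()])
    clf = SVC(kernel="rbf")   # any standard learner would serve here
    return clf.fit(X, y)
```

The fitted classifier realizes the mapping from the feature vector space to the categories; clf.predict carries a new feature vector to its pattern id.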
The beneficial effects of the invention are as follows:
By analyzing and abstracting massive amounts of standard action data, the characteristics that distinguish different actions are extracted, and the data are organized and classified into a set of standard movement features. This greatly simplifies the difficulty and complexity of action recognition and simulation, so that action recognition and simulation technology can be applied at lower cost in fields such as digital virtual motion, industrial simulation, interactive science popularization, interactive teaching, and athletic rehabilitation.
[accompanying drawing explanation]
The present invention is described in further detail below in conjunction with the drawings and specific embodiments.
Fig. 1 is a schematic diagram of the angle α of the hand position in the embodiment of the present invention.
Fig. 2 is a schematic diagram of the angle β of the hand position in the embodiment of the present invention.
Fig. 3 is the data processing flow chart of the embodiment of the present invention.
[embodiment]
The embodiment operates as follows: motion information that has already been processed by multi-sensor data fusion is obtained from the action pattern recognition and fusion module, this information is compared by the motion feature comparison algorithm, and the body-sense state that the information represents is judged.
The above motion features are obtained in the following manner:
For each pattern, collect the corresponding samples and extract the feature vector of each sample to obtain the feature vector set of that pattern. Then use the vector sets of all patterns to train the classifier, realizing the mapping between pattern feature vectors and patterns.
Specifically, let {P1, ..., Pk, ..., PK} be a sample of a certain pattern. The feature vector of this pattern is {v1, a1, ..., vj, aj, ..., vK-2, aK-2}, where Pk is the position of the target, vj the velocity of the target, aj the acceleration of the target; K is an integer greater than 2, k is a positive integer not greater than K, and j is a positive integer not greater than K-2. When Pk is a two-dimensional point (an image point) with coordinates (xk, yk), the feature vector is set to {(vx_1, vy_1), (ax_1, ay_1), ..., (vx_(K-2), vy_(K-2)), (ax_(K-2), ay_(K-2))}, where (vx_j, vy_j) = (x_(j+1) - x_j, y_(j+1) - y_j) and (ax_j, ay_j) = (vx_(j+1) - vx_j, vy_(j+1) - vy_j); when Pk is a three-dimensional point (a physical point) with coordinates (xk, yk, zk), the feature vector is set to {(vx_1, vy_1, vz_1), (ax_1, ay_1, az_1), ..., (vx_(K-2), vy_(K-2), vz_(K-2)), (ax_(K-2), ay_(K-2), az_(K-2))}, where (vx_j, vy_j, vz_j) = (x_(j+1) - x_j, y_(j+1) - y_j, z_(j+1) - z_j) and (ax_j, ay_j, az_j) = (vx_(j+1) - vx_j, vy_(j+1) - vy_j, vz_(j+1) - vz_j).
Before training, first collect the motion trajectory data of each pattern (e.g., rotating the interactive handle counterclockwise, rotating the interactive handle clockwise). These trajectories are sequences of spatial points and provide the learning samples needed to train the classifier. For example, collect 500 groups of counterclockwise handle rotations and 500 groups of clockwise handle rotations, giving C = 2 action classes and N = 1000 samples in total, where C is the number of patterns.
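A sketch of how the 1000 collected samples might be assembled for training, reusing action_feature_vector from the sketch above (the storage layout and names are assumptions):

```python
def build_training_set(trajectories_ccw, trajectories_cw):
    """trajectories_ccw / trajectories_cw: lists of (K, 3) point arrays,
    500 each, recorded from the interactive handle."""
    return {
        1: [action_feature_vector(t) for t in trajectories_ccw],  # class 1: counterclockwise
        2: [action_feature_vector(t) for t in trajectories_cw],   # class 2: clockwise
    }  # C = 2 patterns, N = 1000 samples in total
```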
Action acquisition and the building of the motion feature library are carried out by training classifiers; the workflow is summarized as follows:
(1) For the trajectory data of each collected action sample, compute first-order and second-order differences point by point to obtain velocity and acceleration data and form the action sample feature vector. These feature vectors together constitute the action sample feature vector set;
(2) Using the action sample feature vector set, train the action pattern classifier with a machine learning method, partitioning the set into C+1 categories so that the feature vectors representing each pattern fall into distinct categories. The category regions are divided according to the numerical values of the feature vectors, establishing a mapping from the feature vector space to the categories. Any known method from the machine learning field can be used to train the classifier.
Taking the left-hand information in the boxing project as an example, we label the left hand as point B. The spatial position data of B is then (x_B1, y_B1, z_B1), (x_B2, y_B2, z_B2), ..., (x_Bi, y_Bi, z_Bi), ..., where (x_Bi, y_Bi, z_Bi) is the spatial position of point B at moment i.
For convenience of description, we rewrite the spatial position data of point B as (x_1, y_1, z_1), (x_2, y_2, z_2), ..., (x_i, y_i, z_i), ..., where (x_i, y_i, z_i) is the spatial position coordinate of the left hand at moment i.
From the spatial position coordinates of the left hand at each moment, its motion information, such as velocity, acceleration, and force, can be further derived. The concrete formulas are:
Velocity at moment i: v_ix = (x_i - x_(i-1))/t, v_iy = (y_i - y_(i-1))/t, v_iz = (z_i - z_(i-1))/t, where t = 1/FPS and FPS is the acquisition frequency of the data (unit: samples/second); the velocity is decomposed along the three directions x, y, z.
Acceleration at moment i: a_ix = (v_ix - v_(i-1)x)/t, a_iy = (v_iy - v_(i-1)y)/t, a_iz = (v_iz - v_(i-1)z)/t, where t = 1/FPS and FPS is the acquisition frequency of the data (unit: samples/second); the acceleration is decomposed along the three directions x, y, z.
With the acceleration in each direction at each moment, the force exerted at the corresponding moment and in the corresponding direction can be analyzed: f_ix = a_ix * k, f_iy = a_iy * k, f_iz = a_iz * k, where k is a scale factor.
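The per-moment formulas above translate directly into code; a minimal sketch (names ours), with t = 1/FPS:

```python
import numpy as np

def motion_info(positions, fps, k=1.0):
    """positions: (N, 3) array of hand coordinates, one row per moment i.
    Returns velocity, acceleration and force components per moment."""
    t = 1.0 / fps                  # FPS is the data acquisition frequency
    p = np.asarray(positions, dtype=float)
    v = np.diff(p, axis=0) / t     # v_i = (P_i - P_(i-1)) / t on x, y, z
    a = np.diff(v, axis=0) / t     # a_i = (v_i - v_(i-1)) / t on x, y, z
    f = a * k                      # f_i = a_i * k, k a scale factor
    return v, a, f
```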
The action judging device compares the above motion trajectory with the preset parameters and judges the validity and type of the trajectory.
Within the preset parameters, a punch action can be roughly defined by the following algorithm.
1. Find the starting point of a punch action; the starting point should satisfy at least two conditions:
(1) the direction of motion of the trajectory has a forward component;
(2) the velocity/acceleration at the starting point reaches a certain threshold.
2. Find the end point of the punch action. The end point must satisfy the following conditions:
(1) it has a corresponding starting point;
(2) the velocity/acceleration at the end point reaches a certain threshold.
3. Compare the trajectory between the starting point and the end point with the basic punch action features to determine whether this is a valid punch action.
Only when all three rules above are met can a punch action be confirmed (a sketch of these detection rules follows below). Once a punch action has been confirmed, the hand position of the punch is judged to determine whether it is a straight punch.
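Here is one way the start/end rules could look in code. It is a sketch only: it assumes the forward direction is the z axis and reads the end-point threshold condition as the speed falling back below the start threshold, both assumptions, since the patent leaves them as parameters:

```python
import numpy as np

def find_punch(v, speed_thresh=2.0, forward_axis=2):
    """v: (N, 3) per-moment velocities. Returns (start, end) indices of a
    candidate punch, or None if no complete punch is found."""
    speed = np.linalg.norm(v, axis=1)
    start = None
    for i in range(len(v)):
        if start is None:
            # start conditions: forward motion component and speed threshold
            if v[i, forward_axis] > 0 and speed[i] >= speed_thresh:
                start = i
        elif speed[i] < speed_thresh:
            return start, i        # end point paired with its start point
    return None
```

The trajectory between the returned indices would then be compared against the basic punch features, as in rule 3.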
The hand position can be judged according to two angles:
1. The angle α between the projection Lxz of the start-end line onto the XZ plane and the negative X axis, as shown in Fig. 1.
2. The angle β between the projection Lyz of the start-end line onto the YZ plane and the positive Y axis, as shown in Fig. 2.
3. The deviation between the trajectory from the starting point to the end point and their connecting line.
For simplicity, it suffices to consider only items 1 and 2. The angles α and β of each hand position differ, and for ease of understanding the ranges of any two hand positions can roughly be regarded as disjoint, for example:
Straight punch: α ∈ [70, 110], β ∈ (40, 180]
Hook punch: α ∈ [0, 180], β ∈ [0, 40]
After the angles α and β of each punch action are calculated, check whether they fall within the straight-punch angular ranges of the preset parameters; if so, the action is judged to be a straight punch.
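A sketch of the angle test (the axis conventions for α and β follow our reading of Figs. 1 and 2 and should be treated as assumptions):

```python
import math

STRAIGHT_PUNCH = {"alpha": (70, 110), "beta": (40, 180)}  # ranges from the text

def hand_position_angles(start, end):
    """start, end: (x, y, z) points. Returns (alpha, beta) in degrees."""
    dx, dy, dz = (e - s for s, e in zip(start, end))
    # alpha: projection Lxz of the start-end line onto the XZ plane,
    # measured against the negative X axis (Fig. 1)
    alpha = abs(math.degrees(math.atan2(dz, -dx)))
    # beta: projection Lyz onto the YZ plane, measured against the
    # positive Y axis (Fig. 2)
    beta = abs(math.degrees(math.atan2(dz, dy)))
    return alpha, beta

def is_straight_punch(start, end):
    alpha, beta = hand_position_angles(start, end)
    (a_lo, a_hi), (b_lo, b_hi) = STRAIGHT_PUNCH["alpha"], STRAIGHT_PUNCH["beta"]
    return a_lo <= alpha <= a_hi and b_lo < beta <= b_hi
```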
In practical applications, the comparison above is performed by a concrete comparison algorithm, whose data processing flow is shown in Fig. 3.
On a computer running the Windows system, such a comparison algorithm exists in the form of a dynamic library. When the user enters the boxing project, the straight-punch comparison algorithm is loaded; when concrete user motion data are received, the algorithm is invoked and judges whether a straight-punch action occurred. If the user's action matches the features of a straight punch, the algorithm notifies the upper-layer application and returns the characteristic parameters of the punch, such as its speed and direction. The upper-layer application then informs the user of the result in a more intuitive way, for example through a visual interface or a 3D character's action.

Claims (5)

1. An action model feature library, comprising an action sample feature vector set, the action sample feature vector set being composed of action sample feature vectors, characterized in that each action sample feature vector is {v1, a1, ..., vj, aj, ..., vK-2, aK-2}, where Pk is the position of the target, vj the velocity of the target, aj the acceleration of the target; K is an integer greater than 2, k is a positive integer not greater than K, and j is a positive integer not greater than K-2.
2. The action model feature library according to claim 1, further comprising action pattern classifiers, the classifiers being trained on the action sample feature vector set and serving to distinguish the action sample feature vectors of different patterns.
3. The action model feature library according to claim 1, characterized in that Pk is a two-dimensional point and, when its coordinates are (xk, yk), the feature vector is set to {(vx_1, vy_1), (ax_1, ay_1), ..., (vx_(K-2), vy_(K-2)), (ax_(K-2), ay_(K-2))}, where (vx_j, vy_j) = (x_(j+1) - x_j, y_(j+1) - y_j) and (ax_j, ay_j) = (vx_(j+1) - vx_j, vy_(j+1) - vy_j).
4. The action model feature library according to claim 1, characterized in that Pk is a three-dimensional point and, when its coordinates are (xk, yk, zk), the feature vector is set to {(vx_1, vy_1, vz_1), (ax_1, ay_1, az_1), ..., (vx_(K-2), vy_(K-2), vz_(K-2)), (ax_(K-2), ay_(K-2), az_(K-2))}, where (vx_j, vy_j, vz_j) = (x_(j+1) - x_j, y_(j+1) - y_j, z_(j+1) - z_j) and (ax_j, ay_j, az_j) = (vx_(j+1) - vx_j, vy_(j+1) - vy_j, vz_(j+1) - vz_j).
5. The construction method of the action model feature library of claim 1, characterized in that it comprises the following steps:
(1) For the trajectory data of each collected action sample, compute first-order and second-order differences point by point to obtain velocity and acceleration data and form the action sample feature vector. These feature vectors together constitute the action sample feature vector set;
(2) Suppose the patterns are of C types. Using the action sample feature vector set, train the action pattern classifier with a machine learning method, partitioning the set into C+1 categories so that the feature vectors representing each pattern fall into distinct categories. The category regions are divided according to the numerical values of the feature vectors, establishing a mapping from the feature vector space to the categories.
CN2013103476424A 2013-08-12 2013-08-12 Action model feature library and construction method thereof Pending CN103440277A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2013103476424A CN103440277A (en) 2013-08-12 2013-08-12 Action model feature library and construction method thereof


Publications (1)

Publication Number Publication Date
CN103440277A true CN103440277A (en) 2013-12-11

Family

ID=49693969

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2013103476424A Pending CN103440277A (en) 2013-08-12 2013-08-12 Action model feature library and construction method thereof

Country Status (1)

Country Link
CN (1) CN103440277A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0962847A (en) * 1995-08-28 1997-03-07 Sanyo Electric Co Ltd Motion vector detection circuit and object tracking camera device using the same
CN101389004A (en) * 2007-09-13 2009-03-18 中国科学院自动化研究所 Moving target classification method based on on-line study
CN101158883A (en) * 2007-10-09 2008-04-09 深圳先进技术研究院 Virtual gym system based on computer visual sense and realize method thereof
JP2009163639A (en) * 2008-01-09 2009-07-23 Nippon Hoso Kyokai <Nhk> Object trajectory identification device, object trajectory identification method, and object trajectory identification program
CN101504728A (en) * 2008-10-10 2009-08-12 深圳先进技术研究院 Remote control system and method of electronic equipment

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105617638A (en) * 2015-12-25 2016-06-01 深圳市酷浪云计算有限公司 Badminton racket swinging movement recognizing method and device
CN105617638B (en) * 2015-12-25 2019-04-05 深圳市酷浪云计算有限公司 Badminton racket swing action identification method and device
CN108021883A (en) * 2017-12-04 2018-05-11 深圳市赢世体育科技有限公司 The method, apparatus and storage medium of sphere recognizing model of movement
WO2020164400A1 (en) * 2019-02-12 2020-08-20 阿里巴巴集团控股有限公司 Method for determining clothing quality inspection status, method and apparatus for determining action status, and electronic device
CN111028339A (en) * 2019-12-06 2020-04-17 国网浙江省电力有限公司培训中心 Behavior action modeling method and device, electronic equipment and storage medium
CN111028339B (en) * 2019-12-06 2024-03-29 国网浙江省电力有限公司培训中心 Behavior modeling method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20131211