CN109344706A - A single-operator method for acquiring photographs of specific human body parts - Google Patents
A single-operator method for acquiring photographs of specific human body parts
- Publication number
- Publication number: CN109344706A (application CN201810988161.4A)
- Authority
- CN
- China
- Prior art keywords
- posture
- data
- human body
- image
- rgb image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/103—Static body considered as a whole, e.g. static pedestrian or occupant recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/751—Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Artificial Intelligence (AREA)
- Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Computing Systems (AREA)
- Databases & Information Systems (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a method for acquiring photographs of specific human body parts that can be operated by a single person. From a standard RGB image taken against an arbitrary background, the method extracts the outer contour of the human body; the resulting standard-posture contour image is analyzed to obtain human skeleton feature points, and the distances and angles between corresponding skeleton points yield the standard-posture data. The standard-posture data is then analyzed and used for template matching. RGB images of the user's posture are acquired in real time, and the posture contour is extracted from each captured image with the posture-detection algorithm of steps 1 and 2. The acquired front and side posture contour images are converted to data and imported into a database, which then returns the subject's three-dimensional body-shape data. With deep learning, the trained model can fine-tune the current posture contour image, reducing repeated detection and ensuring that accuracy keeps improving.
Description
Technical field
The invention belongs to the field of data acquisition for three-dimensional human body models, and in particular relates to a 3D manikin data-acquisition technique based on intelligent image processing and deep-learning algorithms.
Background technique
With the rapid development of image processing and deep learning, human 3D modeling technology is being applied in industries such as clothing, gaming, human-computer interaction, safety engineering, telepresence, and health care. However, techniques that can quickly acquire 3D human body-shape data still fall far short of demand. The schemes currently on the market for obtaining body-shape data include:
1. Direct acquisition with a 3D body scanner. This approach is relatively direct, but the scanner is expensive and 3D modeling is very slow, so the user experience is poor.
2. Capturing contour images of the body with a camera and then reconstructing the 3D body-shape data with related algorithms. This approach acquires image data quickly, but the user must enter height and weight parameters and must provide photographs taken against a solid-color background, and the resulting images have large errors and are incomplete.
Summary of the invention
In view of the deficiencies of the prior art, the present invention proposes a method for acquiring photographs of specific human body parts that can be operated by a single person.
The method of the present invention specifically comprises the following steps:
Step 1: from a standard RGB image taken against an arbitrary background, extract the outer contour of the human body;
Step 1.1: convert the acquired RGB image to a grayscale image;
The grayscale conversion is:
x(i, j) = A·x_R(i, j) + B·x_G(i, j) + C·x_B(i, j)
where x(i, j) is the gray value of the pixel in column i, row j of the converted image; x_R(i, j), x_G(i, j), and x_B(i, j) are the red, green, and blue component values of the pixel in column i, row j of the original RGB image; and A, B, C are the weights of the red, green, and blue channels;
Step 1.2: apply thresholding to the grayscale image;
The thresholding rule is:
dst(x, y) = 255 if src(x, y) > thresh, otherwise 0
where src(x, y) is the gray value of the grayscale image at coordinate (x, y) and thresh is the chosen threshold, with value range [0, 255]; this yields the outer contour of the human body;
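Steps 1.1 and 1.2 can be sketched in plain Python; the channel weights follow the formula above (with the A, B, C values 0.30, 0.59, 0.11 given later in the embodiment), while the binary-threshold rule and the default threshold of 128 are illustrative assumptions, since the patent leaves thresh unspecified:

```python
def extract_silhouette(rgb, thresh=128, weights=(0.30, 0.59, 0.11)):
    """rgb: rows of (r, g, b) pixel tuples. Returns rows of 0/255 values.

    weights are the grayscale coefficients (A, B, C) of step 1.1;
    thresh (in [0, 255]) is an illustrative default, not a patent value.
    """
    a, b, c = weights
    out = []
    for row in rgb:
        # Step 1.1: weighted grayscale conversion per pixel.
        # Step 1.2: binary thresholding to 255 (foreground) or 0.
        out.append([255 if a * r + b * g + c * bl > thresh else 0
                    for (r, g, bl) in row])
    return out

image = [[(10, 10, 10), (200, 200, 200)]]   # one dark, one bright pixel
print(extract_silhouette(image))            # -> [[0, 255]]
```

In a real pipeline the same two steps would typically be done with a library call (e.g. OpenCV's grayscale conversion and thresholding), but the arithmetic is exactly this.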
Step 2: analyze the obtained standard-posture contour image to obtain the human skeleton feature points, and obtain the standard-posture data by computing the distances and angles between corresponding skeleton points;
The skeleton feature-point extraction works as follows:
(1) 25 human skeleton feature points are generated automatically from the standard posture contour image and human morphological features;
(2) the distances and angles between key skeleton points are computed to obtain the standard-posture data;
To guarantee the accuracy of posture recognition, specific postures are identified by computing the angles between the lines connecting joint points. The angle between two joint points is obtained from the Euclidean distances involved and the law of cosines. Given two points x and y, the Euclidean distance between them is:
d(x, y) = sqrt((x1 - y1)^2 + (x2 - y2)^2)
The angle is then obtained from the law of cosines:
∠ = cos^-1((a^2 + b^2 - c^2) / (2ab))
where a, b, c are the straight-line distances between the three points;
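The distance and angle computation of step 2 follows directly from the two formulas; the layout below (angle measured at joint q, between the segments to joints p and r) is an assumption about how the three distances a, b, c are assigned:

```python
import math

def euclidean(p, q):
    """Straight-line distance between two 2-D joint points."""
    return math.sqrt((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)

def joint_angle(p, q, r):
    """Angle at joint q (in degrees) formed by segments q-p and q-r.

    Law of cosines: angle = acos((a^2 + b^2 - c^2) / (2ab)),
    with a = |q-p|, b = |q-r|, c = |p-r|.
    """
    a, b, c = euclidean(q, p), euclidean(q, r), euclidean(p, r)
    return math.degrees(math.acos((a * a + b * b - c * c) / (2 * a * b)))

# Right-angle example: joint at the origin, neighbors on the two axes.
print(round(joint_angle((1, 0), (0, 0), (0, 1)), 6))  # -> 90.0
```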
Step 3: analyze the obtained standard-posture data and perform template matching;
Following the deep-learning principle, training is carried out on standard-posture images imported under different backgrounds, and within the allowed relative-error range the result is automatically adjusted to approximate the standard-posture image contour;
The deep-learning procedure is as follows:
(1) build a convolutional neural network model;
(2) define a cost function from the training data;
(3) find the optimal function from the results of the previous two steps;
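The patent names template matching in step 3 but does not specify a matching measure; the sketch below scores a binary silhouette against a stored standard-posture template by the fraction of agreeing pixels. This measure is an assumption for illustration (real systems more often use normalized cross-correlation or shape descriptors):

```python
def template_match_score(image, template):
    """Fraction of pixels where a binary image agrees with a binary template.

    Both arguments are equally sized grids of 0/255 values, as produced by
    the thresholding of step 1.2. Returns a score in [0, 1]; 1.0 means the
    captured silhouette exactly matches the stored standard posture.
    """
    total = matches = 0
    for img_row, tpl_row in zip(image, template):
        for img_px, tpl_px in zip(img_row, tpl_row):
            total += 1
            matches += (img_px == tpl_px)
    return matches / total

standard = [[0, 255], [255, 255]]   # stored standard-posture template
current  = [[0, 255], [0,   255]]   # freshly captured silhouette
print(template_match_score(current, standard))  # -> 0.75
```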
Step 4: place the image-acquisition terminal on a stand and adjust its horizontal and vertical distances;
Step 5: the user stands against an arbitrary background at the specified distance from the terminal, assumes the specified posture, and then starts RGB image acquisition by voice control of the terminal;
Step 6: acquire RGB images of the user's posture in real time, and extract the posture contour from each captured image with the posture-detection algorithm described in steps 1 and 2;
Step 7: fine-tune the obtained posture contour image with deep learning;
Step 8: analyze the fine-tuned posture contour image to obtain the human skeleton feature points, and obtain the current-posture data by computing the distances and angles between corresponding skeleton points;
Step 9: compare and analyze the current-posture data against the standard-posture data;
Step 10: prompt the user according to the differences found, so that the user's posture comes to match the standard;
Step 11: once an image matching the standard posture contour has been captured, prompt the user that image acquisition has finished;
Step 12: acquire one posture contour image each of the user's front and side;
Step 13: convert the acquired front and side posture contour images to data, import them into the database, and return the user's three-dimensional body-shape data.
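The compare-and-prompt loop of steps 9-11 can be sketched as a comparison of joint-angle data against the standard; the joint names, the 5-degree tolerance, and the prompt wording are all illustrative assumptions, not values from the patent:

```python
def posture_prompts(current, standard, tolerance=5.0):
    """Compare joint-angle dictionaries (degrees) and return correction prompts.

    Returns an empty list when every angle is within `tolerance` of the
    standard, i.e. when acquisition may finish (step 11).
    """
    prompts = []
    for joint, target in standard.items():
        diff = current.get(joint, 0.0) - target
        if abs(diff) > tolerance:
            # Direction of correction depends on the sign of the deviation.
            direction = "lower" if diff > 0 else "raise"
            prompts.append(
                f"Please {direction} your {joint} by about {abs(diff):.0f} degrees")
    return prompts

standard = {"left elbow": 180.0, "right elbow": 180.0}
current = {"left elbow": 160.0, "right elbow": 182.0}
print(posture_prompts(current, standard))
# -> ['Please raise your left elbow by about 20 degrees']
```

In the patent's workflow these prompts would be spoken back to the user through the terminal's voice interface, and the capture loop repeats until the list is empty.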
Compared with the prior art, the present invention has the following advantages:
1. The equipment and background requirements for image acquisition are very low: an ordinary mobile phone or camera suffices, operation is simple, and voice prompts give the user a good experience.
2. The algorithms used are advanced, with strong anti-interference capability and a high human-recognition rate; the human body can be identified accurately and the current posture contour obtained even against a non-fixed background.
3. With deep learning, the trained model can fine-tune the current posture contour image, reducing repeated detection and ensuring that accuracy keeps improving.
4. Images are acquired in real time and the user is notified promptly to adjust.
5. Strong database support: the database is combined with body-shape modeling and has strong data-conversion capability.
Brief description of the drawings
Fig. 1 is a flowchart of the invention.
Specific embodiment
As shown in Fig. 1, the method of the present invention specifically comprises the following steps:
Step 1: from a standard RGB image taken against an arbitrary background, extract the outer contour of the human body;
Step 1.1: convert the acquired RGB image to a grayscale image;
The grayscale conversion is:
x(i, j) = 0.30·x_R(i, j) + 0.59·x_G(i, j) + 0.11·x_B(i, j)
where x(i, j) is the gray value of the pixel in column i, row j of the converted image, and x_R(i, j), x_G(i, j), x_B(i, j) are the red, green, and blue component values of the pixel in column i, row j of the original RGB image;
Step 1.2: apply thresholding to the grayscale image;
The thresholding rule is:
dst(x, y) = 255 if src(x, y) > thresh, otherwise 0
where src(x, y) is the gray value of the grayscale image at coordinate (x, y) and thresh is the chosen threshold, with value range [0, 255]; this yields the outer contour of the human body;
Step 2: analyze the obtained standard-posture contour image to obtain the human skeleton feature points, and obtain the standard-posture data by computing the distances and angles between corresponding skeleton points;
The skeleton feature-point extraction works as follows:
(1) 25 human skeleton feature points are generated automatically from the standard posture contour image and human morphological features;
(2) the distances and angles between key skeleton points are computed to obtain the standard-posture data;
To guarantee the accuracy of posture recognition, specific postures are identified by computing the angles between the lines connecting joint points. The angle between two joint points is obtained from the Euclidean distances involved and the law of cosines. Given two points x and y, the Euclidean distance between them is:
d(x, y) = sqrt((x1 - y1)^2 + (x2 - y2)^2)
The angle is then obtained from the law of cosines:
∠ = cos^-1((a^2 + b^2 - c^2) / (2ab))
where a, b, c are the straight-line distances between the three points;
Step 3: analyze the obtained standard-posture data and perform template matching;
Following the deep-learning principle, training is carried out on standard-posture images imported under different backgrounds, and within the allowed relative-error range the result is automatically adjusted to approximate the standard-posture image contour;
The deep-learning procedure is as follows:
(1) build a convolutional neural network model;
(2) define a cost function from the training data;
(3) find the optimal function from the results of the previous two steps;
Step 4: place the image-acquisition terminal on a stand and adjust its horizontal and vertical distances;
Step 5: the user stands against an arbitrary background at the specified distance from the terminal, assumes the specified posture, and then starts RGB image acquisition by voice control of the terminal;
Step 6: acquire RGB images of the user's posture in real time, and extract the posture contour from each captured image with the posture-detection algorithm described in steps 1 and 2;
Step 7: fine-tune the obtained posture contour image with deep learning;
Step 8: analyze the fine-tuned posture contour image to obtain the human skeleton feature points, and obtain the current-posture data by computing the distances and angles between corresponding skeleton points;
Step 9: compare and analyze the current-posture data against the standard-posture data;
Step 10: prompt the user according to the differences found, so that the user's posture comes to match the standard;
Step 11: once an image matching the standard posture contour has been captured, prompt the user that image acquisition has finished;
Step 12: acquire one posture contour image each of the user's front and side;
Step 13: convert the acquired front and side posture contour images to data, import them into the database, and return the user's three-dimensional body-shape data.
Claims (1)
1. A single-operator method for acquiring photographs of specific human body parts, characterized in that the method specifically comprises the following steps:
Step 1: from a standard RGB image taken against an arbitrary background, extract the outer contour of the human body;
Step 1.1: convert the acquired RGB image to a grayscale image;
The grayscale conversion is:
x(i, j) = A·x_R(i, j) + B·x_G(i, j) + C·x_B(i, j)
where x(i, j) is the gray value of the pixel in column i, row j of the converted image; x_R(i, j), x_G(i, j), and x_B(i, j) are the red, green, and blue component values of the pixel in column i, row j of the original RGB image; and A, B, C are the weights of the red, green, and blue channels;
Step 1.2: apply thresholding to the grayscale image;
The thresholding rule is:
dst(x, y) = 255 if src(x, y) > thresh, otherwise 0
where src(x, y) is the gray value of the grayscale image at coordinate (x, y) and thresh is the chosen threshold, with value range [0, 255]; this yields the outer contour of the human body;
Step 2: analyze the obtained standard-posture contour image to obtain the human skeleton feature points, and obtain the standard-posture data by computing the distances and angles between corresponding skeleton points;
The skeleton feature-point extraction works as follows:
(1) 25 human skeleton feature points are generated automatically from the standard posture contour image and human morphological features;
(2) the distances and angles between key skeleton points are computed to obtain the standard-posture data;
To guarantee the accuracy of posture recognition, specific postures are identified by computing the angles between the lines connecting joint points. The angle between two joint points is obtained from the Euclidean distances involved and the law of cosines. Given two points x and y, the Euclidean distance between them is:
d(x, y) = sqrt((x1 - y1)^2 + (x2 - y2)^2)
The angle is then obtained from the law of cosines:
∠ = cos^-1((a^2 + b^2 - c^2) / (2ab))
where a, b, c are the straight-line distances between the three points;
Step 3: analyze the obtained standard-posture data and perform template matching;
Following the deep-learning principle, training is carried out on standard-posture images imported under different backgrounds, and within the allowed relative-error range the result is automatically adjusted to approximate the standard-posture image contour;
The deep-learning procedure is as follows:
(1) build a convolutional neural network model;
(2) define a cost function from the training data;
(3) find the optimal function from the results of the previous two steps;
Step 4: place the image-acquisition terminal on a stand and adjust its horizontal and vertical distances;
Step 5: the user stands against an arbitrary background at the specified distance from the terminal, assumes the specified posture, and then starts RGB image acquisition by voice control of the terminal;
Step 6: acquire RGB images of the user's posture in real time, and extract the posture contour from each captured image with the posture-detection algorithm described in steps 1 and 2;
Step 7: fine-tune the obtained posture contour image with deep learning;
Step 8: analyze the fine-tuned posture contour image to obtain the human skeleton feature points, and obtain the current-posture data by computing the distances and angles between corresponding skeleton points;
Step 9: compare and analyze the current-posture data against the standard-posture data;
Step 10: prompt the user according to the differences found, so that the user's posture comes to match the standard;
Step 11: once an image matching the standard posture contour has been captured, prompt the user that image acquisition has finished;
Step 12: acquire one posture contour image each of the user's front and side;
Step 13: convert the acquired front and side posture contour images to data, import them into the database, and return the user's three-dimensional body-shape data.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810988161.4A CN109344706A (en) | 2018-08-28 | 2018-08-28 | A single-operator method for acquiring photographs of specific human body parts |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109344706A true CN109344706A (en) | 2019-02-15 |
Family
ID=65296785
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810988161.4A Pending CN109344706A (en) | 2018-08-28 | 2018-08-28 | A single-operator method for acquiring photographs of specific human body parts |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109344706A (en) |
2018
- 2018-08-28: CN application CN201810988161.4A filed; publication CN109344706A (en); status: Pending
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101996311A (en) * | 2009-08-10 | 2011-03-30 | 深圳泰山在线科技有限公司 | Yoga stance recognition method and system |
US20150051512A1 (en) * | 2013-08-16 | 2015-02-19 | Electronics And Telecommunications Research Institute | Apparatus and method for recognizing user's posture in horse-riding simulator |
CN105448053A (en) * | 2015-12-02 | 2016-03-30 | 广东小天才科技有限公司 | Posture reminding method and system |
CN106056053A (en) * | 2016-05-23 | 2016-10-26 | 西安电子科技大学 | Human posture recognition method based on skeleton feature point extraction |
CN107016721A (en) * | 2017-03-07 | 2017-08-04 | 上海优裁信息技术有限公司 | The modeling method of human 3d model |
CN107050774A (en) * | 2017-05-17 | 2017-08-18 | 上海电机学院 | A kind of body-building action error correction system and method based on action collection |
CN107886069A (en) * | 2017-11-10 | 2018-04-06 | 东北大学 | A kind of multiple target human body 2D gesture real-time detection systems and detection method |
CN108334816A (en) * | 2018-01-15 | 2018-07-27 | 桂林电子科技大学 | The Pose-varied face recognition method of network is fought based on profile symmetry constraint production |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112149455A (en) * | 2019-06-26 | 2020-12-29 | 北京京东尚科信息技术有限公司 | Method and device for detecting human body posture |
CN110569775A (en) * | 2019-08-30 | 2019-12-13 | 武汉纺织大学 | Method, system, storage medium and electronic device for recognizing human body posture |
CN110874851A (en) * | 2019-10-25 | 2020-03-10 | 深圳奥比中光科技有限公司 | Method, device, system and readable storage medium for reconstructing three-dimensional model of human body |
CN110991292A (en) * | 2019-11-26 | 2020-04-10 | 爱菲力斯(深圳)科技有限公司 | Action identification comparison method and system, computer storage medium and electronic device |
CN112052786A (en) * | 2020-09-03 | 2020-12-08 | 上海工程技术大学 | Behavior prediction method based on grid division skeleton |
CN112052786B (en) * | 2020-09-03 | 2023-08-22 | 上海工程技术大学 | Behavior prediction method based on grid division skeleton |
CN112446433A (en) * | 2020-11-30 | 2021-03-05 | 北京数码视讯技术有限公司 | Method and device for determining accuracy of training posture and electronic equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109344706A (en) | A single-operator method for acquiring photographs of specific human body parts | |
CN108921100B (en) | Face recognition method and system based on visible light image and infrared image fusion | |
CN103218605B (en) | A kind of fast human-eye positioning method based on integral projection and rim detection | |
CN111414780B (en) | Real-time intelligent sitting posture distinguishing method, system, equipment and storage medium | |
CN110991266B (en) | Binocular face living body detection method and device | |
CN101561710B (en) | Man-machine interaction method based on estimation of human face posture | |
CN103810478B (en) | Sitting posture detection method and device | |
CN106446779B (en) | Personal identification method and device | |
CN103325089B (en) | Colour of skin processing method and processing device in image | |
CN107180234A (en) | The credit risk forecast method extracted based on expression recognition and face characteristic | |
CN104200480A (en) | Image fuzzy degree evaluation method and system applied to intelligent terminal | |
CN103440633B (en) | A kind of digital picture dispels the method for spot automatically | |
CN105930795A (en) | Walking state identification method based on space vector between human body skeleton joints | |
CN104599297B (en) | A kind of image processing method for going up blush automatically to face | |
CN110263768A (en) | A kind of face identification method based on depth residual error network | |
CN105740779A (en) | Method and device for human face in-vivo detection | |
TWI625678B (en) | Electronic device and gesture recognition method applied therein | |
CN106572304A (en) | Blink detection-based smart handset photographing system and method | |
CN103279188A (en) | Method for operating and controlling PPT in non-contact mode based on Kinect | |
CN109274883A (en) | Posture antidote, device, terminal and storage medium | |
CN110796101A (en) | Face recognition method and system of embedded platform | |
CN105138990A (en) | Single-camera-based gesture convex hull detection and palm positioning method | |
CN102831408A (en) | Human face recognition method | |
CN103903256B (en) | Depth estimation method based on relative height-depth clue | |
CN110456904B (en) | Augmented reality glasses eye movement interaction method and system without calibration |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20190215 |