
CN108846359A - Gesture recognition method based on the fusion of skin-color region segmentation and a machine-learning algorithm, and application thereof - Google Patents


Info

Publication number
CN108846359A
CN108846359A CN201810608459.8A
Authority
CN
China
Prior art keywords
gesture
skin
area
image
machine learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810608459.8A
Other languages
Chinese (zh)
Inventor
周凯
万毅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
College Of Science And Technology Xinjiang University
Original Assignee
College Of Science And Technology Xinjiang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by College Of Science And Technology Xinjiang University filed Critical College Of Science And Technology Xinjiang University
Priority to CN201810608459.8A priority Critical patent/CN108846359A/en
Publication of CN108846359A publication Critical patent/CN108846359A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/28Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a gesture recognition method based on the fusion of skin-color region segmentation and a machine-learning algorithm, and an application thereof, comprising the following steps: after acquiring and preprocessing the gesture image, the skin-color region is segmented in the YCbCr color space using the Otsu adaptive-threshold algorithm; the gesture is then extracted by applying gesture-region decision conditions, and Hu moment features and the fingertip count are extracted from the gesture contour as the feature vector; finally, six common static gestures are classified and recognized with an SVM classifier. By setting skin-color gesture decision conditions, the invention can accurately locate and segment the gesture; the extracted Hu moment features and fingertip count of the gesture contour provide a more accurate feature vector for gesture classification, and using the mature SVM classifier for gesture classification and recognition guarantees the recognition rate.

Description

Gesture recognition method based on the fusion of skin-color region segmentation and a machine-learning algorithm, and application thereof
Technical field
The present invention relates to the technical field of gesture recognition, and in particular to a gesture recognition method based on the fusion of skin-color region segmentation and a machine-learning algorithm, and an application thereof.
Background art
With the rapid development of information technology, human-computer interaction occupies an increasingly important place in daily life. Gesture recognition, as a natural and human-centered mode of interaction, is being adopted more and more widely. Many scholars have worked on gesture recognition technology. For example, Yang Xuewen et al. combined the gesture principal direction with a class-Hausdorff distance for gesture recognition, solving the problem that recognition is affected by gesture rotation, translation, and scaling; however, their experiments could only be carried out under stable illumination, with few noise points and no facial interference. Dardas et al. extracted scale-invariant image features and vector-quantized features, then recognized gestures with a bag-of-features model and a multi-class support vector machine, with good recognition results; but the high computational complexity of the SIFT algorithm makes recognition slow and real-time performance poor. Tao Meiping et al. trained image patches with an unsupervised sparse autoencoder network and extracted edge features of the gesture images as classifier input, finally tuning the parameters of the trained classifier to improve accuracy; however, this works only with a restricted background and cannot recognize gestures against realistic backgrounds.
In practice, to improve the recognition rate, gestures are usually recognized with a restricted background or known background information, which cannot exclude interference from faces, illumination, and skin-like colors; this hinders natural human-computer interaction, and at present no mature gesture recognition system can be widely applied in real environments. Therefore, improving the interaction mode between human and machine has important theoretical significance and practical value.
Summary of the invention
In view of the deficiencies of the prior art, the present invention aims to provide a gesture recognition method, based on the fusion of skin-color region segmentation and a machine-learning algorithm, that has good stability, real-time performance, and a high average recognition rate, together with an application thereof.
To achieve the above object, the present invention provides the following technical solution: a gesture recognition method based on the fusion of skin-color region segmentation and a machine-learning algorithm, and its application, comprising the following steps:
(1) after acquiring and preprocessing the gesture image, segment the skin-color region in the YCbCr color space using the Otsu adaptive-threshold algorithm;
(2) after segmentation, extract the gesture by applying gesture-region decision conditions, and extract Hu moment features and the fingertip count from the gesture contour as the feature vector;
(3) then classify and recognize six common static gestures using an SVM classifier;
(4) apply the above human-computer interaction method in popular software: under the Webots simulation environment, convert the gesture recognition result into instructions to realize real-time gesture control of the simulated robot NAO.
Preferably, the algorithm is developed under the Webots environment on the basis of the third-party computer vision library OpenCV, which is ported into the Webots simulation environment, using the C language.
Step (1) comprises the following sub-steps:
(1.1) image preprocessing: smooth and sharpen the image using median filtering and the Laplacian algorithm;
(1.2) for skin-color-based gesture detection, transform the RGB color space of the image into the YCbCr color space;
(1.3) convert the image into a normalized gray-scale map of skin-color similarity, then segment the skin-color region with the Otsu dynamic adaptive threshold method. The Otsu method uses the criterion of maximum separation between the target and background classes: it computes the between-class variance, and the threshold at which this variance is maximal is taken as the image segmentation threshold;
(1.4) for the binarized skin-color region, set gesture decision conditions.
Step (1.4) comprises the following sub-steps:
(1.4.1) analyze and recognize closed hand contours, considering only the case of wearing long sleeves; the only skin-color regions of the body in the image are then the face and the hand. After binarization, any small skin-color region, possibly a skin-like region, whose area is less than 0.02 of the whole image area is rejected;
(1.4.2) the remaining skin-color regions are only the face and the hand; compute the height-to-width ratio of each region, and a region is the gesture skin-color region if the ratio lies in the range [0.7, 3.0];
(1.4.3) to recognize a gesture, a complete gesture shape must appear in the window; if a skin-color region touches the acquisition window border, it is not processed and no gesture is considered, because an incomplete gesture would cause misjudgment of the recognition result.
Image processing uses the OpenCV function library: the library function cvFindContours retrieves image contours from the binary image; with mode=CV_RETR_EXTERNAL, only the outermost contours are retrieved, and the gesture contour is then drawn with the function cvDrawContours.
In the feature extraction step, seven Hu contour moments and the fingertip count are extracted from the gesture contour.
The support vector machine proposed by Vapnik is used as the classifier for training and recognition. The database was built in three environments: five experimenters performed six common gestures, each gesture ten times, giving 900 samples in total, half used as the training set and half as the test set. Each gesture is represented by 150 gesture images taken under a variety of conditions such as motion blur, different backgrounds, angular rotation, and scaling.
After the six gesture databases are established, the gesture samples are trained and classified with the SVM classifier; the classification process comprises the following steps:
(a) first convert the sample-set data format;
(b) scale the sample data sets;
(c) train on the training data set;
(d) classify the test samples.
Gesture recognition is carried out with both the support vector machine and the template matching method; the 900 images in the experimental database are tested, the experimental results of the two methods are compared, and the average recognition rate and average recognition time for the six common static gestures are obtained.
Under the Webots robot simulation environment, gestures are acquired and recognized through a camera; the recognition result is then converted into an instruction and sent to the robot NAO, whose API functions are called to perform the corresponding action, realizing real-time simulated control of the robot by gestures: move forward, turn around, turn left, turn right, sit down, and stand up.
The advantage of the invention is that, compared with the prior art, by setting skin-color gesture decision conditions the invention can accurately locate and segment the gesture; the extracted Hu moment features and fingertip count of the gesture contour provide a more accurate feature vector for gesture classification, and using the mature SVM classifier for gesture classification and recognition guarantees the recognition rate. Experiments show that the method has good stability and real-time performance: the average gesture recognition rate reaches 94%, and real-time requirements are met when the method is applied to robot control, making human-computer interaction more natural and realistic and verifying the feasibility of the gesture recognition algorithm. The method of the present invention has important theoretical significance and practical value for improving the interaction mode between human and machine.
The invention will be further described with specific embodiments with reference to the accompanying drawings.
Detailed description of the invention
Fig. 1 is the overall design flowchart of the gesture recognition method and its application according to the embodiment of the present invention;
Fig. 2 is the gesture-region decision flowchart of the embodiment of the present invention.
Specific embodiment
Referring to Fig. 1 and Fig. 2, the gesture recognition method disclosed by the invention, based on the fusion of skin-color region segmentation and a machine-learning algorithm, and its application comprise the following steps:
(1) after acquiring and preprocessing the gesture image, segment the skin-color region in the YCbCr color space using the Otsu adaptive-threshold algorithm;
(2) after segmentation, extract the gesture by applying gesture-region decision conditions, and extract Hu moment features and the fingertip count from the gesture contour as the feature vector;
(3) then classify and recognize six common static gestures using an SVM classifier;
(4) under the Webots simulation environment, convert the gesture recognition result into instructions to realize real-time simulated gesture control of the robot NAO.
The algorithm of the present invention is developed under the Webots environment. To give the developed algorithm strong portability, a short development cycle, and high efficiency, it is built on the third-party computer vision library OpenCV, which is ported into the Webots simulation environment, using the C language.
Gesture image acquisition: the six gesture images of the experimenter are captured by an external camera.
Image preprocessing: the image is smoothed and sharpened with median filtering and the Laplacian algorithm, removing noise while preserving the detail information of the image.
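The smoothing step can be sketched in plain C; this is a minimal 3x3 median filter without OpenCV, illustrative only — the patent itself uses OpenCV's median filtering plus a Laplacian sharpening pass, which are not reproduced here.

```c
/* Minimal 3x3 median filter sketch (plain C, no OpenCV); borders are
 * left unchanged for brevity. */
#include <stdlib.h>
#include <string.h>

static int cmp_u8(const void *a, const void *b) {
    return (int)*(const unsigned char *)a - (int)*(const unsigned char *)b;
}

/* Writes the filtered result into dst (same w*h size as src). */
void median3x3(const unsigned char *src, unsigned char *dst, int w, int h) {
    memcpy(dst, src, (size_t)w * h);          /* keep the 1-pixel border */
    for (int y = 1; y < h - 1; ++y) {
        for (int x = 1; x < w - 1; ++x) {
            unsigned char win[9];
            int k = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    win[k++] = src[(y + dy) * w + (x + dx)];
            qsort(win, 9, 1, cmp_u8);
            dst[y * w + x] = win[4];          /* median of the 9 samples */
        }
    }
}
```

A single salt-noise pixel surrounded by uniform neighbors is replaced by the neighborhood median, which is the noise-removal behavior the preprocessing relies on.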
In skin-color-based gesture detection, the commonly used color spaces are RGB, HSV, and YCbCr. Experiments show that chrominance and luminance are separated from each other in the YCbCr color space, and the Cb and Cr components of skin-color regions cluster well under different brightness levels; the Cr and Cb components can therefore be detected and thresholded. The transformation between the YCbCr and RGB color spaces is linear, with a small computational cost and good effect, as shown in formula (1):
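The matrix of formula (1) is not reproduced in this text; the sketch below assumes the standard JPEG/BT.601 coefficients, the form commonly used with OpenCV — an assumption, not necessarily the exact matrix of the patent.

```c
/* RGB -> YCbCr conversion sketch using the JPEG/BT.601 coefficients.
 * This is an assumed form of formula (1), which is not shown in the text. */
typedef struct { double y, cb, cr; } YCbCr;

YCbCr rgb_to_ycbcr(double r, double g, double b) {
    YCbCr c;
    c.y  =          0.299    * r + 0.587    * g + 0.114    * b;
    c.cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5      * b;
    c.cr = 128.0 + 0.5      * r - 0.418688 * g - 0.081312 * b;
    return c;
}
```

For a neutral gray input the chrominance components sit at the midpoint 128, which is why skin detection can threshold Cb and Cr independently of brightness.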
Operational statistics show that the Cr and Cb chrominance components of skin follow a two-dimensional Gaussian normal distribution in the YCbCr color space; according to this two-dimensional Gaussian distribution characteristic, the skin-color similarity of each image pixel is analyzed and computed with formula (2). The similarity formula is:
P(Cr, Cb) = exp[-0.5 (x - m)^T C^(-1) (x - m)]  (2)
where m = E(x) is the mean of the Cr and Cb chrominance components of the skin samples; x = (Cr, Cb)^T is the Cr and Cb chrominance value of a skin pixel; and C = E[(x - m)(x - m)^T] is the covariance matrix of the skin-color components.
After the image is converted into a normalized gray-scale map of skin-color similarity, the Otsu dynamic adaptive threshold method is selected to segment the skin-color region. The Otsu method uses the criterion of maximum separation between the target and background classes, i.e., it computes the between-class variance between them, and the threshold obtained when this variance is maximal is taken as the image segmentation threshold. When the between-class variance σ_B²(t) is maximal, the corresponding optimal threshold, as shown in formula (3), is t* = arg max_t ω₀(t)ω₁(t)[μ₀(t) − μ₁(t)]², where ω₀, ω₁ are the class probabilities and μ₀, μ₁ the class means at threshold t.
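The Otsu search described above can be sketched over a 256-bin histogram; a minimal implementation of the criterion, not the patent's code.

```c
/* Otsu threshold from a 256-bin histogram: choose t maximising the
 * between-class variance w0*w1*(mu0-mu1)^2, per formula (3). */
int otsu_threshold(const unsigned long hist[256]) {
    unsigned long total = 0;
    double sum = 0.0;
    for (int i = 0; i < 256; ++i) { total += hist[i]; sum += (double)i * hist[i]; }

    double sum_b = 0.0, w0 = 0.0, best_var = -1.0;
    int best_t = 0;
    for (int t = 0; t < 256; ++t) {
        w0 += hist[t];                       /* mass of the lower class */
        if (w0 == 0) continue;
        double w1 = (double)total - w0;      /* mass of the upper class */
        if (w1 == 0) break;
        sum_b += (double)t * hist[t];
        double mu0 = sum_b / w0, mu1 = (sum - sum_b) / w1;
        double var = w0 * w1 * (mu0 - mu1) * (mu0 - mu1);
        if (var > best_var) { best_var = var; best_t = t; }
    }
    return best_t;   /* pixels <= best_t form one class */
}
```

On a bimodal histogram the returned threshold falls between the two modes, separating skin-similarity foreground from background.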
Although the Otsu threshold segmentation algorithm can segment the skin-color regions accurately, the face and skin-like objects appearing in the image still remain, so non-hand regions must be removed. For the binarized skin-color regions, the gesture decision conditions are set as follows:
(1) The present invention analyzes and recognizes closed hand contours and considers only the case of wearing long sleeves, so that the only skin-color regions of the body in the image are the face and the hand. Many experiments show that if, after binarization, a small skin-color region (contArea), possibly a skin-like region, occupies less than 0.02 of the whole image (imgArea) area, the region is neither a hand nor a face and is rejected.
(2) The remaining skin-color regions are only the face and the hand; the height (height) to width (width) ratio k of each region is computed, and a region is the gesture skin-color region if k lies in the range [0.7, 3.0].
(3) To recognize a gesture, a complete gesture shape must appear in the window; if a skin-color region touches the acquisition window border, it is not processed and no gesture is considered, because an incomplete gesture would cause misjudgment of the recognition result.
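The three decision conditions above can be collected into one predicate; the Blob structure and field names below are illustrative, not from the patent.

```c
/* Gesture-region decision sketch: reject small / skin-like blobs (< 2% of
 * the frame), keep blobs whose height/width ratio lies in [0.7, 3.0], and
 * reject blobs touching the acquisition-window border. */
typedef struct {
    double area, height, width;  /* in pixels */
    int touches_border;          /* 1 if the blob meets the window edge */
} Blob;

int is_gesture_region(const Blob *b, double img_area) {
    if (b->area / img_area < 0.02) return 0;   /* noise or skin-like region */
    double k = b->height / b->width;
    if (k < 0.7 || k > 3.0) return 0;          /* wrong shape, e.g. not a hand */
    if (b->touches_border) return 0;           /* incomplete gesture */
    return 1;
}
```

A tall hand blob passes, while a tiny speck or a wide low blob is filtered out before feature extraction.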
For gesture completeness, "holes" are filled and burrs removed by morphological processing; the present invention applies a closing operation followed by an opening operation to morphologically transform the image.
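The close-then-open sequence can be sketched with a 3x3 square structuring element; treating border pixels as background is a simplification of this sketch, which the patent does not specify.

```c
#include <string.h>

/* Binary close-then-open sketch on a 0/1 image with a 3x3 square element. */
static void morph(const unsigned char *in, unsigned char *out,
                  int w, int h, int dilate) {
    memset(out, 0, (size_t)w * h);            /* border treated as background */
    for (int y = 1; y < h - 1; ++y)
        for (int x = 1; x < w - 1; ++x) {
            int hit = dilate ? 0 : 1;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx) {
                    int v = in[(y + dy) * w + (x + dx)];
                    if (dilate) hit |= v; else hit &= v;
                }
            out[y * w + x] = (unsigned char)hit;
        }
}

/* closing (dilate, erode) fills holes; opening (erode, dilate) removes burrs */
void close_then_open(const unsigned char *in, unsigned char *out,
                     unsigned char *tmp, int w, int h) {
    morph(in, tmp, w, h, 1);   /* dilate */
    morph(tmp, out, w, h, 0);  /* erode  -> closing done */
    morph(out, tmp, w, h, 0);  /* erode  */
    morph(tmp, out, w, h, 1);  /* dilate -> opening done */
}
```

Applied to a blob with a one-pixel hole, the closing fills the hole, which is exactly the "cavity" repair the description relies on before contour extraction.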
After the complete binary gesture image is obtained, suitable and accurate features describing the gesture must be found while keeping the computation as small as possible; the present invention therefore processes the contour of the gesture region in the image, i.e., extracts the object shape.
Image processing in the present invention uses the OpenCV function library: image contours are retrieved from the binary image with the library function cvFindContours; with mode=CV_RETR_EXTERNAL (the retrieval mode), only the outermost contours are retrieved, and the gesture contour is then drawn with the function cvDrawContours.
In the feature extraction step, seven Hu contour moments and the fingertip count are extracted from the gesture contour.
(1) Hu contour moments
Hu invariant moments are invariant to image translation, rotation, and scale; compared with common contour moments, the present invention computes only the shape moments of the gesture, which reduces the computation time and the storage space for the data.
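As a concrete illustration of the invariance claim, the first Hu moment (η20 + η02) can be computed directly from a binary region; this is a sketch — in the OpenCV C API the patent uses, cvMoments and cvGetHuMoments compute all seven.

```c
/* First Hu moment (eta20 + eta02) of a binary image, illustrating the
 * translation invariance the description relies on. */
double hu1(const unsigned char *img, int w, int h) {
    double m00 = 0, m10 = 0, m01 = 0;
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            if (img[y * w + x]) { m00 += 1; m10 += x; m01 += y; }
    double cx = m10 / m00, cy = m01 / m00;   /* centroid */
    double mu20 = 0, mu02 = 0;               /* central moments */
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            if (img[y * w + x]) {
                mu20 += (x - cx) * (x - cx);
                mu02 += (y - cy) * (y - cy);
            }
    /* normalised central moments: eta_pq = mu_pq / m00^((p+q)/2 + 1) */
    return (mu20 + mu02) / (m00 * m00);
}
```

The same shape placed at two different positions yields the same value, which is why the feature vector is robust to where the hand appears in the frame.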
(2) Fingertip detection
Repeated fingertip-detection experiments show that an accurate fingertip count can be obtained by combining fingertip curvature with the convex hull of the gesture. Fingertip curvature marks the positions where the contour changes sharply, and the contour convex hull is the minimal enclosing convex polygon of all vertices on the gesture contour; its vertices are points on the contour.
All fingertip candidate points are first determined with the curvature algorithm, the convex hull of the hand contour is then found with the convex-hull algorithm, and the hull vertices are compared with the candidate points to obtain the fingertip points and count the fingertips.
According to the above analysis, the feature vector for gesture recognition consists of the seven Hu moment features Hu1–Hu7 and the fingertip count Num, expressed as (Hu1, Hu2, Hu3, Hu4, Hu5, Hu6, Hu7, Num).
The gesture recognition of the invention is limited in sample collection and needs a classifier that can classify finite samples accurately; the present invention therefore uses the support vector machine (SVM) proposed by Vapnik as the classifier for training and recognition.
The database of the invention was built in three environments: five experimenters performed six common gestures, each gesture ten times, giving 900 samples in total, half as the training set and half as the test set. Each gesture is represented by 150 gesture images taken under a variety of conditions such as motion blur, different backgrounds, angular rotation, and scaling. For reasons of length, only the feature vector values obtained for gesture 5 in ten situations are presented, as shown in Table 1.
Table 1: (feature vector values of gesture 5)
After the six gesture databases are established, the gesture samples are trained and classified with the SVM classifier. In the classification process, the Libsvm software package is used as follows:
(a) first convert the sample-set data format. The feature values of the 450 training samples are stored in train_hand.txt; the feature values of the remaining 450 test samples are stored in test_hand.txt.
(b) scale the sample data sets. The data are scaled with the svmscale function so that the feature-value ranges are unified.
(c) train on the training data set. The sample data are trained with the svmtrain function, the optimal parameters are obtained to train the classifier, and the training model hand.model is produced.
(d) classify the test samples. The test sample set (test.txt) is classified and recognized with the svmpredict function and the trained model.
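The data format of step (a) is the LibSVM sparse text format, "label index:value ..."; a small formatter for the 8-dimensional feature vector described above (the function name and buffer handling are illustrative, not from the patent):

```c
#include <stdio.h>

/* Format one sample as a LibSVM data line ("label idx:value ..."); the
 * 8 features are the seven Hu moments plus the fingertip count. Returns
 * the number of characters written. */
int format_libsvm_line(char *buf, size_t n, int label, const double feat[8]) {
    int off = snprintf(buf, n, "%d", label);
    for (int i = 0; i < 8; ++i)
        off += snprintf(buf + off, n - off, " %d:%g", i + 1, feat[i]);
    return off;
}
```

Each training sample becomes one such line in train_hand.txt, which svmscale and svmtrain then consume.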
In the experiments, the support vector machine and the template matching method are each used for gesture recognition; the 900 images in the experimental database are tested, the experimental results of the two methods are compared, and the average recognition rate and average recognition time for the six common static gestures are obtained.
Table 2: (recognition rates of the six gestures)
Table 3: (comparison of the experimental results of the two methods)
As can be seen from Table 2, the recognition rate of each gesture with the support-vector-machine method is clearly higher than with template matching and meets the recognition-rate requirement. The recognition rate of gesture 4 is slightly lower; experimental analysis shows that most misclassified samples are recognized as gesture 3 or gesture 5, because in some cases the Hu moment feature values of gesture 4 are close to those of gestures 3 and 5 and the differences are relatively small, causing misclassification. In addition, during sample collection, how standard the gesture samples are and the complexity of the gesture background also cause misclassification. From Table 3 it can be observed that the gesture recognition algorithm based on the support vector machine guarantees a higher recognition rate in the experimental results; although its computational efficiency is lower than template matching, it still satisfies the real-time requirement and ensures the robustness and stability of the gesture recognition system.
Under the Webots robot simulation environment, gestures are acquired and recognized through the camera; the recognition result is then converted into an instruction and sent to the robot NAO (the invention selected the classic NAO robot model), whose relevant API functions are called to perform the corresponding action, realizing real-time simulated control of the robot: move forward, turn around, turn left, turn right, sit down, and stand up. The present invention performs static gesture recognition on the current frame acquired by the camera.
The simulated "world" is treated as equivalent to the real world. When the "move forward" gesture instruction is given, a fixed number of steps is set (the maximum in the present invention is 16 steps), after which the robot stops automatically and then executes the next instruction. After a "sit down" instruction is sent, only a "stand up" instruction can follow; the system will not accept other gesture instructions and shields them automatically. The time interval between gesture changes is set to 3 s, and gesture images acquired within 3 s are not processed. This guarantees that even if the experimenter makes some gesture instructions in error, the system shields them while recognizing and executing the current instruction, so the robot's normal operation is not disturbed. The instruction conversion is shown in Table 4.
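The shielding rules above (the 3 s interval and the sit-down/stand-up pairing) amount to a small gate in the control loop. The gesture codes below are illustrative placeholders; the patent's actual coding is given in Table 4.

```c
/* Command-gating sketch: gestures within 3 s of the last accepted command
 * are ignored, and after "sit down" only "stand up" is accepted.
 * Gesture codes are illustrative (4 = sit down, 5 = stand up). */
#define GESTURE_SIT   4
#define GESTURE_STAND 5

typedef struct {
    double last_cmd_time;   /* time of the last accepted command, seconds */
    int seated;             /* 1 after a "sit down" was executed */
} Gate;

/* Returns 1 if the gesture should be forwarded to the robot. */
int accept_gesture(Gate *g, int gesture, double now) {
    if (now - g->last_cmd_time < 3.0) return 0;        /* 3 s shield */
    if (g->seated && gesture != GESTURE_STAND) return 0;
    g->last_cmd_time = now;
    if (gesture == GESTURE_SIT)   g->seated = 1;
    if (gesture == GESTURE_STAND) g->seated = 0;
    return 1;
}
```

A mistaken gesture made inside the 3 s window, or any command other than "stand up" while seated, is simply dropped, matching the automatic shielding described above.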
Table 4: (gesture instruction conversion table)
The NAO robot control simulation based on gesture recognition is designed as follows:
(1) To control NAO in Webots, first open the controller, then edit the control program in the text box; initialization should be performed before the main program, with the corresponding function:
wb_robot_init();
(2) Start the body and load the "motion" files.
Body driving function:
Find_and_enable_devices();
Motion-file loading function:
load_motion_files();
(3) When gesture code 1 is received, the forward loop function is called until the next gesture instruction is received or the maximum of 16 steps is completed, after which the robot stops.
The forward loop functions are:
wbu_motion_set_loop(forwards,true);
wbu_motion_play(forwards);
(4) When NAO receives the next gesture instruction, i.e., the current movement changes, the current action must be stopped before the new action is executed.
Interrupt the current action:
wbu_motion_stop(currently_playing);
Execute the new action (the parameter motion denotes the new action instruction):
wbu_motion_play(motion);
currently_playing = motion;
(5) In the NAO turning design, the angle parameter for turning left or right is set to 90 degrees, and the angle parameter for turning around is set to 180 degrees. For example:
Turning left and turning right are expressed as:
start_motion(turn_left_90);
start_motion(turn_right_90);
Measuring the speed at which the robot reacts to instructions and computing the response-time complexity, the system's average response time to the six gesture commands is about 65 ms, showing good real-time performance and satisfying the requirements of real-time gesture control of the robot.
The present invention focuses on the key technologies of gesture segmentation and recognition; the detailed content mainly includes gesture detection and segmentation, feature extraction, gesture recognition, and gesture control of the robot. The gesture recognition experiments guarantee the stability of the algorithm and a high recognition rate, and the gesture control of the robot simulation also achieves good real-time performance.
The experiments complete the acquisition of gestures by camera and real-time simulated control of the robot NAO. The next step is to implant the algorithm into a real robot NAO, replacing the external camera with the robot's "eyes" (its camera), letting the robot autonomously acquire and recognize gesture images and finally perform the corresponding actions, with specific tasks set for the robot to complete.
By setting skin-color gesture decision conditions, the present invention can accurately locate and segment the gesture; the extracted Hu moment features and fingertip count of the gesture contour provide a more accurate feature vector for gesture classification, and using the mature SVM classifier for gesture classification and recognition guarantees the recognition rate. Experiments show that the method has good stability and real-time performance: the average gesture recognition rate reaches 94%, and real-time requirements are met when the method is applied to robot control, making human-computer interaction more natural and realistic and verifying the feasibility of the gesture recognition algorithm. The method of the present invention has important theoretical significance and practical value for improving the interaction mode between human and machine.
The above embodiments are only used to further explain and specifically describe the invention and should not be understood as limiting the protection scope of the invention; non-essential modifications and adaptations made by technicians in this field according to the above content of the invention fall within the protection scope of the present invention.

Claims (10)

1. A gesture recognition method based on the fusion of skin-color region segmentation and a machine-learning algorithm, and its application, characterized by comprising the following steps:
(1) after acquiring and preprocessing the gesture image, segmenting the skin-color region in the YCbCr color space using the Otsu adaptive-threshold algorithm;
(2) after segmentation, extracting the gesture by applying gesture-region decision conditions, and extracting Hu moment features and the fingertip count from the gesture contour as the feature vector;
(3) then classifying and recognizing six common static gestures using an SVM classifier;
(4) under the Webots simulation environment, converting the gesture recognition result into instructions to realize real-time simulated gesture control of the robot NAO.
2. The gesture recognition method based on the fusion of skin-color region segmentation and a machine-learning algorithm, and its application, according to claim 1, characterized in that: the algorithm is developed under the Webots environment on the basis of the third-party computer vision library OpenCV, which is ported into the Webots simulation environment, using the C language.
3. a kind of gesture blended based on skin-coloured regions segmentation and machine learning algorithm according to claim 1 is known Other method and its application, it is characterised in that:Step (1), including following sub-step:
(1.1) image preprocessing: smoothing and sharpening the image using median filtering and the Laplacian operator;
(1.2) for skin-color-based gesture detection, converting the image from the RGB color space to the YCbCr color space;
(1.3) converting the image into a normalized grayscale skin-color similarity map, then segmenting the skin-color region using the Otsu dynamic adaptive thresholding method; the Otsu method takes the maximum between-class variance between the target and background classes as its criterion, computing the variance between them and taking the threshold at which this variance reaches its maximum as the image segmentation threshold;
(1.4) setting gesture decision conditions for the binarized skin-color region.
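Sub-step (1.2) is the standard ITU-R BT.601 color transform. A sketch of that conversion, plus a fixed Cb/Cr skin window commonly used in the skin-detection literature (the bounds 77-127 / 133-173 are an assumption for illustration; the patent itself thresholds a similarity map with Otsu's method rather than using fixed bounds):

```python
def rgb_to_ycbcr(r, g, b):
    """Convert 8-bit RGB to YCbCr (BT.601, full-range digital form)."""
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr

def is_skin(cb, cr):
    """Typical fixed Cb/Cr skin window from the literature (illustrative)."""
    return 77 <= cb <= 127 and 133 <= cr <= 173
```

The chrominance channels Cb/Cr are largely independent of brightness, which is why skin pixels cluster compactly in this space while varying widely in RGB.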
4. The gesture recognition method based on skin-color region segmentation fused with a machine learning algorithm, and application thereof, according to claim 3, characterized in that step (1.4) comprises the following sub-steps:
(1.4.1) analyzing and recognizing closed hand contours, considering only the long-sleeved case, so that the only skin-color regions of the human body in the image are the face and the hands; any small skin-color region remaining after binarization that occupies less than 0.02 of the whole image area is likely a skin-color-like region and is rejected;
(1.4.2) the remaining skin-color regions contain only the face and hands; computing the height and width of each skin-color region: a region whose height-to-width ratio falls within [0.7, 3.0] is a gesture skin-color region;
(1.4.3) to recognize a gesture, a complete gesture shape must appear in the window; if a skin-color region is connected to the border of the acquisition window, it is not processed and no gesture is considered, because an incomplete gesture would cause the recognition result to be misjudged.
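The three decision conditions of sub-steps (1.4.1)-(1.4.3) can be sketched as a single predicate over one binarized skin-color region (the thresholds 0.02 and [0.7, 3.0] come from the claims; the bounding-box representation and function name are illustrative assumptions):

```python
def is_gesture_region(region_area, img_w, img_h, box):
    """Apply the three gesture decision conditions to one skin-color region.

    box = (x, y, w, h): the region's bounding box in pixel coordinates.
    """
    x, y, w, h = box
    # (1.4.1) reject small skin-like regions: < 2% of the whole image area
    if region_area < 0.02 * img_w * img_h:
        return False
    # (1.4.2) the height-to-width ratio must fall within [0.7, 3.0]
    if not (0.7 <= h / w <= 3.0):
        return False
    # (1.4.3) reject regions touching the window border (incomplete gesture)
    if x == 0 or y == 0 or x + w >= img_w or y + h >= img_h:
        return False
    return True
```

The border check is the strictest of the three: a hand partially outside the frame passes the area and ratio tests but would still yield a truncated contour and an unreliable classification.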
5. The gesture recognition method based on skin-color region segmentation fused with a machine learning algorithm, and application thereof, according to claim 1, characterized in that: the OpenCV function library is used for image processing; the library function cvFindContours retrieves image contours from the binary image, where the search mode mode=CV_RETR_EXTERNAL retrieves only the outermost contours; the gesture contour is then drawn using the function cvDrawContours.
6. The gesture recognition method based on skin-color region segmentation fused with a machine learning algorithm, and application thereof, according to claim 1, characterized in that: in the feature extraction step, the 7 Hu contour moments and the fingertip count are extracted from the gesture contour.
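The 7 Hu moments of claim 6 are invariant to translation, scale, and rotation, which is what makes them usable as gesture features under the varying capture conditions of claim 7. A pure-Python sketch of their computation from a binary image (the patent delegates this to OpenCV; this region-based version differs slightly from contour-based moments but uses the same standard formulas):

```python
def hu_moments(img):
    """Compute the 7 Hu invariant moments of a binary image (rows of 0/1)."""
    # raw moments m_pq
    def m(p, q):
        return sum(x**p * y**q * v
                   for y, row in enumerate(img)
                   for x, v in enumerate(row))
    m00 = m(0, 0)
    xc, yc = m(1, 0) / m00, m(0, 1) / m00      # centroid
    # central moments mu_pq (translation-invariant)
    def mu(p, q):
        return sum((x - xc)**p * (y - yc)**q * v
                   for y, row in enumerate(img)
                   for x, v in enumerate(row))
    # normalized central moments eta_pq (scale-invariant)
    def eta(p, q):
        return mu(p, q) / m00 ** (1 + (p + q) / 2)
    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    h1 = n20 + n02
    h2 = (n20 - n02)**2 + 4 * n11**2
    h3 = (n30 - 3*n12)**2 + (3*n21 - n03)**2
    h4 = (n30 + n12)**2 + (n21 + n03)**2
    h5 = ((n30 - 3*n12) * (n30 + n12) *
          ((n30 + n12)**2 - 3*(n21 + n03)**2) +
          (3*n21 - n03) * (n21 + n03) *
          (3*(n30 + n12)**2 - (n21 + n03)**2))
    h6 = ((n20 - n02) * ((n30 + n12)**2 - (n21 + n03)**2) +
          4 * n11 * (n30 + n12) * (n21 + n03))
    h7 = ((3*n21 - n03) * (n30 + n12) *
          ((n30 + n12)**2 - 3*(n21 + n03)**2) -
          (n30 - 3*n12) * (n21 + n03) *
          (3*(n30 + n12)**2 - (n21 + n03)**2))
    return [h1, h2, h3, h4, h5, h6, h7]
```

Translating the same shape elsewhere in the frame leaves all seven values unchanged, since the central moments are taken about the centroid.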
7. The gesture recognition method based on skin-color region segmentation fused with a machine learning algorithm, and application thereof, according to claim 1, characterized in that: the support vector machine (SVM) proposed by Vapnik is used as the classifier for the training and recognition samples; the database consists of 900 samples collected under 3 environments from 5 subjects each performing 6 common gestures 10 times, with half used as the training sample set and the other half as the test sample set; each gesture is represented by 150 gesture images covering conditions such as motion blur, different backgrounds, angular rotation, and scaling.
8. The gesture recognition method based on skin-color region segmentation fused with a machine learning algorithm, and application thereof, according to claim 7, characterized in that: after the 6-gesture database is built, the SVM classifier is used to train and classify the gesture samples respectively; the classification process comprises the following steps:
(a) converting the sample set to the required data format;
(b) scaling the sample data set;
(c) training on the training sample data set;
(d) classifying the test samples.
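Step (b) rescales each feature dimension to a common range before SVM training, as the libsvm svm-scale tool does; without it, large-magnitude features (e.g. fingertip count vs. tiny higher-order Hu moments) dominate the kernel. A sketch of the usual linear mapping to [-1, 1] (names are illustrative):

```python
def fit_scaler(samples):
    """Per-dimension min/max over the training set (svm-scale analog)."""
    dims = len(samples[0])
    lo = [min(s[i] for s in samples) for i in range(dims)]
    hi = [max(s[i] for s in samples) for i in range(dims)]
    return lo, hi

def scale(sample, lo, hi, a=-1.0, b=1.0):
    """Map each feature linearly from [lo_i, hi_i] to [a, b]."""
    out = []
    for v, l, h in zip(sample, lo, hi):
        out.append(a if h == l else a + (b - a) * (v - l) / (h - l))
    return out
```

The same `lo`/`hi` bounds fitted on the training half must be reused to scale the test half, so both sets live in the same feature space.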
9. The gesture recognition method based on skin-color region segmentation fused with a machine learning algorithm, and application thereof, according to claim 8, characterized in that: gesture recognition is performed with both the support vector machine and a template matching method on the 900 images in the experimental database; the results of the two methods are compared to obtain the average recognition rate and average recognition time of the 6 common static gestures.
10. The gesture recognition method based on skin-color region segmentation fused with a machine learning algorithm, and application thereof, according to claim 9, characterized in that: in the Webots robot simulation environment, gestures are captured by a camera and recognized; the recognition result is then converted into a command and sent to the robot NAO, whose API functions are called to perform the corresponding action, realizing real-time simulated control of the robot moving forward, moving backward, turning left, turning right, sitting down, and standing up by gestures.
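The mapping of claim 10 from SVM class labels to NAO actions reduces to a lookup table. A sketch of that dispatch layer (the label numbering and command strings are assumptions for illustration, not the patent's actual encoding; the real system would forward each command to the NAO API):

```python
# Illustrative mapping from the six recognized static gestures to
# robot commands; labels and command names are hypothetical.
GESTURE_COMMANDS = {
    0: "walk_forward",
    1: "walk_backward",
    2: "turn_left",
    3: "turn_right",
    4: "sit_down",
    5: "stand_up",
}

def gesture_to_command(label):
    """Convert an SVM class label to a NAO control command string."""
    return GESTURE_COMMANDS.get(label, "stop")  # unknown label -> safe stop
```

Defaulting unrecognized labels to a safe "stop" avoids sending a stale or spurious command to the robot when classification fails on a frame.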
CN201810608459.8A 2018-06-13 2018-06-13 Gesture recognition method based on skin-color region segmentation fused with machine learning algorithm, and application thereof Pending CN108846359A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810608459.8A CN108846359A (en) 2018-06-13 2018-06-13 Gesture recognition method based on skin-color region segmentation fused with machine learning algorithm, and application thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810608459.8A CN108846359A (en) 2018-06-13 2018-06-13 Gesture recognition method based on skin-color region segmentation fused with machine learning algorithm, and application thereof

Publications (1)

Publication Number Publication Date
CN108846359A true CN108846359A (en) 2018-11-20

Family

ID=64201933

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810608459.8A Pending CN108846359A (en) 2018-06-13 2018-06-13 Gesture recognition method based on skin-color region segmentation fused with machine learning algorithm, and application thereof

Country Status (1)

Country Link
CN (1) CN108846359A (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109684959A (en) * 2018-12-14 2019-04-26 武汉大学 Video gesture recognition method and device based on skin color detection and deep learning
CN109766822A (en) * 2019-01-07 2019-05-17 山东大学 Neural-network-based gesture recognition method and system
CN109961010A (en) * 2019-02-16 2019-07-02 天津大学 Gesture recognition method based on an intelligent robot
CN110197138A (en) * 2019-05-15 2019-09-03 南京极目大数据技术有限公司 Rapid gesture recognition method based on video frame features
CN110796033A (en) * 2019-10-12 2020-02-14 江苏科技大学 Static gesture recognition method based on bounding box model
CN111723698A (en) * 2020-06-05 2020-09-29 中南民族大学 Method and equipment for controlling lamplight based on gestures
CN112068705A (en) * 2020-09-15 2020-12-11 山东建筑大学 Bionic robot fish interaction control method and system based on gesture recognition
CN112101208A (en) * 2020-09-15 2020-12-18 江苏慧明智能科技有限公司 Feature series fusion gesture recognition method and device for elderly people
CN115111964A (en) * 2022-06-02 2022-09-27 中国人民解放军东部战区总医院 MR holographic intelligent helmet for individual training
CN115969511A (en) * 2023-02-14 2023-04-18 杭州由莱科技有限公司 Depilatory instrument control method, device and equipment based on identity recognition and storage medium

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109684959A (en) * 2018-12-14 2019-04-26 武汉大学 Video gesture recognition method and device based on skin color detection and deep learning
CN109684959B (en) * 2018-12-14 2021-08-03 武汉大学 Video gesture recognition method and device based on skin color detection and deep learning
CN109766822B (en) * 2019-01-07 2021-02-05 山东大学 Gesture recognition method and system based on neural network
CN109766822A (en) * 2019-01-07 2019-05-17 山东大学 Neural-network-based gesture recognition method and system
CN109961010A (en) * 2019-02-16 2019-07-02 天津大学 Gesture recognition method based on an intelligent robot
CN110197138A (en) * 2019-05-15 2019-09-03 南京极目大数据技术有限公司 Rapid gesture recognition method based on video frame features
CN110197138B (en) * 2019-05-15 2020-02-04 南京极目大数据技术有限公司 Rapid gesture recognition method based on video frame characteristics
CN110796033A (en) * 2019-10-12 2020-02-14 江苏科技大学 Static gesture recognition method based on bounding box model
CN110796033B (en) * 2019-10-12 2023-07-28 江苏科技大学 Static gesture recognition method based on bounding box model
CN111723698A (en) * 2020-06-05 2020-09-29 中南民族大学 Method and equipment for controlling lamplight based on gestures
CN112068705A (en) * 2020-09-15 2020-12-11 山东建筑大学 Bionic robot fish interaction control method and system based on gesture recognition
CN112101208A (en) * 2020-09-15 2020-12-18 江苏慧明智能科技有限公司 Feature series fusion gesture recognition method and device for elderly people
CN115111964A (en) * 2022-06-02 2022-09-27 中国人民解放军东部战区总医院 MR holographic intelligent helmet for individual training
CN115969511A (en) * 2023-02-14 2023-04-18 杭州由莱科技有限公司 Depilatory instrument control method, device and equipment based on identity recognition and storage medium
CN115969511B (en) * 2023-02-14 2023-05-30 杭州由莱科技有限公司 Dehairing instrument control method, device, equipment and storage medium based on identity recognition

Similar Documents

Publication Publication Date Title
CN108846359A (en) Gesture recognition method based on skin-color region segmentation fused with machine learning algorithm, and application thereof
Khan et al. Hand gesture recognition: a literature review
Song et al. Tracking body and hands for gesture recognition: Natops aircraft handling signals database
Yun et al. An automatic hand gesture recognition system based on Viola-Jones method and SVMs
Gurav et al. Real time finger tracking and contour detection for gesture recognition using OpenCV
CN102831404B (en) Gesture detecting method and system
Agrawal et al. Recognition of Indian Sign Language using feature fusion
CN109684959B (en) Video gesture recognition method and device based on skin color detection and deep learning
CN105975934B (en) Dynamic gesture recognition method and system for augmented reality auxiliary maintenance
CN103971102A (en) Static gesture recognition method based on finger contours and decision trees
CN102194108A (en) Smiley face expression recognition method based on clustering linear discriminant analysis of feature selection
CN104361313A (en) Gesture recognition method based on multi-kernel learning heterogeneous feature fusion
Vishwakarma et al. Simple and intelligent system to recognize the expression of speech-disabled person
Mahmood et al. A Comparative study of a new hand recognition model based on line of features and other techniques
CN109086772A (en) Recognition method and system for distorted and adhesive character image verification codes
CN108614988A (en) Automatic motion gesture recognition system under complex background
CN109871792A (en) Pedestrian detection method and device
KR20120089948A (en) Real-time gesture recognition using mhi shape information
CN103077383B (en) Human motion recognition method based on regional partitioning of spatio-temporal gradient features
Agrawal et al. A Tutor for the hearing impaired (developed using Automatic Gesture Recognition)
CN118230354A (en) Sign language recognition method based on improvement YOLOv under complex scene
CN108108648A (en) Novel gesture recognition system device and method
Nagashree et al. Hand gesture recognition using support vector machine
Heer et al. An improved hand gesture recognition system based on optimized msvm and sift feature extraction algorithm
CN109961010A (en) Gesture recognition method based on an intelligent robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20181120