CN108460340A - Gait recognition method based on a 3D dense convolutional neural network - Google Patents
Gait recognition method based on a 3D dense convolutional neural network
- Publication number
- CN108460340A CN108460340A CN201810113101.8A CN201810113101A CN108460340A CN 108460340 A CN108460340 A CN 108460340A CN 201810113101 A CN201810113101 A CN 201810113101A CN 108460340 A CN108460340 A CN 108460340A
- Authority
- CN
- China
- Prior art keywords
- neural networks
- convolutional neural
- training
- video
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/23—Recognition of whole body movements, e.g. for sport training
- G06V40/25—Recognition of walking or running movements, e.g. gait recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/30—Noise filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/103—Static body considered as a whole, e.g. static pedestrian or occupant recognition
Abstract
The invention discloses a gait recognition method based on a 3D dense convolutional neural network. The network uses 3D convolutions to capture how gait varies along the time dimension, while retaining the feature-reuse capability of the DenseNet structure. With a relatively shallow network and few training samples, the invention trains a high-performing classification model that identifies a person's identity from the gait in a video. Experiments on Dataset A of the CASIA gait database show that the method can train a practical gait recognition model even when training samples are insufficient, with the advantages of fast training, few model parameters, and a high recognition rate, and it shows considerable recognition ability both at a single viewing angle and across viewing angles.
Description
Technical field
The present invention relates to the fields of computer vision and pattern recognition, and in particular to a gait recognition method based on a 3D dense convolutional neural network.
Background technology
Compared with other biometric technologies (such as fingerprint, iris, face, and palmprint recognition), gait recognition is non-intrusive and contactless, easy to perceive, hard to hide, and hard to disguise. These advantages have attracted wide attention and research in intelligent video surveillance.
Gait recognition methods generally fall into two classes: model-based and appearance-based. The former fits the parameters of a predefined model, while the latter extracts handcrafted gait features from images or video. Model-based methods suffer from high computational complexity in model construction and parameter estimation, large data storage requirements, and poor real-time performance. Appearance-based methods focus on extracting the morphological features of a walking person from recorded video sequences; compared with model-based methods, they need not model the whole human body or parts of it, are insensitive to silhouette variation across viewing angles, and have lower computational complexity. In recent years, appearance-based methods have become the mainstream approach in gait recognition research.
In recent years, deep learning methods have shown clear superiority over conventional methods in extracting robust image features. For example, convolutional neural networks (CNNs) can automatically learn discriminative features from the given training images and thus significantly improve image classification accuracy.
However, applying CNNs to gait recognition raises two pressing problems: (1) gait recognition operates on image sequences, so gait features must be extracted from consecutive video frames rather than from single pictures; (2) to learn sufficient features, a CNN needs a large amount of training data for every class. The image classification framework of CNNs can be extended to video classification by adding a time dimension to the convolution operation, so a CNN with 3D convolutions solves problem (1). A novel network structure, the dense network (DenseNet), alleviates the vanishing gradient problem through feature reuse, strengthens feature propagation, and greatly reduces the number of parameters, so a well-performing model can be trained on a smaller training dataset; the structure and feature-reuse mechanism of the dense network alleviate problem (2).
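To make the added time dimension concrete, here is a minimal NumPy sketch (illustrative only, not the patent's network) of a single "valid" 3D convolution sliding one spatio-temporal filter over a clip of silhouette frames:

```python
import numpy as np

def conv3d_valid(clip, kernel):
    """Naive 'valid' 3D convolution over a (frames, height, width) clip.
    Sliding the kernel along the time axis as well as the two spatial
    axes lets the filter respond to motion, not just static shapes."""
    t, h, w = kernel.shape
    T, H, W = clip.shape
    out = np.zeros((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(clip[i:i + t, j:j + h, k:k + w] * kernel)
    return out

clip = np.ones((16, 32, 32))     # 16 silhouette frames of 32x32 pixels
kernel = np.ones((3, 3, 3))      # one 3x3x3 spatio-temporal filter
feat = conv3d_valid(clip, kernel)
print(feat.shape)                # (14, 30, 30): the time axis shrinks too
```

A 2D convolution applied to single frames never sees frame-to-frame change; the extra kernel axis is exactly what captures it.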
Invention content
The invention addresses the complex video preprocessing pipeline of current gait recognition techniques (for example, GEI-based gait recognition requires pedestrian silhouette extraction, gait cycle detection, GEI generation, and other processing steps) and their low recognition accuracy, especially under cross-view conditions.
The technical solution adopted by the present invention is a gait recognition method based on a 3D dense convolutional neural network. The method comprises three processes: data preprocessing, model training, and identification, as follows.
Step S1, data preprocessing;
Step S1.1, pedestrian contour extraction;
A background model is first built from frames containing only the background, and the binarized silhouette of the pedestrian is then extracted directly from each video frame by background subtraction.
Step S1.2, noise removal;
The binarized pedestrian silhouettes obtained in step S1.1 are processed with morphological operations from digital image processing to remove the noise in the images and fill in missing pixels inside the moving target, smoothing the images and yielding the best denoised pedestrian silhouette images.
Step S1.3, extraction of the bounding rectangle of the pedestrian silhouette image;
Bounding boxes are extracted from the pedestrian silhouette image obtained in step S1.2; the bounding box with the largest area is the bounding rectangle of the pedestrian silhouette.
Step S1.4, image size normalization and centering;
Without changing the shape of the pedestrian silhouette in the image, the bounding-rectangle images obtained in step S1.3 are normalized to the same size, with the silhouettes aligned across all frames.
Step S1.5, obtaining training samples;
Every N consecutive frames of the video sequence processed by steps S1.1 to S1.4 form one sample; the sample label is the pedestrian ID of that video sequence, and N is an integer between 16 and 32.
Step S2, training process;
Step S2.1, the training samples obtained in step S1 and their corresponding IDs are input into the 3D dense convolutional neural network, which extracts deep features from the video frame sequences in the training samples.
Step S2.2, the deep features learned in step S2.1 are passed through logistic regression to obtain the estimated probability that the sample belongs to each ID.
Step S2.3, the error between the true IDs and the predicted classification results is computed, and the above classification model based on the 3D dense convolutional neural network is optimized accordingly.
Step S2.4, steps S2.1 to S2.3 are repeated until the above classification model based on the 3D dense convolutional neural network converges.
Step S3, identification process;
Step S3.1, the video sequence to be identified is processed by step S1 to obtain at least one test sample.
Step S3.2, the test samples are input into the trained classification model based on the 3D dense convolutional neural network to obtain the predicted probability on each ID.
Step S3.3, the sum of the predicted probabilities on each ID is computed over all test samples from the video sequence to be identified.
Step S3.4, the ID with the largest summed prediction probability computed in step S3.3 is the identity output by the gait recognition method.
The number of training samples for each ID should be as equal as possible.
The multiple training video sequences of each ID cover multiple viewing angles.
Each ID has the same number of training samples at each viewing angle.
The video sequence to be identified yields 3 to 5 samples after step S1, and the recognition results of all samples are weighted to produce the overall recognition result.
The present invention builds a classification model based on convolutional neural networks and trains it with gait video sequences covering multiple viewing angles, so the model gains the ability to recognize gait across views. The model can directly accept test samples and output classification results. The 3D convolution operations and the use of an excellent network structure give the model a strong ability to extract gait features. On Dataset A of the CASIA gait database, the method of the present invention achieves higher recognition accuracy than other recent methods evaluated on the same dataset.
Description of the drawings
Fig. 1 is a schematic flow diagram of the gait-based recognition algorithm of the present invention.
Fig. 2 shows unprocessed video sequence images of the present invention.
Fig. 3 shows video sequence frame images after processing by step S1.1.
Fig. 4 shows video sequence frame images after processing by step S1.2.
Fig. 5 shows video sequence frame images after processing by step S1.3.
Fig. 6 shows video sequence frame images after processing by step S1.4.
Fig. 7 shows the network structure of the 3D dense convolutional neural network of the present invention.
Specific implementation mode
To make the purpose, technical solution, and advantages of the present invention clearer, the invention is described in further detail below with reference to specific embodiments and the accompanying drawings.
The overall framework of the method is shown in Fig. 1 and comprises the following steps:
Step S1, video sequence preprocessing;
Each frame of the labeled video sequences of several pedestrians is processed in the same way, comprising the following steps:
Step S1.1, the binarized pedestrian silhouette in each video frame is extracted with the motion detection method ViBe. ViBe is a background modeling method characterized by real-time detection and dynamic background updating. The algorithm does not need to learn the background from the whole video segment in advance; instead, it builds the background sample set directly from the first few frames of the segment and updates the background sample points randomly at run time. It still extracts moving-object contours robustly when the background changes dynamically (e.g., lighting, water ripples, tree shadows). Fig. 2 shows the original frames, and Fig. 3 the pedestrian binary images extracted by ViBe.
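ViBe itself keeps a per-pixel set of background samples updated at random; as a much simpler illustrative stand-in (not the patent's implementation), the sketch below models the background as the per-pixel median of the first few frames and binarizes each frame by thresholding its difference from that background:

```python
import numpy as np

# A tiny synthetic sequence: an empty scene for the first frames,
# then a bright "pedestrian" block moving across it.
frames = np.zeros((10, 20, 20), dtype=np.uint8)
for i in range(5, 10):
    frames[i, 8:14, i:i + 4] = 200

def background_subtract(frames, n_model=5, threshold=30):
    """Background = per-pixel median of the first n_model frames;
    foreground mask = |frame - background| > threshold."""
    bg = np.median(frames[:n_model].astype(np.int16), axis=0)
    diff = np.abs(frames.astype(np.int16) - bg)
    return ((diff > threshold) * 255).astype(np.uint8)

masks = background_subtract(frames)
print(masks[7, 10, 8])   # 255: a pixel inside the moving block
print(masks[2].sum())    # 0: an empty background frame
```

The `n_model` and `threshold` values are illustrative; a real ViBe implementation would also update its samples online.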
Step S1.2, as shown in Fig. 3, the extracted binary images contain noise points. Morphological operators are used to remove the noise in the binary images and, at the same time, to fill in missing pixels inside the moving target, smoothing the image and yielding the best pedestrian silhouette image. The processed result is shown in Fig. 4.
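The classic morphological recipe for this cleanup is an opening (erode, then dilate) to delete isolated noise pixels, followed by a closing (dilate, then erode) to fill small holes. A minimal self-contained sketch (structuring element and sizes are illustrative):

```python
import numpy as np

def _dilate(img, k=3):
    """Binary dilation with a k x k square structuring element."""
    pad = k // 2
    p = np.pad(img, pad)
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out |= p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def _erode(img, k=3):
    """Binary erosion with a k x k square structuring element."""
    pad = k // 2
    p = np.pad(img, pad)
    out = np.ones_like(img)
    for dy in range(k):
        for dx in range(k):
            out &= p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def open_then_close(mask):
    """Opening removes isolated noise pixels; closing fills small holes."""
    opened = _dilate(_erode(mask))
    return _erode(_dilate(opened))

mask = np.zeros((12, 12), dtype=np.uint8)
mask[2:10, 2:10] = 1   # the silhouette
mask[5, 5] = 0         # a one-pixel hole inside it
mask[0, 0] = 1         # an isolated noise pixel
clean = open_then_close(mask)
print(clean[0, 0], clean[5, 5])   # 0 1: noise removed, hole filled
```

In practice a library routine (e.g. OpenCV's `morphologyEx`) would replace these hand-rolled loops.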
Step S1.3, although the image in Fig. 4 is free of noise points after denoising, the bounding rectangle of the pedestrian silhouette must still be extracted to reduce useless background. Bounding boxes are first extracted from the silhouette image of step S1.2; the one with the largest area is exactly the bounding rectangle of the pedestrian silhouette, as in Fig. 5. After the bounding-rectangle image of the pedestrian silhouette is obtained, it is normalized with its aspect ratio preserved so that it is suitable as CNN input. To help the 3D convolutions better extract information along the time dimension, the images are further centered and aligned; the result is shown in Fig. 6. In the specific experiments, the silhouette height is fixed at P pixels with the width scaled proportionally; the width is then padded to P pixels while keeping the silhouette's vertical center axis fixed, giving P*P video sequence frame images.
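The crop-scale-pad normalization just described can be sketched as follows (a minimal NumPy illustration with nearest-neighbour resampling; the patent does not specify the interpolation method, and P = 64 is an assumed value):

```python
import numpy as np

def normalize_silhouette(mask, P=64):
    """Crop the bounding rectangle of the foreground, scale it so the
    height is exactly P pixels (width scaled by the same factor), then
    pad the width to P with the silhouette's vertical centre axis in
    the middle of the P x P frame."""
    ys, xs = np.nonzero(mask)
    crop = mask[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    h, w = crop.shape
    new_w = min(P, max(1, round(w * P / h)))
    rows = np.arange(P) * h // P          # nearest-neighbour row indices
    cols = np.arange(new_w) * w // new_w  # nearest-neighbour col indices
    resized = crop[rows][:, cols]
    out = np.zeros((P, P), dtype=mask.dtype)
    left = (P - new_w) // 2
    out[:, left:left + new_w] = resized
    return out

mask = np.zeros((100, 60), dtype=np.uint8)
mask[10:90, 20:40] = 1            # an 80-tall, 20-wide silhouette
norm = normalize_silhouette(mask)
print(norm.shape)                 # (64, 64)
```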
Step S1.4, every s consecutive frames of the video frame sequence constitute one input sample, whose label is the identity of the pedestrian's video sequence. This completes the input preprocessing and yields the labeled training samples needed to train the deep learning model.
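Cutting a preprocessed sequence into fixed-length labeled samples is a simple sliding window; a sketch (the non-overlapping stride is an assumption, since the patent only says "every s consecutive frames"):

```python
import numpy as np

def make_samples(frames, pid, s=16, stride=16):
    """Cut a preprocessed frame sequence into samples of s consecutive
    frames each, all labelled with the walker's ID."""
    samples, labels = [], []
    for start in range(0, len(frames) - s + 1, stride):
        samples.append(frames[start:start + s])
        labels.append(pid)
    return np.stack(samples), np.array(labels)

frames = np.zeros((40, 64, 64), dtype=np.uint8)   # 40 preprocessed frames
X, y = make_samples(frames, pid=7)
print(X.shape, y.shape)   # (2, 16, 64, 64) (2,)
```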
Step S2, training the classification model based on the 3D dense convolutional neural network;
The network structure is shown in Fig. 7. It comprises a 3D convolutional layer L1, whose output enters Block1. Block1 consists of multiple conv layers; each conv layer comprises a 3D convolution together with the operations preceding it: dropout, a ReLU activation, batch normalization (BN), and so on. Dropout effectively prevents overfitting; the ReLU function adds nonlinearity between the layers of the network; and the main effect of BN is to reduce gradient vanishing and accelerate convergence. The conv layers inside Block1 are densely connected. The output of Block1 enters the BlockNorm1 layer, whose main role is to regularize the output of Block1; its output in turn enters the Pooling1 layer, which uses average pooling to aggregate the feature points in small neighborhoods into new features and to reduce the dimensionality of the input. The output of Pooling1 then passes through Block2, Pooling2, Block3, Pooling3, Block4, and Pooling4 in sequence, where each Blocki (i = 2, 3, 4) performs the same operations as Block1, and each Poolingi (i = 2, 3, 4) applies, in order, a BN operation, a ReLU activation, and a pooling operation. The output of Pooling4 is reshaped into a one-dimensional vector and passed through two fully connected layers, FC1 and FC2, each followed by a regularization operation and a ReLU activation. The output of FC2 is passed through a softmax function to produce the final classification result. An appropriate learning rate is chosen and the model is trained until convergence, finally yielding a convolutional neural network classification model that can identify each of several people from their gait video frame sequences.
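The defining property of each Block is its dense connectivity: every layer receives the concatenation of all earlier feature maps, so the channel count grows per layer. The NumPy sketch below illustrates only that wiring; the per-layer transform is a random channel-mixing matrix plus ReLU, a hypothetical stand-in for the patent's dropout + 3D convolution + BN + ReLU conv layer:

```python
import numpy as np

def dense_block(x, num_layers=3, growth=4, seed=0):
    """Schematic dense connectivity: each layer sees the concatenation
    of all earlier feature maps along the channel axis, so the channel
    count grows by `growth` per layer (DenseNet-style feature reuse)."""
    rng = np.random.default_rng(seed)
    feats = x                                  # shape: (channels, frames, H, W)
    for _ in range(num_layers):
        c = feats.shape[0]
        w = rng.standard_normal((growth, c)) / np.sqrt(c)
        # stand-in "conv layer": channel mixing + ReLU
        new = np.maximum(0.0, np.einsum('oc,cthw->othw', w, feats))
        feats = np.concatenate([feats, new], axis=0)   # dense connection
    return feats

x = np.random.default_rng(1).standard_normal((2, 16, 8, 8))
out = dense_block(x)
print(out.shape)   # (2 + 3*4, 16, 8, 8) = (14, 16, 8, 8)
```

Because every earlier feature map is kept and reused, gradients reach early layers directly through the concatenations, which is the mechanism the description credits for alleviating vanishing gradients with few parameters.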
Step S3, identification process;
Step S3.1, the video sequence to be identified is processed by step S1 to obtain the sequence of samples to be identified.
Step S3.2, t samples (t is generally an integer between 3 and 5) are drawn at random from the samples to be identified and used as input to the network model obtained in step S2, giving t estimated outputs.
Step S3.3, since each model output in step S3.2 is the estimated probability that the sample belongs to each label, the sum of the t estimated probabilities corresponding to each label is computed.
Step S3.4, the label whose summed estimated probability from step S3.3 is largest is taken as the final recognition result.
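Steps S3.2 to S3.4 amount to summing the per-sample probability vectors and taking the argmax; a minimal sketch (the probability values below are made up for illustration):

```python
import numpy as np

def identify(prob_matrix):
    """Sum the softmax probabilities of the t samples drawn from the
    query video over each label and return the label with the largest
    total (steps S3.3 and S3.4)."""
    totals = prob_matrix.sum(axis=0)
    return int(np.argmax(totals))

# t = 3 samples from one query video, 4 candidate identities
probs = np.array([[0.1, 0.6, 0.2, 0.1],
                  [0.3, 0.4, 0.2, 0.1],
                  [0.2, 0.5, 0.2, 0.1]])
print(identify(probs))   # 1: identity 1 has the largest summed probability
```

Summing over several windows makes the decision robust to an unlucky single window (e.g. a partial gait cycle).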
The method of the present invention was verified on Dataset A of the CASIA gait database. The database contains 20 people, each with 12 image sequences covering 3 walking directions (at 0, 45, and 90 degrees to the image plane), with 4 image sequences per direction. Sequence length varies with walking speed; each sequence has between 37 and 127 frames. The whole database contains 13139 images. The verification process and results are as follows:
The learning rate used during training was 0.1 for steps 0-2000, 0.019 for steps 2000-4000, and 0.001 for steps 4000-8000. The trained model is about 22.4 MB and consists mainly of the network structure and the network parameters. Training takes about 3.5 h. The figures above are averages over multiple training runs.
The number of samples per angle differs within the dataset. To use as many samples as possible while keeping the per-angle experiments consistent, every experiment used a training set : test set ratio of 2:1. The experimental results are as follows (averages over 5 runs):
Angle | Training samples | Test samples | Recognition rate
---|---|---|---
0° | 2295 | 1148 | 97.82%
45° | 4400 | 2201 | 95.50%
90° | 3660 | 1830 | 97.45%
Total | 10355 | 5179 | 96.92%
The recognition rate of the model when the 3 angles are mixed (2000 training steps; other parameters as above) is as follows:
Angle | Training samples | Test samples | Correctly identified | Recognition rate
---|---|---|---|---
all | 10355 | 5179 | 4653 | 89.84%
The method of the present invention builds a classification model based on neural networks and trains it with gait video sequences covering multiple viewing angles, so that the resulting convolutional-neural-network-based model gains the ability to recognize gait across views. Tests show that the method has considerable recognition ability both at a single viewing angle and across views, and with minor extensions it can be widely applied to video surveillance scenarios.
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any person familiar with the art may, within the technical scope disclosed by the invention, readily conceive of variations or substitutions, which should all be covered by the scope of the present invention. Therefore, the protection scope of the invention shall be subject to the protection scope of the claims.
Claims (5)
1. A gait recognition method based on a 3D dense convolutional neural network, characterized in that the method comprises three processes: data preprocessing, model training, and identification, as follows;
Step S1, data preprocessing;
Step S1.1, pedestrian contour extraction;
a background model is first built from frames containing only the background, and the binarized silhouette of the pedestrian is then extracted directly from each video frame by background subtraction;
Step S1.2, noise removal;
the binarized pedestrian silhouettes obtained in step S1.1 are processed with morphological operations from digital image processing to remove the noise in the images and fill in missing pixels inside the moving target, smoothing the images and yielding the best denoised pedestrian silhouette images;
Step S1.3, extraction of the bounding rectangle of the pedestrian silhouette image;
bounding boxes are extracted from the pedestrian silhouette image obtained in step S1.2; the bounding box with the largest area is the bounding rectangle of the pedestrian silhouette;
Step S1.4, image size normalization and centering;
without changing the shape of the pedestrian silhouette in the image, the bounding-rectangle images obtained in step S1.3 are normalized to the same size, with the silhouettes aligned across all frames;
Step S1.5, obtaining training samples;
every N consecutive frames of the video sequence processed by steps S1.1 to S1.4 form one sample; the sample label is the pedestrian ID of that video sequence, and N is an integer between 16 and 32;
Step S2, training process;
Step S2.1, the training samples obtained in step S1 and their corresponding IDs are input into the 3D dense convolutional neural network; deep features of the video frame sequences in the training samples are extracted;
Step S2.2, the deep features learned in step S2.1 are passed through logistic regression to obtain the estimated probability that the sample belongs to each ID;
Step S2.3, the error between the true IDs and the predicted classification results is computed, and the above classification model based on the 3D dense convolutional neural network is optimized;
Step S2.4, steps S2.1 to S2.3 are repeated until the above classification model based on the 3D dense convolutional neural network converges;
Step S3, identification process;
Step S3.1, the video sequence to be identified is processed by step S1 to obtain at least one test sample;
Step S3.2, the test samples are input into the trained classification model based on the 3D dense convolutional neural network to obtain the predicted probability on each ID;
Step S3.3, the sum of the predicted probabilities on each ID is computed over all test samples from the video sequence to be identified;
Step S3.4, the ID with the largest summed prediction probability computed in step S3.3 is the identity obtained by the gait recognition method.
2. The gait recognition method based on a 3D dense convolutional neural network according to claim 1, characterized in that: the number of training samples for each ID should be as equal as possible.
3. The gait recognition method based on a 3D dense convolutional neural network according to claim 1, characterized in that: the multiple training video sequences of each ID cover multiple viewing angles.
4. The gait recognition method based on a 3D dense convolutional neural network according to claim 1, characterized in that: each ID has the same number of training samples at each viewing angle.
5. The gait recognition method based on a 3D dense convolutional neural network according to claim 1, characterized in that: the video sequence to be identified yields 3 to 5 samples after step S1, and all sample recognition results are weighted to produce the overall recognition result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810113101.8A CN108460340A (en) | 2018-02-05 | 2018-02-05 | Gait recognition method based on a 3D dense convolutional neural network
Publications (1)
Publication Number | Publication Date |
---|---|
CN108460340A true CN108460340A (en) | 2018-08-28 |
Family
ID=63239679
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810113101.8A Pending CN108460340A (en) | 2018-02-05 | Gait recognition method based on a 3D dense convolutional neural network
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108460340A (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109583298A (en) * | 2018-10-26 | 2019-04-05 | 复旦大学 | Cross-view gait recognition method based on set
CN109766838A (en) * | 2019-01-11 | 2019-05-17 | 哈尔滨工程大学 | Gait cycle detection method based on convolutional neural network
CN109902605A (en) * | 2019-02-20 | 2019-06-18 | 哈尔滨工程大学 | Gait recognition method based on single energy map adaptive segmentation
CN110070029A (en) * | 2019-04-17 | 2019-07-30 | 北京易达图灵科技有限公司 | Gait recognition method and device
CN110097029A (en) * | 2019-05-14 | 2019-08-06 | 西安电子科技大学 | Identity authentication method based on Highway network multi-view gait recognition
CN110222599A (en) * | 2019-05-21 | 2019-09-10 | 西安理工大学 | Gait recognition method based on Gaussian mapping
CN111160294A (en) * | 2019-12-31 | 2020-05-15 | 西安理工大学 | Gait recognition method based on graph convolution network
CN112560778A (en) * | 2020-12-25 | 2021-03-26 | 万里云医疗信息科技(北京)有限公司 | DR image body part identification method, device, equipment and readable storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104281853A (en) * | 2014-09-02 | 2015-01-14 | 电子科技大学 | Behavior identification method based on 3D convolution neural network |
CN104299012A (en) * | 2014-10-28 | 2015-01-21 | 中国科学院自动化研究所 | Gait recognition method based on deep learning |
CN105760835A (en) * | 2016-02-17 | 2016-07-13 | 天津中科智能识别产业技术研究院有限公司 | Gait segmentation and gait recognition integrated method based on deep learning |
US9633268B1 (en) * | 2015-12-18 | 2017-04-25 | Beijing University Of Posts And Telecommunications | Method and device for gait recognition |
CN107103277A (en) * | 2017-02-28 | 2017-08-29 | 中科唯实科技(北京)有限公司 | Gait recognition method based on depth camera and 3D convolutional neural networks
CN107292250A (en) * | 2017-05-31 | 2017-10-24 | 西安科技大学 | Gait recognition method based on deep neural network
- 2018-02-05 CN CN201810113101.8A patent/CN108460340A/en active Pending
Non-Patent Citations (3)
Title |
---|
HUANG G et al.: "Densely connected convolutional networks", 2017 IEEE Conference on Computer Vision and Pattern Recognition *
THOMAS WOLF et al.: "Multi-view gait recognition using 3D convolutional neural networks", 2016 IEEE International Conference on Image Processing *
YANG Xinwu et al.: "Gait recognition method based on WPD and (2D)2PCA", Journal of Beijing University of Technology *
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109583298B (en) * | 2018-10-26 | 2023-05-02 | 复旦大学 | Cross-view gait recognition method based on set |
CN109583298A (en) * | 2018-10-26 | 2019-04-05 | 复旦大学 | Cross-view gait recognition method based on set
CN109766838B (en) * | 2019-01-11 | 2022-04-12 | 哈尔滨工程大学 | Gait cycle detection method based on convolutional neural network |
CN109766838A (en) * | 2019-01-11 | 2019-05-17 | 哈尔滨工程大学 | Gait cycle detection method based on convolutional neural network
CN109902605A (en) * | 2019-02-20 | 2019-06-18 | 哈尔滨工程大学 | Gait recognition method based on single energy map adaptive segmentation
CN109902605B (en) * | 2019-02-20 | 2023-04-07 | 哈尔滨工程大学 | Gait recognition method based on single energy map adaptive segmentation |
CN110070029A (en) * | 2019-04-17 | 2019-07-30 | 北京易达图灵科技有限公司 | Gait recognition method and device
CN110070029B (en) * | 2019-04-17 | 2021-07-16 | 北京易达图灵科技有限公司 | Gait recognition method and device |
CN110097029A (en) * | 2019-05-14 | 2019-08-06 | 西安电子科技大学 | Identity authentication method based on Highway network multi-view gait recognition
CN110097029B (en) * | 2019-05-14 | 2022-12-06 | 西安电子科技大学 | Identity authentication method based on high way network multi-view gait recognition |
CN110222599B (en) * | 2019-05-21 | 2021-09-10 | 西安理工大学 | Gait recognition method based on Gaussian mapping |
CN110222599A (en) * | 2019-05-21 | 2019-09-10 | 西安理工大学 | Gait recognition method based on Gaussian mapping
CN111160294B (en) * | 2019-12-31 | 2022-03-04 | 西安理工大学 | Gait recognition method based on graph convolution network |
CN111160294A (en) * | 2019-12-31 | 2020-05-15 | 西安理工大学 | Gait recognition method based on graph convolution network
CN112560778A (en) * | 2020-12-25 | 2021-03-26 | 万里云医疗信息科技(北京)有限公司 | DR image body part identification method, device, equipment and readable storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108460340A (en) | Gait recognition method based on a 3D dense convolutional neural network | |
CN110458844B (en) | Semantic segmentation method for low-illumination scene | |
CN110084156B (en) | Gait feature extraction method and pedestrian identity recognition method based on gait features | |
CN108921019B (en) | Gait recognition method based on GEI and TripletLoss-DenseNet | |
Kocer et al. | Artificial neural networks based vehicle license plate recognition | |
CN104601964B (en) | Pedestrian target tracking method and system across non-overlapping camera views | |
CN112184752A (en) | Video target tracking method based on pyramid convolution | |
CN109871781A (en) | Dynamic gesture identification method and system based on multi-modal 3D convolutional neural networks | |
CN112597941A (en) | Face recognition method and device and electronic equipment | |
CN109359541A (en) | Sketch face recognition method based on deep transfer learning | |
CN108960059A (en) | Video action recognition method and device | |
CN108805016B (en) | Head and shoulder area detection method and device | |
CN106548159A (en) | Mesh-patterned face image recognition method and device based on fully convolutional neural networks | |
CN109002755B (en) | Age estimation model construction method and estimation method based on face image | |
CN112614136B (en) | Infrared small target real-time instance segmentation method and device | |
CN111353385B (en) | Pedestrian re-identification method and device based on mask alignment and attention mechanism | |
WO2020254857A1 (en) | Fast and robust friction ridge impression minutiae extraction using feed-forward convolutional neural network | |
CN101131728A (en) | Face shape matching method based on Shape Context | |
CN111985332B (en) | Gait recognition method of improved loss function based on deep learning | |
CN111914762A (en) | Gait information-based identity recognition method and device | |
CN111539320A (en) | Multi-view gait recognition method and system based on mutual learning network strategy | |
CN110023989A (en) | Sketch image generation method and device | |
CN111340758A (en) | Novel efficient iris image quality evaluation method based on deep neural network | |
CN111666813B (en) | Subcutaneous sweat gland extraction method using a three-dimensional convolutional neural network based on non-local information | |
CN117079095A (en) | Deep learning-based high-altitude parabolic detection method, system, medium and equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||