CN107122698A - Real-time cinema attendance statistical method based on convolutional neural networks - Google Patents
Real-time cinema attendance statistical method based on convolutional neural networks
Info
- Publication number
- CN107122698A CN107122698A CN201610569831.XA CN201610569831A CN107122698A CN 107122698 A CN107122698 A CN 107122698A CN 201610569831 A CN201610569831 A CN 201610569831A CN 107122698 A CN107122698 A CN 107122698A
- Authority
- CN
- China
- Prior art keywords
- convolutional layer
- seat
- someone
- neural networks
- video
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/2155—Generating training patterns; Bootstrap methods, e.g. bagging or boosting characterised by the incorporation of unlabelled data, e.g. multiple instance learning [MIL], semi-supervised techniques using expectation-maximisation [EM] or naïve labelling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Multimedia (AREA)
- Computational Linguistics (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a real-time cinema attendance statistical method based on convolutional neural networks. A high-definition camera and an infrared camera deployed above the cinema screen jointly capture surveillance video, from which images are read, and a pre-trained audience template is used to detect occupancy and count attendance. With a single model, the method quickly and accurately detects whether each seat is occupied; in actual testing, the detection accuracy of the invention exceeds 99.2%.
Description
Technical field
The present invention relates to the technical field of image processing, and in particular to a real-time cinema attendance statistical method based on convolutional neural networks.
Background technology
In the prior art, the patent application entitled "Real-time attendance statistical method based on high-definition video" (application No. 201310215445.7) offers the following solution: a large number of labeled seat-state images serve as training samples; a histogram-of-oriented-gradients (HOG) feature is extracted from every image and mapped by a kernel function to a high-dimensional space, where a linear classifier is built. In the seat-state discrimination stage, the input image is segmented using calibrated scene seat coordinates, the HOG feature is extracted from each sub-image, and the high-dimensional linear classifier model decides whether the corresponding seat is occupied by a person; finally, the discrimination results of all sub-images in the input image are aggregated to obtain the current attendance of the scene. Its shortcomings: 1. only a single kind of feature is extracted, so the recognition rate is low; 2. under the drastic illumination changes of a cinema, a single ordinary high-definition camera is essentially inadequate.
Similarly, the patent application entitled "Real-time venue attendance statistical method based on multiple cameras" (application No. 201310238694.8) offers this solution: cameras are installed in front of and above the seating area; a background-difference algorithm applied to the images from the two viewpoints filters out the seats whose state has changed, effectively lowering the algorithmic complexity and enabling real-time computation; HOG features are then extracted from the seat images and classified with a support vector machine (SVM); finally, the classification results of the two viewpoints are fused to reduce the influence of occlusion, and the venue attendance is obtained statistically. Its shortcomings: 1. in scenes with drastic illumination changes, background modeling is difficult and the system is not robust; 2. while watching a film, many people keep an essentially unchanged posture, which significantly impairs the motion detection in this scheme.
Similarly, the patent application entitled "Method and apparatus for determining the occupancy of a rail vehicle" (application No. 201380022250.9) offers this solution: in order to perform such a method in a particularly simple manner with sufficient precision, a detection means determines the power-on state of the mobile phones present on the rail vehicle, and an evaluation unit determines the occupancy of the rail vehicle from the detected power-on states. Its shortcomings: 1. the number of mobile phones cannot fully substitute for the number of people; for example, many children do not carry a phone, while some people carry several; 2. because people often switch off their phones on many occasions, the method is not well suited to scenes such as cinemas.
Similarly, the utility model patent entitled "Classroom student attendance monitoring system" (application No. 201220006732.8) offers this solution: the system comprises a monitoring computer, a plurality of serial servers connected to the monitoring computer, and a plurality of acquisition controllers connected to the serial servers; each acquisition controller comprises a single-chip microcomputer and sensors mounted on the classroom seats; the microcomputer scans the seat states uploaded by the sensors and transmits the attendance information to the serial server. The acquisition controllers are connected to the serial servers via an M-BUS bus, the serial servers are connected to the monitoring computer over the network via the TCP/IP protocol, and each acquisition controller further comprises a memory, a speech chip, and a loudspeaker connected in sequence. Its shortcomings: 1. installation is complex and requires too much hardware; 2. personal belongings, such as coats and other items, easily trigger the mechanical switches in the system, causing statistical errors.
Content of the invention
To solve the problems of the prior art, the present invention provides a real-time cinema attendance statistical method based on convolutional neural networks.
The technical scheme is as follows. The real-time cinema attendance statistical method based on convolutional neural networks comprises the following steps. Step 1: fix a high-definition camera and an infrared camera in front of the cinema screen; the two cameras record the seats on site, the high-definition camera recording color video and the infrared camera recording grayscale video. Step 2: mark the position of each seat with a rectangular frame in the recorded seat video, and crop the picture at each marked position. Step 3: obtain unoccupied-seat images and occupied-seat images by manual calibration, and preprocess and augment these images to obtain valid square training images. Step 4: feed the valid images into a deep convolutional neural network and train the network. Step 5: validate the trained network model on the validation set, adjust the training set according to the result, and continue training the deep convolutional neural network of step 4. Step 6: test the trained network on the test set. Step 7: acquire in real time from the surveillance video the seat pictures calibrated in step 2, accumulate the recognition results, and compute the attendance.
In a preferred scheme, step 4 is repeated until the detection accuracy on the validation set reaches the target or the network loss function begins to converge.
In a preferred scheme, seat pictures are cropped from the video frames, and 10000 occupied-seat pictures and 10000 unoccupied-seat pictures are then sorted out manually. These pictures are divided into four image sets: a training set (8000 occupied, 8000 unoccupied), a supplemental training set (1000 occupied, 1000 unoccupied), a validation set (500 occupied, 500 unoccupied), and a test set (500 occupied, 500 unoccupied).
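Purely for illustration, the following Python sketch shows one way to realize this four-way split, assuming the manually sorted seat pictures are available as two lists of file paths; the function name and the per-class counts dictionary are illustrative conveniences, not part of the patent:

```python
import random

def split_dataset(occupied_paths, unoccupied_paths, seed=0):
    """Divide 10000 occupied and 10000 unoccupied seat pictures into the
    four image sets described above (counts are per class)."""
    counts = {"training": 8000, "supplemental": 1000, "validation": 500, "test": 500}
    rng = random.Random(seed)
    sets = {}
    for label, paths in (("occupied", occupied_paths), ("unoccupied", unoccupied_paths)):
        assert len(paths) >= sum(counts.values())
        shuffled = list(paths)
        rng.shuffle(shuffled)
        start = 0
        for split, n in counts.items():
            sets.setdefault(split, {})[label] = shuffled[start:start + n]
            start += n
    return sets  # e.g. sets["training"]["occupied"] holds 8000 paths
```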
In a preferred scheme, the deep convolutional neural network of step 4 comprises: convolutional layer 1, comprising 100 groups of convolution kernels, each group of size 3*3 with a convolution stride of 1; the 100 feature maps produced by the convolution pass through a ReLU nonlinear mapping and a down-sampling with a 2*2 kernel and a stride of 2, and the 100 feature maps obtained after a regularization step are sent to convolutional layer 2. Convolutional layer 2 follows the same steps as convolutional layer 1, except that it has 200 groups of filter kernels, each group of size 2*2*100, with a convolution stride of 1; it is otherwise identical to convolutional layer 1, and its output is sent to convolutional layer 3. Convolutional layer 3 follows the same steps as convolutional layer 1, except that it has 300 groups of filter kernels, each group of size 2*2*200, with a convolution stride of 1; it is otherwise identical, and its output is sent to convolutional layer 4. Convolutional layer 4 follows the same steps as convolutional layer 1, except that it has 400 groups of filter kernels, each group of size 2*2*300, with a convolution stride of 1; it is otherwise identical, and its output is sent to fully connected layer 1. Fully connected layer 1 comprises 500 nodes; each node undergoes dropout with a probability of 50%, and the output of each node passes through a ReLU nonlinear mapping as its final output, which is sent to fully connected layer 2. Fully connected layer 2 comprises 500 nodes and operates in the same way as fully connected layer 1; its result is sent to the softmax layer. The softmax classification layer has 2 outputs, representing occupied and unoccupied respectively.
The beneficial effects of the present invention are as follows: a high-definition camera and an infrared camera deployed above the cinema screen jointly capture surveillance video, from which images are read; a pre-trained audience template is used to detect occupancy and count attendance. With a single model, the method quickly and accurately detects whether each seat is occupied; in actual testing, the detection accuracy of the invention exceeds 99.2%.
Brief description of the drawings
Fig. 1 is a schematic diagram of the method of the invention;
Fig. 2 is a schematic structural diagram of the deep convolutional neural network of the method of the invention.
Embodiment
The following description of embodiments, with reference to the accompanying drawings, illustrates specific embodiments in which the invention may be practiced. Directional terms mentioned in the present invention, such as "upper", "lower", "front", "rear", "left", "right", "inner", "outer", and "side", refer only to the directions of the accompanying drawings; they are used to illustrate and aid understanding of the present invention, not to limit it. In the figures, structurally similar units are denoted by the same reference numerals.
As shown in Fig. 1 and Fig. 2, in the real-time cinema attendance statistical method based on convolutional neural networks, the surveillance video shot by the cinema's high-definition camera and infrared camera is collected, and the position of each seat in the video is calibrated manually. Seat pictures are cropped from the video frames, and 10000 occupied-seat pictures and 10000 unoccupied-seat pictures are then sorted out manually. These pictures are divided into four image sets: a training set (8000 occupied, 8000 unoccupied), a supplemental training set (1000 occupied, 1000 unoccupied), a validation set (500 occupied, 500 unoccupied), and a test set (500 occupied, 500 unoccupied).
The training set and the supplemental training set are preprocessed to obtain valid training regions, and all pictures are augmented, as follows: 1) all pictures are scaled to 48*48 pixels; 2) the occupied regions of the pictures are marked and cropped at random: for an unoccupied-seat image, 10 sub-images of 40*40 pixels are generated at random and scaled back to 48*48 as training images to be augmented; for an occupied-seat image, 10 sub-images of 40*40 pixels are cropped at random on the basis of the pre-calibrated person region (ensuring that the overlap between each cropped sub-image and the calibrated person region exceeds 90%) and likewise scaled back to 48*48 as training images to be augmented; 3) data augmentation of the valid training images: all square images obtained in the previous step undergo several transformations to increase the amount of training data; specifically, each image is transposed and horizontally mirrored, four values chosen at random between 0.5 and 1.5 serve as variances for Gaussian blurring, four randomly chosen values serve as factors multiplying all pixels for a brightness transformation, and random salt-and-pepper noise is added; 4) all pictures are converted to grayscale in preparation for the next training step.
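As a concrete illustration of steps 1) to 4), the following Python sketch uses OpenCV and NumPy (neither of which the patent prescribes); the overlap check against the calibrated person region is omitted for brevity, and the brightness-factor range of [0.5, 1.5] is an assumption the patent does not state:

```python
import cv2
import numpy as np

def random_subimages(img48, n=10, size=40):
    """Step 2): cut n random 40*40 sub-images from a 48*48 picture and scale
    them back to 48*48 (the >90% person-region overlap check is omitted)."""
    subs = []
    for _ in range(n):
        y, x = np.random.randint(0, 48 - size + 1, size=2)
        subs.append(cv2.resize(img48[y:y + size, x:x + size], (48, 48)))
    return subs

def augment(img):
    """Step 3): transpose, horizontal mirror, four Gaussian blurs with
    variance drawn from [0.5, 1.5], four brightness factors, and
    salt-and-pepper noise; step 4): convert everything to grayscale."""
    out = [cv2.transpose(img), cv2.flip(img, 1)]
    for _ in range(4):  # four blur variants; sigma is the square root of the variance
        out.append(cv2.GaussianBlur(img, (0, 0), np.sqrt(np.random.uniform(0.5, 1.5))))
    for _ in range(4):  # four brightness variants (factor range assumed)
        factor = np.random.uniform(0.5, 1.5)
        out.append(np.clip(img.astype(np.float32) * factor, 0, 255).astype(np.uint8))
    noisy = img.copy()
    mask = np.random.rand(*img.shape[:2])
    noisy[mask < 0.01] = 0    # pepper
    noisy[mask > 0.99] = 255  # salt
    out.append(noisy)
    return [cv2.cvtColor(o, cv2.COLOR_BGR2GRAY) if o.ndim == 3 else o for o in out]
```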
The deep convolutional neural network designed by the present invention has 7 layers (from left to right: 4 convolutional layers, 2 fully connected layers, and 1 softmax layer). The parameters of each layer are as follows. Convolutional layer 1: 100 groups of convolution kernels, each group of size 3*3, with a convolution stride of 1; the 100 feature maps produced by the convolution pass through a ReLU nonlinear mapping and a down-sampling with a 2*2 kernel and a stride of 2, and the 100 feature maps obtained after a regularization step are sent to convolutional layer 2. Convolutional layer 2: the same steps as convolutional layer 1, except that there are 200 groups of filter kernels, each group of size 2*2*100, with a convolution stride of 1; otherwise identical to convolutional layer 1; the output is sent to convolutional layer 3. Convolutional layer 3: the same steps as convolutional layer 1, except that there are 300 groups of filter kernels, each group of size 2*2*200, with a convolution stride of 1; otherwise identical; the output is sent to convolutional layer 4. Convolutional layer 4: the same steps as convolutional layer 1, except that there are 400 groups of filter kernels, each group of size 2*2*300, with a convolution stride of 1; otherwise identical; the output is sent to fully connected layer 1. Fully connected layer 1: 500 nodes; each node undergoes dropout with a probability of 50%, and the output of each node passes through a ReLU nonlinear mapping as its final output, which is sent to fully connected layer 2. Fully connected layer 2: 500 nodes, operating in the same way as fully connected layer 1; the result is sent to the softmax layer. Softmax classification layer: 2 outputs, representing occupied and unoccupied respectively.
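A minimal PyTorch sketch of one plausible reading of this 7-layer network follows; the framework choice, the absence of padding, and the interpretation of "a regularization" as AlexNet-style local response normalization are assumptions rather than statements of the patent:

```python
import torch
import torch.nn as nn

class SeatNet(nn.Module):
    """Four conv blocks (conv -> ReLU -> 2*2/stride-2 max-pool -> LRN),
    two 500-node fully connected layers with 50% dropout and ReLU,
    and a 2-way output for occupied / unoccupied."""
    def __init__(self):
        super().__init__()
        def block(c_in, c_out, k):
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, kernel_size=k, stride=1),  # no padding
                nn.ReLU(inplace=True),
                nn.MaxPool2d(kernel_size=2, stride=2),
                nn.LocalResponseNorm(size=5))
        self.features = nn.Sequential(
            block(1, 100, 3),    # 48 -> 46 -> 23
            block(100, 200, 2),  # 23 -> 22 -> 11
            block(200, 300, 2),  # 11 -> 10 -> 5
            block(300, 400, 2))  # 5  -> 4  -> 2
        self.classifier = nn.Sequential(
            nn.Flatten(),                                   # 400 * 2 * 2 = 1600
            nn.Linear(1600, 500), nn.Dropout(0.5), nn.ReLU(inplace=True),
            nn.Linear(500, 500), nn.Dropout(0.5), nn.ReLU(inplace=True),
            nn.Linear(500, 2))                              # logits for 2 classes

    def forward(self, x):
        # Raw logits; apply torch.softmax(logits, dim=1) at inference time,
        # since the cross-entropy loss used in training consumes logits.
        return self.classifier(self.features(x))
```

With no padding, the spatial sizes work out to 48, 23, 11, 5, and finally 2, so the first fully connected layer sees 400*2*2 = 1600 inputs, consistent with the layer sizes stated above.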
Network training strategy: the network is trained with the data prepared in step 2). When the network loss function converges, the trained model is tested on the validation set and the wrongly detected results are analyzed; according to the types of the misclassified images, images of the corresponding types (for example, particular audience sitting postures, or non-human clutter) are found in the supplemental training set and added to the training set, and training of the network continues. Step 2) is repeated until the network loss function converges or the detection results on the validation set stabilize; the network parameters thus obtained are the parameters of the trained deep convolutional neural network for seat-occupancy detection, which can then be tested on the test set.
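The following sketch illustrates this train-validate-refine loop under the same assumptions as the SeatNet sketch above; the optimizer, learning rate, and convergence tolerance are illustrative, since the patent only requires training until the loss function converges:

```python
import torch
from torch import nn

def train_until_converged(model, train_loader, device="cpu", max_epochs=50, tol=1e-4):
    """Train until the epoch loss stops improving by more than tol."""
    model.to(device).train()
    opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    prev = float("inf")
    for _ in range(max_epochs):
        total = 0.0
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            opt.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            opt.step()
            total += loss.item()
        if abs(prev - total) < tol:  # loss function has converged
            break
        prev = total

def misclassified(model, val_loader, device="cpu"):
    """Collect validation images the model gets wrong, so that images of the
    corresponding types can be pulled from the supplemental training set."""
    model.eval()
    wrong = []
    with torch.no_grad():
        for images, labels in val_loader:
            preds = model(images.to(device)).argmax(dim=1).cpu()
            wrong.extend(img for img, p, y in zip(images, preds, labels) if p != y)
    return wrong
```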
Detecting input images and counting the attendance rate: since the network input during training is 48*48, the image at each calibrated seat position is cropped from the frames obtained from the surveillance video and scaled to 48*48; each seat image is then fed into the deep convolutional neural network of the present invention, the number of seats whose output is "occupied" is counted, and the current attendance rate is computed according to the formula "attendance rate = number of spectators / number of seats".
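A sketch of the per-frame attendance computation follows, assuming the SeatNet sketch above and a list of manually calibrated seat rectangles (x, y, w, h) in frame coordinates; frame acquisition and seat calibration are outside the sketch:

```python
import cv2
import numpy as np
import torch

def attendance_rate(model, gray_frame, seat_boxes, device="cpu"):
    """Crop every calibrated seat from a grayscale surveillance frame,
    classify each 48*48 crop, and return occupied seats / total seats."""
    crops = []
    for (x, y, w, h) in seat_boxes:
        crop = cv2.resize(gray_frame[y:y + h, x:x + w], (48, 48))
        crops.append(crop.astype(np.float32) / 255.0)
    batch = torch.from_numpy(np.stack(crops)).unsqueeze(1).to(device)  # N x 1 x 48 x 48
    model.eval()
    with torch.no_grad():
        probs = torch.softmax(model(batch), dim=1)
    occupied = int((probs.argmax(dim=1) == 1).sum())  # class 1 assumed "occupied"
    return occupied / len(seat_boxes)                 # attendance = spectators / seats
```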
The beneficial effects are as follows: a high-definition camera and an infrared camera deployed above the cinema screen jointly capture surveillance video, from which images are read; a pre-trained audience template is used to detect occupancy and count attendance. With a single model, the method quickly and accurately detects whether each seat is occupied; in actual testing, the detection accuracy of the invention exceeds 99.2%.
Claims (4)
1. A real-time cinema attendance statistical method based on convolutional neural networks, characterized by comprising the following steps:
Step 1: fixing a high-definition camera and an infrared camera in front of the cinema screen, the two cameras recording the seats on site, the high-definition camera recording color video and the infrared camera recording grayscale video;
Step 2: marking the position of each seat with a rectangular frame in the recorded seat video, and cropping the picture at each marked position;
Step 3: obtaining unoccupied-seat images and occupied-seat images by manual calibration in advance, and preprocessing and augmenting these images to obtain valid square training images;
Step 4: feeding the valid images into a deep convolutional neural network and training the network;
Step 5: validating the trained network model on a validation set, adjusting the training set according to the result, and continuing to train the deep convolutional neural network of step 4;
Step 6: testing the trained network on a test set;
Step 7: acquiring in real time from the surveillance video the seat pictures calibrated in step 2, accumulating the recognition results, and computing the attendance.
2. The method according to claim 1, characterized in that step 4 is repeated until the detection accuracy on the validation set reaches the target or the network loss function begins to converge.
3. The method according to claim 1, characterized in that seat pictures are cropped from the video frames, 10000 occupied-seat pictures and 10000 unoccupied-seat pictures are sorted out manually, and these pictures are divided into four image sets: a training set (8000 occupied, 8000 unoccupied), a supplemental training set (1000 occupied, 1000 unoccupied), a validation set (500 occupied, 500 unoccupied), and a test set (500 occupied, 500 unoccupied).
4. The method according to claim 1, characterized in that the deep convolutional neural network in step 4 comprises:
convolutional layer 1, comprising 100 groups of convolution kernels, each group of size 3*3 with a convolution stride of 1, wherein the 100 feature maps produced by the convolution pass through a ReLU nonlinear mapping and a down-sampling with a 2*2 kernel and a stride of 2, and the 100 feature maps obtained after a regularization step are sent to convolutional layer 2;
convolutional layer 2, comprising the same steps as convolutional layer 1, except that there are 200 groups of filter kernels, each group of size 2*2*100, with a convolution stride of 1, otherwise identical to convolutional layer 1, the output being sent to convolutional layer 3;
convolutional layer 3, comprising the same steps as convolutional layer 1, except that there are 300 groups of filter kernels, each group of size 2*2*200, with a convolution stride of 1, otherwise identical, the output being sent to convolutional layer 4;
convolutional layer 4, comprising the same steps as convolutional layer 1, except that there are 400 groups of filter kernels, each group of size 2*2*300, with a convolution stride of 1, otherwise identical, the output being sent to fully connected layer 1;
fully connected layer 1, comprising 500 nodes, each node undergoing dropout with a probability of 50%, the output of each node passing through a ReLU nonlinear mapping as its final output, which is sent to fully connected layer 2;
fully connected layer 2, comprising 500 nodes and operating in the same way as fully connected layer 1, the result being sent to the softmax layer;
a softmax classification layer, comprising 2 outputs representing occupied and unoccupied respectively.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610569831.XA CN107122698A (en) | 2016-07-19 | 2016-07-19 | Real-time cinema attendance statistical method based on convolutional neural networks
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610569831.XA CN107122698A (en) | 2016-07-19 | 2016-07-19 | Real-time cinema attendance statistical method based on convolutional neural networks
Publications (1)
Publication Number | Publication Date |
---|---|
CN107122698A (en) | 2017-09-01 |
Family
ID=59717099
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610569831.XA | Real-time cinema attendance statistical method based on convolutional neural networks | 2016-07-19 | 2016-07-19 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107122698A (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102646297A (en) * | 2012-04-27 | 2012-08-22 | 徐汉阳 | Intelligent theatre chain attendance statistical system and intelligent theatre chain attendance statistical method |
US20150254532A1 (en) * | 2014-03-07 | 2015-09-10 | Qualcomm Incorporated | Photo management |
CN104992177A (en) * | 2015-06-12 | 2015-10-21 | 安徽大学 | Network pornographic image detection method based on deep convolutional neural network |
Non-Patent Citations (3)
Title |
---|
ALEX KRIZHEVSKY et al.: "ImageNet Classification with Deep Convolutional Neural Networks", Conference and Workshop on Neural Information Processing Systems *
左丞 (Zuo Cheng): "Research and Implementation of People Counting Based on Image Processing and Neural Networks", China Master's Theses Full-text Database, Information Science and Technology *
胡正平 (Hu Zhengping) et al.: "New Progress of Convolutional Neural Network Classification Models in Pattern Recognition", Journal of Yanshan University *
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108010049A (en) * | 2017-11-09 | 2018-05-08 | 华南理工大学 | Method for segmenting the human hand region in stop-motion animation using fully convolutional neural networks |
CN110032930A (en) * | 2019-03-01 | 2019-07-19 | 中南大学 | Classroom people-counting method, system, device, and storage medium |
CN111241993A (en) * | 2020-01-08 | 2020-06-05 | 咪咕文化科技有限公司 | Seat number determination method and device, electronic equipment and storage medium |
CN111241993B (en) * | 2020-01-08 | 2023-10-20 | 咪咕文化科技有限公司 | Seat number determining method and device, electronic equipment and storage medium |
CN112149768A (en) * | 2020-09-17 | 2020-12-29 | 北京计算机技术及应用研究所 | Method for counting number of cinema audiences by combining video monitoring and radio frequency identification |
CN112149768B (en) * | 2020-09-17 | 2024-05-14 | 北京计算机技术及应用研究所 | Method for counting cinema audience number by integrating video monitoring and radio frequency identification |
CN112861608A (en) * | 2020-12-30 | 2021-05-28 | 浙江万里学院 | Detection method and system for distracted driving behaviors |
CN113792674A (en) * | 2021-09-17 | 2021-12-14 | 支付宝(杭州)信息技术有限公司 | Method and device for determining unoccupied seat rate and electronic equipment |
CN113792674B (en) * | 2021-09-17 | 2024-03-26 | 支付宝(杭州)信息技术有限公司 | Method and device for determining unoccupied seat rate and electronic equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107122698A (en) | Real-time cinema attendance statistical method based on convolutional neural networks | |
CN108334848B (en) | Tiny face recognition method based on a generative adversarial network | |
CN106845357B (en) | Video face detection and recognition method based on a multichannel network | |
Yi et al. | EagleEye: Wearable camera-based person identification in crowded urban spaces | |
CN103824070B (en) | Rapid pedestrian detection method based on computer vision | |
CN110210276A (en) | Motion track acquisition method and device, storage medium, and terminal | |
CN109284738B (en) | Irregular face correction method and system | |
CN104933414B (en) | Living-body face detection method based on WLD-TOP | |
CN104463117B (en) | Face recognition sample collection method and system based on video | |
CN109598242B (en) | Living body detection method | |
CN106897698B (en) | Classroom people number detection method and system based on machine vision and binocular collaborative technology | |
CN109376637A (en) | Passenger number statistical system based on video monitoring image processing | |
CN107330390B (en) | People counting method based on image analysis and deep learning | |
CN108334847A (en) | Face recognition method based on deep learning in real scenes | |
TWI769787B (en) | Target tracking method and apparatus, storage medium | |
Liu et al. | VisDrone-CC2021: the vision meets drone crowd counting challenge results | |
CN108564673A (en) | Class attendance checking method and system based on global face recognition | |
CN108596041A (en) | Video-based face liveness detection method | |
Li et al. | Image manipulation localization using attentional cross-domain CNN features | |
CN114241422A (en) | Student classroom behavior detection method based on ESRGAN and improved YOLOv5s | |
CN112036257A (en) | Non-perception face image acquisition method and system | |
CN113536972A (en) | Self-supervision cross-domain crowd counting method based on target domain pseudo label | |
CN113762009B (en) | Crowd counting method based on multi-scale feature fusion and double-attention mechanism | |
CN110532959B (en) | Real-time violent behavior detection system based on two-channel three-dimensional convolutional neural network | |
CN108198202A (en) | Video content detection method based on optical flow and neural networks |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20170901 |