
CN109711322A - Person-vehicle separation method based on RFCN - Google Patents

Person-vehicle separation method based on RFCN

Info

Publication number
CN109711322A
CN109711322A
Authority
CN
China
Prior art keywords
target
people
rfcn
vehicle
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811585378.7A
Other languages
Chinese (zh)
Inventor
刘珊
瞿关明
朱健立
谢自强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin Tiandi Weiye Information System Integration Co Ltd
Original Assignee
Tianjin Tiandi Weiye Information System Integration Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin Tiandi Weiye Information System Integration Co Ltd
Priority to CN201811585378.7A
Publication of CN109711322A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

The present invention provides a person-vehicle separation method based on RFCN. A computer obtains video frames of the monitored scene directly from a camera and applies simple pre-processing to each image. The whole image is then detected with a pre-trained deep-learning detection model, which identifies and localizes all person and vehicle targets in the image and outputs each target's bounding rectangle, class, and confidence. The computer post-processes these detection results, removes low-confidence and highly overlapping bounding rectangles, and outputs the remainder as the final result. Finally, consecutive video frames are analyzed jointly to judge the motion state of each target and its dwell time in the scene, which serve as reference conditions for deciding whether to trigger an alarm. The person-vehicle separation method based on RFCN of the present invention can accurately classify targets and regress their positions, detects both moving and stationary targets, and can accurately filter out non-person, non-vehicle targets.

Description

Person-vehicle separation method based on RFCN
Technical field
The invention belongs to the field of video surveillance, and more particularly relates to a person-vehicle separation method based on RFCN.
Background technique
With the development of high technology and the progress of society, video surveillance systems are being applied in more and more settings. At present, the alarm capability of surveillance cameras cannot satisfy the wide variety of complex scenes and increasingly diverse requirements. In particular, alarm scenarios require that the surveillance system not only accurately detect the targets in the scene, but also automatically identify each target's class and location. In typical alarm scenarios the main monitored targets are people and vehicles, so the detected target classes are also person and vehicle; a method is therefore needed that can accurately classify the detected targets. Existing alarm cameras can roughly provide a classification decision for detected targets using methods such as cascade detectors, support vector machines, or simple BP neural-network classifiers, but these methods adapt to only a limited range of scenes, have low classification accuracy, and perform especially poorly at recognizing and classifying small targets. Moreover, methods that pre-process with background modeling can only detect moving targets and cannot satisfy the need to detect stationary targets in the scene, which makes them difficult to popularize and apply widely.
Summary of the invention
In view of this, the present invention aims to propose a person-vehicle separation method based on RFCN, so as to provide an alarm capability for surveillance scenes, to quickly and accurately locate and identify all targets in the monitored scene, and to help the relevant personnel analyze the class of the target that triggered the alarm and its motion state.
To achieve the above objectives, the technical solution of the present invention is realized as follows:
A person-vehicle separation method based on RFCN comprises the following steps: 1) a computer obtains video frames of the monitored scene directly from a camera and applies simple pre-processing to each image; 2) the whole image is detected with a pre-trained deep-learning detection model, which identifies and localizes all person and vehicle targets in the image and outputs each target's bounding rectangle, class, and confidence; 3) the computer post-processes the detection results of step 2), removes low-confidence and highly overlapping bounding rectangles, and outputs the remainder as the final result; 4) the computer jointly analyzes consecutive video frames to judge the motion state of each target and its dwell time in the scene, which serve as reference conditions for deciding whether to trigger an alarm.
Further, the specific method of step 1) includes denoising the image with a mean-filtering algorithm; if the image is relatively blurry, for example at night, moderate sharpening can be applied, and the whole image is then passed to detection.
Further, the method of detecting the whole image in step 2) includes:
201) constructing a person/vehicle sample set for training the model;
202) building the deep-learning network architecture based on RFCN;
203) configuring the relevant training parameters and training the model;
204) using the trained model to detect people and vehicles and output the detection results.
Further, the specific method of constructing the person/vehicle training sample set in step 201) includes: obtaining video frames from a large number of real surveillance scenes, covering a variety of real conditions such as different viewing angles and different lighting, and containing people and vehicles under conditions that would trigger an alarm. Random data augmentation is applied to the original images, including random combinations of rotation, mirroring, cropping, histogram equalization, grayscale conversion, and changes of brightness, saturation, and contrast, so as to generate a rich sample set. An annotation tool is then used to mark the bounding rectangles of people and vehicles in the images and assign their classes, and the images together with the corresponding annotation files are saved as the sample set.
Further, the method of building the RFCN-based deep-learning network architecture in step 202) includes: using a ResNet-18 network as the base network for image feature extraction, and feeding the feature maps output by the base network into the RFCN object-detection framework to finally detect and identify targets. The sizes and aspect ratios of the anchors should match, as far as possible, the scale and aspect-ratio range of the targets in the sample set, so that the network can compute the loss from the predicted box position (x, y, w, h) and confidence together with the ground-truth position and class, and minimize the final loss through repeated iteration.
Further, configuring the training parameters in step 203) mainly means setting the network's initial learning rate to 0.001; each time the number of iterations reaches four times the total number of training samples, the learning rate is reduced to 0.5 times its previous value, and the total number of iterations is generally set to ten times the total number of samples. Following the back-propagation principle, the model is trained iteratively with the stochastic gradient descent algorithm until the network's loss value drops to a low value.
Further, in step 204) the trained model is used to detect people and vehicles and output the detection results: the pre-trained model is applied to the image to be detected, and the detection results output by the model include the coordinates of each target's bounding rectangle, the class of the target, and the corresponding confidence.
Further, the method of post-processing the detection results in step 3) mainly includes: setting a threshold on each target's confidence value and removing targets with lower confidence; setting an IOU parameter and, among highly overlapping boxes, removing those with lower confidence, since such boxes are likely to correspond to the same target, so that only the box with the highest confidence is retained; and, according to the size of the detection box, removing detections that do not fall within the target size range of the actual scene.
Further, jointly analyzing consecutive video frames in step 4) mainly means judging, from the position of the same target in the detection results of consecutive frames, whether the target is stationary or moving, and optionally tracking the target's motion trajectory. The number of video frames in which the same target is detected is used to estimate how long the target has been in the scene, which serves as a reference condition for deciding whether to trigger an alarm.
Compared with the prior art, the person-vehicle separation method based on RFCN of the present invention has the following advantages:
(1) The person-vehicle separation method based on RFCN of the present invention extracts features with a ResNet-18 network and uses RFCN as the detection framework, which enables accurate class judgment and position regression for targets in the image; its detection of small targets is also good, and the model has strong generalization ability and can satisfy the demands of a variety of different scenes. The model detects the whole image directly, so it can detect not only moving targets but also stationary targets; it produces few false detections and can accurately filter out non-person, non-vehicle targets. Analyzing surveillance video with this algorithm achieves intelligent recognition, classification, and screening of the targets in the scene, greatly reducing the labor and time cost of manual analysis while maintaining high detection accuracy, and thus lays a foundation for intelligent analysis of surveillance video.
Detailed description of the invention
The accompanying drawings, which form a part of the present invention, are provided to aid understanding of the invention; the illustrative embodiments of the invention and their descriptions are used to explain the invention and do not constitute an improper limitation of the invention. In the drawings:
Fig. 1 is a schematic flow diagram of the person-vehicle separation method based on RFCN according to the embodiment of the present invention.
Specific embodiment
It should be noted that, as long as no conflict arises, the embodiments of the present invention and the features of the embodiments may be combined with one another.
In the description of the present invention, it should be understood that terms such as "center", "longitudinal", "lateral", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", and "outer" indicate orientations or positional relationships based on those shown in the drawings; they are used only for convenience and simplicity of description and do not indicate or imply that the devices or elements referred to must have a particular orientation or be constructed and operated in a particular orientation, and therefore should not be understood as limiting the invention. In addition, terms such as "first" and "second" are used for descriptive purposes only and should not be understood as indicating or implying relative importance or implicitly indicating the number of the technical features referred to; a feature qualified by "first" or "second" may thus explicitly or implicitly include one or more of that feature. In the description of the present invention, unless otherwise stated, "multiple" means two or more.
In the description of the present invention, it should also be noted that, unless otherwise explicitly specified and limited, the terms "mounted", "connected to", and "connected" are to be understood broadly: a connection may be a fixed connection, a detachable connection, or an integral connection; it may be a mechanical connection or an electrical connection; and it may be a direct connection, an indirect connection through an intermediary, or internal communication between two elements. For a person of ordinary skill in the art, the specific meanings of the above terms in the present invention can be understood according to the specific circumstances.
The present invention will be described in detail below with reference to the accompanying drawings and embodiments.
Explanation of terms
RFCN is a deep-learning object-detection network architecture (Region-based Fully Convolutional Network).
Anchors denote the set of candidate reference boxes used by the detection network.
Loss denotes the value of the network's loss function.
A person-vehicle separation method based on RFCN, as shown in Fig. 1, comprises the following steps:
1) Video frames of the monitored scene are obtained directly from the camera, and simple pre-processing is applied to each image.
2) The whole image is detected with a pre-trained deep-learning detection model, which identifies and localizes all person and vehicle targets in the image and outputs each target's bounding rectangle, class, and confidence.
3) The detection results of step 2) are post-processed; low-confidence and highly overlapping bounding rectangles are removed, and the remainder is output as the final result.
4) Consecutive video frames are analyzed jointly to judge the motion state of each target and its dwell time in the scene, which serve as reference conditions for deciding whether to trigger an alarm; a high-level sketch of this four-step pipeline is given below.
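For illustration only, the following is a minimal Python/OpenCV sketch of how such a pipeline could be orchestrated; the detect, postprocess, and analyze_tracks callables stand in for steps 2)–4) and are assumptions, not the actual implementation described in this patent.

```python
import cv2

def run_pipeline(camera_url, detect, postprocess, analyze_tracks):
    """Skeleton of steps 1)-4): grab frames, preprocess, detect, post-process, analyze."""
    cap = cv2.VideoCapture(camera_url)      # step 1: read frames directly from the camera
    history = []                            # per-frame detections kept for the temporal analysis
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        frame = cv2.blur(frame, (3, 3))     # simple mean-filter denoising (step 1)
        dets = detect(frame)                # step 2: detector returns a list of
                                            #         (x1, y1, x2, y2, class_id, confidence)
        dets = postprocess(dets)            # step 3: confidence / overlap / size filtering
        history.append(dets)
        for alarm in analyze_tracks(history):   # step 4: motion state and dwell time per target
            print("alarm candidate:", alarm)
    cap.release()
```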
The specific method of step 1) includes denoising the image with a mean-filtering algorithm; if the image is relatively blurry, for example at night, moderate sharpening can be applied, and the whole image is then passed to detection.
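As a rough illustration of this pre-processing, the sketch below uses OpenCV's box (mean) filter for denoising and an unsharp-masking step for sharpening; the kernel size and sharpening amount are illustrative assumptions rather than values taken from the patent.

```python
import cv2

def preprocess(frame, blur_ksize=3, sharpen=False, amount=1.0):
    """Mean-filter denoising, with optional unsharp-mask sharpening for blurry (e.g. night) frames."""
    denoised = cv2.blur(frame, (blur_ksize, blur_ksize))        # mean filtering
    if sharpen:
        blurred = cv2.GaussianBlur(denoised, (0, 0), 2)
        # unsharp mask: original + amount * (original - blurred)
        denoised = cv2.addWeighted(denoised, 1 + amount, blurred, -amount, 0)
    return denoised
```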
The detection method of step 2) includes:
201) constructing a person/vehicle sample set for training the model;
202) building the deep-learning network architecture based on RFCN;
203) configuring the relevant training parameters and training the model;
204) using the trained model to detect people and vehicles and output the detection results.
Further, the specific method of constructing the person/vehicle training sample set in step 201) includes: obtaining video frames from a large number of real surveillance scenes, covering a variety of real conditions such as different viewing angles and different lighting, and containing people and vehicles under conditions that would trigger an alarm; applying random data augmentation to the original images, including random combinations of rotation, mirroring, cropping, histogram equalization, grayscale conversion, and changes of brightness, saturation, and contrast, so as to generate a rich sample set; and then using an annotation tool to mark the bounding rectangles of people and vehicles in the images and assign their classes, saving the images together with the corresponding annotation files as the sample set.
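A minimal sketch of the kind of random augmentation described here, assuming OpenCV and NumPy; the parameter ranges are illustrative assumptions, and a real pipeline would also transform the box annotations along with each geometric change.

```python
import cv2
import numpy as np

def augment(img, rng=None):
    """Randomly apply a few of the augmentations named in the text (box geometry not handled here)."""
    rng = rng or np.random.default_rng()
    if rng.random() < 0.5:                                   # horizontal mirror
        img = cv2.flip(img, 1)
    if rng.random() < 0.5:                                   # small random rotation
        h, w = img.shape[:2]
        m = cv2.getRotationMatrix2D((w / 2, h / 2), rng.uniform(-10, 10), 1.0)
        img = cv2.warpAffine(img, m, (w, h))
    if rng.random() < 0.3:                                   # histogram equalization on luminance
        yuv = cv2.cvtColor(img, cv2.COLOR_BGR2YUV)
        yuv[:, :, 0] = cv2.equalizeHist(yuv[:, :, 0])
        img = cv2.cvtColor(yuv, cv2.COLOR_YUV2BGR)
    if rng.random() < 0.3:                                   # grayscale conversion (kept 3-channel)
        img = cv2.cvtColor(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), cv2.COLOR_GRAY2BGR)
    alpha = rng.uniform(0.8, 1.2)                            # contrast
    beta = rng.uniform(-20, 20)                              # brightness
    return cv2.convertScaleAbs(img, alpha=alpha, beta=beta)
```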
Further, the method of building the RFCN-based deep-learning network architecture in step 202) includes: using a ResNet-18 network as the base network for image feature extraction, and feeding the feature maps output by the base network into the RFCN object-detection framework to finally detect and identify targets. The sizes and aspect ratios of the anchors should match, as far as possible, the scale and aspect-ratio range of the targets in the sample set, so that the network can compute the loss from the predicted box position (x, y, w, h) and confidence together with the ground-truth position and class, and minimize the final loss through repeated iteration.
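To make the backbone-plus-detection-head structure concrete, the following PyTorch sketch wires a ResNet-18 feature extractor to a position-sensitive classification head using torchvision's ps_roi_pool, which is the core idea of R-FCN. It is a simplified illustration under stated assumptions (no region-proposal network, no box-regression branch); every layer choice beyond "ResNet-18 backbone feeding an R-FCN-style head" is an assumption, not the patent's implementation.

```python
import torch.nn as nn
import torchvision
from torchvision.ops import ps_roi_pool

class RFCNHead(nn.Module):
    """Position-sensitive classification head in the spirit of R-FCN, on a ResNet-18 backbone.

    Region proposals are taken as given; in a full detector they would come from an RPN whose
    anchor sizes and ratios are chosen to match the person/vehicle statistics of the sample set.
    """
    def __init__(self, num_classes=3, k=7):                  # classes: background, person, vehicle
        super().__init__()
        backbone = torchvision.models.resnet18(weights=None)
        self.features = nn.Sequential(*list(backbone.children())[:-2])   # 512 x H/32 x W/32 feature map
        self.k = k
        # position-sensitive score maps: k*k maps per class
        self.cls_score_maps = nn.Conv2d(512, k * k * num_classes, kernel_size=1)

    def forward(self, images, rois):
        # images: (N, 3, H, W); rois: (R, 5) as (batch_index, x1, y1, x2, y2) in image coordinates
        fmap = self.features(images)
        score_maps = self.cls_score_maps(fmap)
        # pool each RoI into a k x k grid of position-sensitive scores, then vote by averaging
        pooled = ps_roi_pool(score_maps, rois, output_size=self.k, spatial_scale=1.0 / 32)
        return pooled.mean(dim=(2, 3))                        # (R, num_classes) class logits
```

In a full detector a parallel set of 4·k·k position-sensitive maps would supply the box regression, and the RPN anchors would be sized to the person/vehicle scales in the data, as the text above requires.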
Further, configuring the training parameters in step 203) mainly means setting the network's initial learning rate to 0.001; each time the number of iterations reaches four times the total number of training samples, the learning rate is reduced to 0.5 times its previous value, and the total number of iterations is generally set to ten times the total number of samples. Following the back-propagation principle, the model is trained iteratively with the stochastic gradient descent algorithm until the network's loss value drops to a low value. The network's loss is divided into a classification loss L_conf(x, c) and a regression loss L_loc(x, l, g), and the total loss is a weighted sum of the two. The loss is computed from the predicted box position l = (x, y, w, h) and confidence output by the network together with the ground truth g = (x, y, w, h). Consistent with the symbol definitions below, the total loss takes the standard two-term form
L = (1 / N_cls) · Σ_i L_conf(x, c) + β · (1 / N_loc) · Σ_i x_i · L_loc(l, g).
Here i denotes the i-th predicted box and j denotes the target class; y_ij indicates whether the class represented by the i-th predicted box matches the j-th class, taking the value 1 when they match and 0 otherwise; x_ij denotes the probability that the i-th predicted box belongs to the class represented by the j-th ground-truth box; and L_conf(x, c) denotes the classification loss. x_i is set to 1 if the IOU between the i-th predicted box and a ground-truth box is greater than 0.7, set to 0 if the IOU is less than 0.3, and the box does not participate in training otherwise. In practice, if the settings of N_cls and N_loc differ too much, the two terms can be balanced with the parameter β (for example, with N_cls = 256 and N_loc = 2400, β may be set to 10).
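As an illustration of this training setup, the PyTorch sketch below pairs SGD with the described step schedule and a two-term loss balanced by β. The learning rate 0.001, decay factor 0.5, schedule lengths, N_cls, N_loc, and β come from the text above; every other numeric choice (momentum, the use of smooth-L1 for the regression term) is an assumption.

```python
import torch
import torch.nn.functional as F

def detection_loss(cls_logits, cls_targets, box_preds, box_targets, pos_mask,
                   n_cls=256, n_loc=2400, beta=10.0):
    """Two-term loss: cross-entropy over sampled boxes plus smooth-L1 over positive boxes, balanced by beta."""
    l_conf = F.cross_entropy(cls_logits, cls_targets, reduction="sum") / n_cls
    l_loc = F.smooth_l1_loss(box_preds[pos_mask], box_targets[pos_mask], reduction="sum") / n_loc
    return l_conf + beta * l_loc

def make_optimizer(model, num_samples):
    """SGD at lr 0.001, halved every 4 * num_samples iterations, run for 10 * num_samples iterations."""
    opt = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
    sched = torch.optim.lr_scheduler.StepLR(opt, step_size=4 * num_samples, gamma=0.5)
    total_iters = 10 * num_samples
    return opt, sched, total_iters
```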
Further, in step 204) the trained model is used to detect people and vehicles and output the detection results: the pre-trained model is applied to the image to be detected, and the detection results output by the model include the coordinates of each target's bounding rectangle, the class of the target, and the corresponding confidence.
The post-processing method of step 3) mainly includes: setting a confidence threshold of 0.7 and, according to each target's confidence value, removing targets with lower confidence, since such targets are likely false detections by the network; setting an IOU parameter of 0.6 and, among highly overlapping boxes, removing those with lower confidence, since these boxes are likely to correspond to the same target, so that only the box with the highest confidence is retained; and, according to the size of the detection box, removing detections that do not fall within the target size range of the actual scene.
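A minimal sketch of this post-processing, assuming detections given as (x1, y1, x2, y2, class_id, confidence) tuples; the 0.7 confidence and 0.6 IOU values come from the text above, while the size-filter bounds are illustrative assumptions.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def postprocess(dets, conf_thresh=0.7, iou_thresh=0.6, min_area=100, max_area=500_000):
    """Confidence filter, greedy overlap suppression keeping the highest-confidence box, and a size filter."""
    dets = [d for d in dets if d[5] >= conf_thresh]
    dets.sort(key=lambda d: d[5], reverse=True)
    kept = []
    for d in dets:
        if all(iou(d[:4], k[:4]) < iou_thresh for k in kept):
            kept.append(d)
    area = lambda d: (d[2] - d[0]) * (d[3] - d[1])
    return [d for d in kept if min_area <= area(d) <= max_area]
```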
In step 4), the joint analysis of consecutive video frames mainly judges, from the position of the same target in the detection results of consecutive frames, whether the target is stationary or moving, and the target's motion trajectory can also be tracked. The number of video frames in which the same target has been detected is used to estimate how long the target has been in the scene, which serves as a reference condition for deciding whether to trigger an alarm.
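As a rough illustration of this temporal analysis, the sketch below associates detections across frames by box overlap, counts how many frames each track has been seen (its dwell time), and flags a track as stationary if its position has barely changed. The association and motion thresholds are assumptions, and the iou helper from the post-processing sketch above is reused.

```python
def update_tracks(tracks, dets, frame_idx, iou_match=0.3, move_eps=5.0):
    """Greedy IoU association of current detections to existing tracks; returns the updated track list.

    Each track is a dict: {"box": last box, "first": first frame seen, "frames": frames seen, "moving": bool}.
    """
    unmatched = list(dets)
    for t in tracks:
        best, best_iou = None, iou_match
        for d in unmatched:
            o = iou(t["box"], d[:4])
            if o > best_iou:
                best, best_iou = d, o
        if best is not None:
            shift = abs(best[0] - t["box"][0]) + abs(best[1] - t["box"][1])
            t["moving"] = shift > move_eps                   # crude stationary/moving decision
            t["box"] = best[:4]
            t["frames"] += 1
            unmatched.remove(best)
    for d in unmatched:                                      # start a new track for each unmatched detection
        tracks.append({"box": d[:4], "first": frame_idx, "frames": 1, "moving": False})
    return tracks

def dwell_frames(track):
    """Number of frames the target has been detected: the time-in-scene reference condition."""
    return track["frames"]
```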
The above is merely a description of preferred embodiments of the present invention and is not intended to limit the invention; any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (9)

1. A person-vehicle separation method based on RFCN, characterized in that it specifically comprises the following steps:
1) a computer obtains video frames of the monitored scene directly from a camera and applies simple pre-processing to each image;
2) the whole image is detected with a pre-trained deep-learning detection model, which identifies and localizes all person and vehicle targets in the image and outputs each target's bounding rectangle, class, and confidence;
3) the computer post-processes the detection results of step 2), removes low-confidence and highly overlapping bounding rectangles, and outputs the remainder as the final result;
4) the computer jointly analyzes consecutive video frames to judge the motion state of each target and its dwell time in the scene, which serve as reference conditions for deciding whether to trigger an alarm.
2. The person-vehicle separation method based on RFCN according to claim 1, characterized in that the specific method of step 1) includes denoising the image with a mean-filtering algorithm and, for relatively blurry images such as those captured at night, applying moderate sharpening, and then passing the whole image to detection.
3. The person-vehicle separation method based on RFCN according to claim 2, characterized in that the method of detecting the whole image in step 2) includes:
201) constructing a person/vehicle sample set for training the model;
202) building the deep-learning network architecture based on RFCN;
203) configuring the relevant training parameters and training the model;
204) using the trained model to detect people and vehicles and output the detection results.
4. The person-vehicle separation method based on RFCN according to claim 3, characterized in that the specific method of constructing the person/vehicle training sample set in step 201) includes: obtaining video frames from a large number of real surveillance scenes, covering a variety of real conditions such as different viewing angles and different lighting, and containing people and vehicles under conditions that would trigger an alarm; applying random data augmentation to the original images, including random combinations of rotation, mirroring, cropping, histogram equalization, grayscale conversion, and changes of brightness, saturation, and contrast, so as to generate a rich sample set; and then using an annotation tool to mark the bounding rectangles of people and vehicles in the images and assign their classes, saving the images together with the corresponding annotation files as the sample set.
5. The person-vehicle separation method based on RFCN according to claim 3, characterized in that the method of building the RFCN-based deep-learning network architecture in step 202) includes: using a ResNet-18 network as the base network for image feature extraction, and feeding the feature maps output by the base network into the RFCN object-detection framework to finally detect and identify targets; wherein the sizes and aspect ratios of the anchors should match, as far as possible, the scale and aspect-ratio range of the targets in the sample set, so that the network can compute the loss from the predicted box position (x, y, w, h) and confidence together with the ground-truth position and class, and minimize the final loss through repeated iteration.
6. The person-vehicle separation method based on RFCN according to claim 3, characterized in that configuring the training parameters in step 203) mainly means setting the network's initial learning rate to 0.001; each time the number of iterations reaches four times the total number of training samples, the learning rate is reduced to 0.5 times its previous value, and the total number of iterations is generally set to ten times the total number of samples; following the back-propagation principle, the model is trained iteratively with the stochastic gradient descent algorithm until the network's loss value drops to a low value.
7. The person-vehicle separation method based on RFCN according to claim 3, characterized in that in step 204) the trained model is used to detect people and vehicles and output the detection results, mainly by applying the pre-trained model to the image to be detected; the detection results output by the model include the coordinates of each target's bounding rectangle, the class of the target, and the corresponding confidence.
8. The person-vehicle separation method based on RFCN according to claim 3, characterized in that the method of post-processing the detection results in step 3) mainly includes: setting a threshold on each target's confidence value and removing targets with lower confidence; setting an IOU parameter and, among highly overlapping boxes, removing those with lower confidence, since such boxes are likely to correspond to the same target, so that only the box with the highest confidence is retained; and, according to the size of the detection box, removing detections that do not fall within the target size range of the actual scene.
9. The person-vehicle separation method based on RFCN according to claim 1, characterized in that the joint analysis of consecutive video frames in step 4) mainly judges, from the position of the same target in the detection results of consecutive frames, whether the target is stationary or moving, and the target's motion trajectory can also be tracked; the number of video frames in which the same target has been detected is used to estimate how long the target has been in the scene, which serves as a reference condition for deciding whether to trigger an alarm.
CN201811585378.7A 2018-12-24 2018-12-24 Person-vehicle separation method based on RFCN Pending CN109711322A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811585378.7A CN109711322A (en) 2018-12-24 2018-12-24 Person-vehicle separation method based on RFCN

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811585378.7A CN109711322A (en) 2018-12-24 2018-12-24 Person-vehicle separation method based on RFCN

Publications (1)

Publication Number Publication Date
CN109711322A true CN109711322A (en) 2019-05-03

Family

ID=66257293

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811585378.7A Pending CN109711322A (en) 2018-12-24 2018-12-24 A kind of people's vehicle separation method based on RFCN

Country Status (1)

Country Link
CN (1) CN109711322A (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108062349A (en) * 2017-10-31 2018-05-22 深圳大学 Video frequency monitoring method and system based on video structural data and deep learning
CN108229407A (en) * 2018-01-11 2018-06-29 武汉米人科技有限公司 A kind of behavioral value method and system in video analysis
CN108197613A (en) * 2018-02-12 2018-06-22 天津天地伟业信息系统集成有限公司 A kind of Face datection optimization algorithm based on depth convolution cascade network
CN108427920A (en) * 2018-02-26 2018-08-21 杭州电子科技大学 A kind of land and sea border defense object detection method based on deep learning
CN108875595A (en) * 2018-05-29 2018-11-23 重庆大学 A kind of Driving Scene object detection method merged based on deep learning and multilayer feature

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
FENG ZHANG et al.: "Detection of Road Surface Identifiers Based on Deep Learning", 2018 3rd International Conference on Computer Science and Information Engineering *
李超凡 et al.: "Real-time road congestion detection based on the learning algorithm SSD", 《软件导刊》 (Software Guide) *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110519485B (en) * 2019-09-09 2021-08-31 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment
CN110519485A (en) * 2019-09-09 2019-11-29 Oppo广东移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN110633680A (en) * 2019-09-19 2019-12-31 天津天地伟业机器人技术有限公司 Video-based people number abnormity detection method
CN110688924A (en) * 2019-09-19 2020-01-14 天津天地伟业机器人技术有限公司 RFCN-based vertical monocular passenger flow volume statistical method
CN110866512A (en) * 2019-11-21 2020-03-06 南京大学 Monitoring camera shielding detection method based on video classification
CN111046797A (en) * 2019-12-12 2020-04-21 天地伟业技术有限公司 Oil pipeline warning method based on personnel and vehicle behavior analysis
CN111046822A (en) * 2019-12-19 2020-04-21 山东财经大学 Large vehicle anti-theft method based on artificial intelligence video identification
CN111898475A (en) * 2020-07-10 2020-11-06 浙江大华技术股份有限公司 Method and device for estimating state of non-motor vehicle, storage medium, and electronic device
CN111931654A (en) * 2020-08-11 2020-11-13 精英数智科技股份有限公司 Intelligent monitoring method, system and device for personnel tracking
CN111950517A (en) * 2020-08-26 2020-11-17 司马大大(北京)智能系统有限公司 Target detection method, model training method, electronic device and storage medium
CN112183558A (en) * 2020-09-30 2021-01-05 北京理工大学 Target detection and feature extraction integrated network based on YOLOv3
CN112465691A (en) * 2020-11-25 2021-03-09 北京旷视科技有限公司 Image processing method, image processing device, electronic equipment and computer readable medium
CN113554008A (en) * 2021-09-18 2021-10-26 深圳市安软慧视科技有限公司 Method and device for detecting static object in area, electronic equipment and storage medium
CN113554008B (en) * 2021-09-18 2021-12-31 深圳市安软慧视科技有限公司 Method and device for detecting static object in area, electronic equipment and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20190503)