
CN107396094A - Automatic detection method for single-camera damage in a multi-camera monitoring system - Google Patents

Automatic detection method for single-camera damage in a multi-camera monitoring system

Info

Publication number
CN107396094A
CN107396094A CN201710704123.7A
Authority
CN
China
Prior art keywords
camera
network
image
probability
file
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710704123.7A
Other languages
Chinese (zh)
Other versions
CN107396094B (en)
Inventor
袁泽峰
李恒宇
饶进军
丁长权
谢少荣
罗均
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Shanghai for Science and Technology
Original Assignee
University of Shanghai for Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Shanghai for Science and Technology
Priority to CN201710704123.7A
Publication of CN107396094A
Application granted
Publication of CN107396094B
Legal status: Active
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00Diagnosis, testing or measuring for television systems or their details
    • H04N17/002Diagnosis, testing or measuring for television systems or their details for television cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • Biophysics (AREA)
  • Signal Processing (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to an automatic detection method for single-camera damage in a multi-camera monitoring system, comprising the following steps: (1) collect images and build the training and test sample set; (2) construct the neural network; (3) train the neural network parameters while observing the network error, until the network converges; (4) deploy the trained network as an application and monitor in real time. The method is based on convolutional neural networks, an efficient recognition approach that has developed rapidly in recent years and attracted wide attention. The raw video image passes through a series of feature-processing stages such as convolutional layers, pooling layers and activation layers, and the network finally outputs the image class. The method can completely replace manual inspection and offers automatic, real-time and efficient detection with high accuracy, requires no additional hardware, and is low in cost.

Description

Automatic detection method for single-camera damage in a multi-camera monitoring system
Technical field
The present invention relates to an automatic detection method for single-camera damage in a multi-camera monitoring system, and belongs to the field of video surveillance.
Background art
A video monitoring system is the physical basis for real-time monitoring of key departments or important places in every industry. Through it, management departments can obtain valid image data, monitor and record sudden abnormal events in time, and use it to command efficiently and promptly, deploy police forces quickly, and solve cases. A monitoring system composed of multiple cameras realizes the dual functions of monitoring and communication, and comprehensively meets the monitoring and emergency-command needs of fields such as traffic, water conservancy, oil fields, banking and telecommunications. Meeting these needs necessarily requires that every camera in the monitoring system work normally. Because the monitoring system usually monitors around the clock, twenty-four hours a day without interruption, the system must automatically notify maintenance personnel in real time when a camera is damaged, so that it can be repaired or replaced.
In existing video monitoring systems, the maintenance and monitoring of cameras rely mainly on manual work. Real-time monitoring cannot be realized, the damaged camera cannot be determined promptly and accurately, and checking the cameras one by one is time-consuming, laborious and inefficient.
Summary of the invention
In view of the defects of the prior art, the object of the present invention is to provide an automatic detection method for single-camera damage in a multi-camera monitoring system, which is automatic, real-time, efficient and accurate, requires no additional hardware, and is low in cost.
The technical solution adopted by the present invention to solve the technical problem is:
An automatic detection method for single-camera damage in a multi-camera monitoring system comprises the following steps:
(1) Collect images and build the training and test sample set;
(1a) Collect original images: simultaneously capture n images from each of m surveillance cameras, where m and n are positive integers; scale them to 300 × 300 and place them in m corresponding folders, the n pictures in each folder being numbered from 1 to n in order of acquisition time;
(1b) Make damage images: for each number from 1 to n, decide with 50% probability whether the picture of that number should be damaged. If so, randomly select one folder from the m folders, each folder being selected with probability 1/m, and add a randomly shaped, solid-color image block of random color to the picture of that number in the chosen folder; the block covers more than 30% of the picture area and represents camera damage. Each image generates a text file of the same name whose content is the picture label: 1 if the picture is damaged, 0 otherwise (a minimal sketch of this procedure is given after step (1c));
(1c) Training stage: the m images of one number are input in the order of the m folders; in the connection layer, the m images of size 300 × 300 × 3 are merged on the channel axis into 300 × 300 × (3 × m), and the merged label is set to the index of the folder whose image label is 1. For example, if for a given number the image label of folder 1 is 1 and the labels of that number in the remaining m − 1 folders are 0, the merged data label is set to 1, representing a class-1 sample; if the image labels of that number in all m folders are 0, the merged label is 0, representing a class-0 sample.
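The damage-simulation and labeling procedure of steps (1b) and (1c) can be sketched in a few lines of Python. The sketch below is illustrative only: the patent does not prescribe a language or library, the folder names cam_1 ... cam_m are hypothetical, a rectangle stands in for the "randomly shaped" block, and the frames are assumed to have already been collected and scaled to 300 × 300 as in step (1a).

import os
import random
from PIL import Image, ImageDraw

M, N = 4, 20000                                   # cameras and frames per camera (example values)
FOLDERS = [f"cam_{i + 1}" for i in range(M)]      # hypothetical folder names, one per camera

def simulate_damage(img):
    """Paste a solid-color block covering at least 30% of the frame."""
    w, h = img.size
    bw = random.randint(int(0.55 * w), w)         # 0.55 * 0.55 > 0.3, so the block area is >= 30%
    bh = random.randint(int(0.55 * h), h)
    x0 = random.randint(0, w - bw)
    y0 = random.randint(0, h - bh)
    color = tuple(random.randint(0, 255) for _ in range(3))
    ImageDraw.Draw(img).rectangle([x0, y0, x0 + bw, y0 + bh], fill=color)
    return img

for idx in range(1, N + 1):
    name = f"{idx:05d}"
    damaged = None
    if random.random() < 0.5:                     # damage this frame number with 50% probability
        damaged = random.choice(FOLDERS)          # each folder is picked with probability 1/M
        path = os.path.join(damaged, name + ".jpg")
        simulate_damage(Image.open(path)).save(path)
    for folder in FOLDERS:                        # one same-named label file per image: 1 = damaged, 0 = intact
        with open(os.path.join(folder, name + ".txt"), "w") as f:
            f.write("1" if folder == damaged else "0")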
(2) Construct the neural network;
The input of the network is the real-time image frames of the m cameras, input in a fixed order, each image scaled to 300 × 300 × 3, i.e. 300 pixels wide, 300 pixels high, 3 channels per color image. The following connection layer merges the m camera images on the channel axis into 300 × 300 × (3 × m). Finally the fully connected layer outputs m + 1 values, representing the probabilities of the m + 1 classes: if class 0 has the largest probability, all m cameras are working normally; if class 1 has the largest probability, camera 1 is not working normally; if class 2 has the largest probability, camera 2 is not working normally; if class 3 has the largest probability, camera 3 is not working normally; and so on, up to class m, whose largest probability means camera m is not working normally.
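As a rough illustration of such a network, the following PyTorch sketch merges the m frames on the channel axis and ends in a fully connected layer with m + 1 outputs. The patent fixes neither a framework nor the layer counts and kernel sizes, so everything apart from the channel merge and the (m + 1)-way output is an assumption.

import torch
import torch.nn as nn

class MultiCamDamageNet(nn.Module):
    def __init__(self, m: int):
        super().__init__()
        self.features = nn.Sequential(            # convolution / activation / pooling stages (illustrative sizes)
            nn.Conv2d(3 * m, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, m + 1)   # m+1 outputs: class 0 = all normal, class i = camera i damaged

    def forward(self, frames):                    # frames: list of m tensors, each (batch, 3, 300, 300), channel-first
        x = torch.cat(frames, dim=1)              # "connection layer": merge on the channel axis -> (batch, 3*m, 300, 300)
        x = self.features(x).flatten(1)
        return self.classifier(x)                 # raw scores Z_0 ... Z_m; a softmax turns them into class probabilities

m = 4
net = MultiCamDamageNet(m)
scores = net([torch.rand(1, 3, 300, 300) for _ in range(m)])
print(scores.shape)                               # torch.Size([1, 5])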
(3) Train the neural network parameters while observing the network error, until the network converges;
(3a) Input the samples into the network; after the connection layer merges the images and the sample label is generated, the network operates on them, and the fully connected layer outputs m + 1 values according to the following formula, representing the probabilities of the m + 1 classes:
Z_i = W_i X + b
where X is the output matrix of the layer preceding the fully connected output layer, W_i is the weight matrix of the i-th output unit of the fully connected layer, b is the bias of the fully connected layer, and Z_i is the output value of the i-th output unit; i takes m + 1 values from 0 to m, the Z_i respectively representing the class probabilities of the m + 1 classes;
(3b) The class output values of the fully connected layer are input into the SoftmaxWithLoss layer, which computes the loss of the network according to the following formula, and the network parameters are trained by back-propagation:

J(\theta) = -\left[ \sum_{j=0}^{k} 1\{y = j\} \log \frac{Z_j}{\sum_{i=0}^{k} Z_i} \right]
where k denotes the class, y denotes the sample label, and Z_i refers to the value of the i-th output.
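As a small numeric check of these two formulas, the following NumPy snippet computes the class scores and the loss for one made-up sample; the exponential in the softmax below follows the usual SoftmaxWithLoss implementation rather than the ratio of raw Z values written above, and all numbers are arbitrary.

import numpy as np

m = 4                             # 4 cameras -> m + 1 = 5 classes
X = np.random.rand(128)           # output of the layer feeding the fully connected layer
W = np.random.rand(m + 1, 128)    # one weight row W_i per output unit
b = np.random.rand(m + 1)         # bias of the fully connected layer

Z = W @ X + b                     # Z_i = W_i X + b, one score per class
p = np.exp(Z) / np.exp(Z).sum()   # softmax: the m + 1 scores become class probabilities

y = 1                             # example label: camera 1 damaged
loss = -np.log(p[y])              # loss for this sample: -log of the true-class probability
print(p.round(3), loss)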
(4) Deploy the trained network as an application and monitor in real time;
Deploy the trained network described above. The input of the network is the real-time image frames of the m cameras, input in a fixed order, each image scaled to 300 × 300 × 3, i.e. 300 pixels wide, 300 pixels high, 3 channels per color image. The following connection layer merges the m camera images on the channel axis into 300 × 300 × (3 × m); the convolutional layers, pooling layers and activation layers then operate on the merged input. Finally the fully connected layer outputs m + 1 values representing the probabilities of the m + 1 classes: if class 0 has the largest probability, all m cameras are working normally; if class 1 has the largest probability, camera 1 is not working normally; if class 2 has the largest probability, camera 2 is not working normally; if class 3 has the largest probability, camera 3 is not working normally; and if class m has the largest probability, camera m is not working normally. In this way the automatic detection of single-camera damage in a multi-camera monitoring system is realized.
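A monitoring loop of the kind described in step (4) could look like the sketch below. It reuses the hypothetical MultiCamDamageNet class from the earlier sketch; the RTSP addresses, the checkpoint file name and the notify() hook are placeholders, and OpenCV is only one of many ways to grab the frames.

import cv2
import torch

CAMERA_URLS = ["rtsp://cam1", "rtsp://cam2", "rtsp://cam3", "rtsp://cam4"]   # placeholder stream addresses
caps = [cv2.VideoCapture(url) for url in CAMERA_URLS]

net = MultiCamDamageNet(len(caps))
net.load_state_dict(torch.load("trained_weights.pt"))      # hypothetical checkpoint produced by step (3)
net.eval()

def notify(camera_index):
    print(f"camera {camera_index} appears damaged - notify maintenance")

with torch.no_grad():
    while True:
        grabs = [cap.read() for cap in caps]                # one frame per camera, in a fixed order
        if not all(ok for ok, _ in grabs):
            continue                                        # skip this cycle if any stream dropped a frame
        frames = [
            torch.from_numpy(cv2.resize(frame, (300, 300))) # scale to 300 x 300; channel order must match training
                 .permute(2, 0, 1).float().unsqueeze(0) / 255.0
            for _, frame in grabs
        ]
        cls = net(frames).argmax(dim=1).item()              # class 0 = all normal, class k = camera k damaged
        if cls > 0:
            notify(cls)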
Compared with the prior art, the present invention has the following advantages:
The method of the present invention is based on convolutional neural networks, an efficient recognition approach that has developed rapidly in recent years and attracted wide attention. The raw video image passes through a series of feature-processing stages such as convolutional layers, pooling layers and activation layers, and the network finally outputs the image class. The method can completely replace manual inspection and offers automatic, real-time and efficient detection with high accuracy, requires no additional hardware, and is low in cost.
Brief description of the drawings
Fig. 1 is the workflow diagram of an embodiment of the method of the invention.
Fig. 2 is a schematic diagram of the training samples of the method of the invention and their class labels.
Fig. 3 is a structural diagram of the neural network constructed and used in the embodiment of the method of the invention.
Detailed description of the embodiments
A specific embodiment of the present invention is further described below with reference to the accompanying drawings.
With reference to Fig. 1, the embodiment of the present invention is described in further detail with m = 4 and n = 20000:
An automatic detection method for single-camera damage in a multi-camera monitoring system comprises the following steps:
Step 1: collect images and build the training and test sample set.
First step, collect original images: simultaneously capture 20000 images from each of four surveillance cameras, scale them to 300 × 300 and place them in four corresponding folders, the 20000 pictures in each folder being numbered from 00001 to 20000 in order of acquisition time.
Second step, make damage images: for each number from 00001 to 20000, decide with 50% probability whether the picture of that number should be damaged. If so, randomly select one folder from the four folders, each folder being selected with probability 25%, and add a randomly shaped, solid-color image block of random color to the picture of that number in the chosen folder; the block covers more than 30% of the picture area and represents camera damage. Each image generates a text file of the same name whose content is the picture label: 1 if the picture is damaged, 0 otherwise.
Third step, training stage: the four images of one number are input in the order of the four folders; in the connection layer, the four images of size 300 × 300 × 3 are merged on the channel axis into 300 × 300 × 12, and the merged label is set to the index of the folder whose image label is 1. For example, if for a given number the image label of folder 1 is 1 and the labels of that number in the other three folders are 0, the merged data label is set to 1, representing a class-1 sample; if the image labels of that number in all four folders are 0, the merged label is 0, representing a class-0 sample. Fig. 2 summarizes the classes and labels of the method: class 0 means all four cameras work normally; class 1 means the first of the four cameras is damaged; class 2 means the second camera is damaged; class 3 means the third camera is damaged; class 4 means the fourth camera is damaged.
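The label scheme of Fig. 2 amounts to a small mapping from the four per-folder labels of one frame number to a single class index, roughly as in the illustrative snippet below (by construction of the second step, at most one camera is marked damaged per number):

def merged_class(per_camera_labels):
    """[0, 0, 0, 0] -> 0 (all normal); [0, 1, 0, 0] -> 2 (second camera damaged); etc."""
    for i, damaged in enumerate(per_camera_labels, start=1):
        if damaged:
            return i           # class i: the i-th camera is damaged
    return 0                   # class 0: all four cameras work normally

print(merged_class([0, 0, 0, 0]))   # 0
print(merged_class([1, 0, 0, 0]))   # 1
print(merged_class([0, 0, 0, 1]))   # 4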
Step 2: construct the neural network used by the method.
Fig. 3 is the structural diagram of the network. The input of the network is the real-time image frames of the four cameras, input in a fixed order, each image scaled to 300 × 300 × 3, i.e. 300 pixels wide, 300 pixels high, 3 channels per color image. The following connection layer merges the four camera images on the channel axis into 300 × 300 × 12. Finally the fully connected layer outputs 5 values representing the probabilities of the five classes: if class 0 has the largest probability, all four cameras work normally; if class 1 has the largest probability, camera 1 is not working normally; if class 2 has the largest probability, camera 2 is not working normally; if class 3 has the largest probability, camera 3 is not working normally; if class 4 has the largest probability, camera 4 is not working normally.
Step 3: train the network with the samples.
First step, input the samples into the network, merge the images in the connection layer and generate the sample labels; the network then operates on them, and the fully connected layer outputs 5 values according to the following formula, representing the probabilities of the five classes:
Z_i = W_i X + b
where X is the output matrix of the layer preceding the fully connected output layer, W_i is the weight matrix of the i-th output unit of the fully connected layer, b is the bias of the fully connected layer, and Z_i is the output value of the i-th output unit; i takes 5 values from 0 to 4, the Z_i respectively representing the class probabilities of the five classes.
Second step, the class output values of the fully connected layer are input into the SoftmaxWithLoss layer, which computes the loss of the network according to the following formula, and the network parameters are trained by back-propagation. During training the network error keeps decreasing and the test accuracy keeps improving; when the network converges, training is complete and a set of network parameters with high detection accuracy is obtained.

J(\theta) = -\left[ \sum_{j=0}^{k} 1\{y = j\} \log \frac{Z_j}{\sum_{i=0}^{k} Z_i} \right]
where k denotes the class, y denotes the sample label, and Z_i refers to the value of the i-th output.
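A training loop for this step might look like the sketch below, reusing the hypothetical MultiCamDamageNet from the earlier sketch; nn.CrossEntropyLoss plays the role of the SoftmaxWithLoss layer, and the dummy batches, batch size, learning rate and epoch count are stand-ins for the real sample set and training schedule.

import torch
import torch.nn as nn

net = MultiCamDamageNet(m=4)
criterion = nn.CrossEntropyLoss()                       # combines the softmax and the loss, like SoftmaxWithLoss
optimizer = torch.optim.SGD(net.parameters(), lr=0.01, momentum=0.9)

# dummy batches standing in for the merged training samples of step 1, just to make the sketch runnable:
# each batch is (list of 4 frame tensors of shape (B, 3, 300, 300), merged class labels of shape (B,))
batches = [([torch.rand(2, 3, 300, 300) for _ in range(4)],
            torch.randint(0, 5, (2,))) for _ in range(4)]

for epoch in range(3):
    running_loss = 0.0
    for frames, labels in batches:
        optimizer.zero_grad()
        loss = criterion(net(frames), labels)           # forward pass and softmax loss over the 5 classes
        loss.backward()                                 # back-propagate and update the network parameters
        optimizer.step()
        running_loss += loss.item()
    print(epoch, running_loss)                          # the error should keep falling until the network converges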
Step 4: deploy the trained network as an application.
Deploy the trained network described above. The input of the network is the real-time image frames of the four cameras, input in a fixed order, each image scaled to 300 × 300 × 3, i.e. 300 pixels wide, 300 pixels high, 3 channels per color image. The following connection layer merges the four camera images on the channel axis into 300 × 300 × 12; the convolutional layers, pooling layers and activation layers then operate on the merged input. Finally the fully connected layer outputs 5 values representing the probabilities of the five classes: if class 0 has the largest probability, all four cameras work normally; if class 1 has the largest probability, camera 1 is not working normally; if class 2 has the largest probability, camera 2 is not working normally; if class 3 has the largest probability, camera 3 is not working normally; if class 4 has the largest probability, camera 4 is not working normally. In this way the automatic detection of single-camera damage in a multi-camera monitoring system is realized.

Claims (5)

1. An automatic detection method for single-camera damage in a multi-camera monitoring system, characterized in that the specific steps are as follows:
(1) collect images and build the training and test sample set;
(2) construct the neural network;
(3) train the neural network parameters while observing the network error, until the network converges;
(4) deploy the trained network as an application and monitor in real time.
2. The automatic detection method for single-camera damage in a multi-camera monitoring system according to claim 1, characterized in that the specific steps of step (1) are as follows:
(1a) collect original images: simultaneously capture n images from each of m surveillance cameras, where m and n are positive integers; scale them to 300 × 300 and place them in m corresponding folders, the n pictures in each folder being numbered from 1 to n in order of acquisition time;
(1b) make damage images: for each number from 1 to n, decide with 50% probability whether the picture of that number should be damaged; if so, randomly select one folder from the m folders, each folder being selected with probability 1/m, and add a randomly shaped, solid-color image block of random color to the picture of that number in the chosen folder; the block covers more than 30% of the picture area and represents camera damage; each image generates a text file of the same name whose content is the picture label: 1 if the picture is damaged, 0 otherwise;
(1c) training stage: the m images of one number are input in the order of the m folders; in the connection layer, the m images of size 300 × 300 × 3 are merged on the channel axis into 300 × 300 × (3 × m), and the merged label is set to the index of the folder whose image label is 1.
3. The automatic detection method for single-camera damage in a multi-camera monitoring system according to claim 1, characterized in that the specific steps of step (2) are as follows: the input of the network is the real-time image frames of the m cameras, input in a fixed order, each image scaled to 300 × 300 × 3, i.e. 300 pixels wide, 300 pixels high, 3 channels per color image; the following connection layer merges the m camera images on the channel axis into 300 × 300 × (3 × m); finally the fully connected layer outputs m + 1 values representing the probabilities of the m + 1 classes: if class 0 has the largest probability, all m cameras work normally; if class 1 has the largest probability, camera 1 is not working normally; if class 2 has the largest probability, camera 2 is not working normally; if class 3 has the largest probability, camera 3 is not working normally; and if class m has the largest probability, camera m is not working normally.
4. The automatic detection method for single-camera damage in a multi-camera monitoring system according to claim 1, characterized in that the specific steps of step (3) are as follows:
(3a) input the samples into the network; after the connection layer merges the images and the sample label is generated, the network operates on them, and the fully connected layer outputs m + 1 values according to the following formula, representing the probabilities of the m + 1 classes:
Z_i = W_i X + b
where X is the output matrix of the layer preceding the fully connected output layer, W_i is the weight matrix of the i-th output unit of the fully connected layer, b is the bias of the fully connected layer, and Z_i is the output value of the i-th output unit; i takes m + 1 values from 0 to m, the Z_i respectively representing the class probabilities of the m + 1 classes;
(3b) the class output values of the fully connected layer are input into the SoftmaxWithLoss layer, which computes the loss of the network according to the following formula, and the network parameters are trained by back-propagation:
J(\theta) = -\left[ \sum_{j=0}^{k} 1\{y = j\} \log \frac{Z_j}{\sum_{i=0}^{k} Z_i} \right]
where k denotes the class, y denotes the sample label, and Z_i refers to the value of the i-th output.
5. The automatic detection method for single-camera damage in a multi-camera monitoring system according to claim 1, characterized in that the specific steps of step (4) are as follows: deploy the trained network described above; the input of the network is the real-time image frames of the m cameras, input in a fixed order, each image scaled to 300 × 300 × 3, i.e. 300 pixels wide, 300 pixels high, 3 channels per color image; the following connection layer merges the m camera images on the channel axis into 300 × 300 × (3 × m); the convolutional layers, pooling layers and activation layers then operate on the merged input; finally the fully connected layer outputs m + 1 values representing the probabilities of the m + 1 classes: if class 0 has the largest probability, all m cameras work normally; if class 1 has the largest probability, camera 1 is not working normally; if class 2 has the largest probability, camera 2 is not working normally; if class 3 has the largest probability, camera 3 is not working normally; and if class m has the largest probability, camera m is not working normally; in this way the automatic detection of single-camera damage in a multi-camera monitoring system is realized.
CN201710704123.7A 2017-08-17 2017-08-17 Automatic detection method for single-camera damage in a multi-camera monitoring system Active CN107396094B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710704123.7A CN107396094B (en) 2017-08-17 2017-08-17 Automatic detection method for single-camera damage in a multi-camera monitoring system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710704123.7A CN107396094B (en) 2017-08-17 2017-08-17 Automatic detection method for single-camera damage in a multi-camera monitoring system

Publications (2)

Publication Number Publication Date
CN107396094A true CN107396094A (en) 2017-11-24
CN107396094B CN107396094B (en) 2019-02-22

Family

ID=60353113

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710704123.7A Active CN107396094B (en) 2017-08-17 2017-08-17 Automatic detection method for single-camera damage in a multi-camera monitoring system

Country Status (1)

Country Link
CN (1) CN107396094B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109063761A (en) * 2018-07-20 2018-12-21 北京旷视科技有限公司 Diffuser dropping detection method, device and electronic equipment
CN110868586A (en) * 2019-11-08 2020-03-06 北京转转精神科技有限责任公司 Automatic detection method for defects of camera

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020072874A1 (en) * 2000-11-22 2002-06-13 Bernd Michaelis Method of detecting flaws in the structure of a surface
CN101282481A (en) * 2008-05-09 2008-10-08 中国传媒大学 Method for evaluating video quality based on artificial neural net
US7457458B1 (en) * 1999-11-26 2008-11-25 Inb Vision Ag. Method and apparatus for defining and correcting image data
CN102098530A (en) * 2010-12-02 2011-06-15 惠州Tcl移动通信有限公司 Method and device for automatically distinguishing quality of camera module
US20160379352A1 (en) * 2015-06-24 2016-12-29 Samsung Electronics Co., Ltd. Label-free non-reference image quality assessment via deep neural network
CN106650919A (en) * 2016-12-23 2017-05-10 国家电网公司信息通信分公司 Information system fault diagnosis method and device based on convolutional neural network
CN106650932A (en) * 2016-12-23 2017-05-10 郑州云海信息技术有限公司 Intelligent fault classification method and device for data center monitoring system
CN106686377A (en) * 2016-12-30 2017-05-17 佳都新太科技股份有限公司 Algorithm for determining video key area based on deep neural network
CN106709511A (en) * 2016-12-08 2017-05-24 华中师范大学 Urban rail transit panoramic monitoring video fault detection method based on depth learning
CN106991668A (en) * 2017-03-09 2017-07-28 南京邮电大学 A kind of evaluation method of day net camera shooting picture

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7457458B1 (en) * 1999-11-26 2008-11-25 Inb Vision Ag. Method and apparatus for defining and correcting image data
US20020072874A1 (en) * 2000-11-22 2002-06-13 Bernd Michaelis Method of detecting flaws in the structure of a surface
CN101282481A (en) * 2008-05-09 2008-10-08 中国传媒大学 Method for evaluating video quality based on artificial neural net
CN102098530A (en) * 2010-12-02 2011-06-15 惠州Tcl移动通信有限公司 Method and device for automatically distinguishing quality of camera module
US20160379352A1 (en) * 2015-06-24 2016-12-29 Samsung Electronics Co., Ltd. Label-free non-reference image quality assessment via deep neural network
CN106709511A (en) * 2016-12-08 2017-05-24 华中师范大学 Urban rail transit panoramic monitoring video fault detection method based on depth learning
CN106650919A (en) * 2016-12-23 2017-05-10 国家电网公司信息通信分公司 Information system fault diagnosis method and device based on convolutional neural network
CN106650932A (en) * 2016-12-23 2017-05-10 郑州云海信息技术有限公司 Intelligent fault classification method and device for data center monitoring system
CN106686377A (en) * 2016-12-30 2017-05-17 佳都新太科技股份有限公司 Algorithm for determining video key area based on deep neural network
CN106991668A (en) * 2017-03-09 2017-07-28 南京邮电大学 A kind of evaluation method of day net camera shooting picture

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Sun Qiliang: "Research and Implementation of Image Quality Diagnosis Technology for Integrated Video Surveillance on the Beijing-Shanghai High-Speed Railway", China Master's Theses Full-text Database (Electronic Journal), Information Science and Technology Series *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109063761A (en) * 2018-07-20 2018-12-21 北京旷视科技有限公司 Diffuser dropping detection method, device and electronic equipment
CN109063761B (en) * 2018-07-20 2020-11-03 北京旷视科技有限公司 Diffuser falling detection method and device and electronic equipment
CN110868586A (en) * 2019-11-08 2020-03-06 北京转转精神科技有限责任公司 Automatic detection method for defects of camera

Also Published As

Publication number Publication date
CN107396094B (en) 2019-02-22

Similar Documents

Publication Publication Date Title
CN107316007B (en) Monitoring image multi-class object detection and identification method based on deep learning
CN107169426B (en) Crowd emotion abnormality detection and positioning method based on deep neural network
CN110766011B (en) Contact net nut abnormity identification method based on deep multistage optimization
CN106790019A (en) The encryption method for recognizing flux and device of feature based self study
CN107194396A (en) Method for early warning is recognized based on the specific architecture against regulations in land resources video monitoring system
CN116256586B (en) Overheat detection method and device for power equipment, electronic equipment and storage medium
CN110346699A (en) Insulator arc-over information extracting method and device based on ultraviolet image processing technique
CN110516517A (en) A kind of target identification method based on multiple image, device and equipment
CN109949209A (en) A kind of rope detection and minimizing technology based on deep learning
CN106934319A (en) People's car objective classification method in monitor video based on convolutional neural networks
CN112396635A (en) Multi-target detection method based on multiple devices in complex environment
CN107396094A (en) Automatic detection method for single-camera damage in a multi-camera monitoring system
CN114298144A (en) Hydropower station equipment oil leakage identification method based on deep convolutional neural network
CN114119518A (en) Method and system for detecting temperature abnormal point in infrared image of current transformer
CN112861646A (en) Cascade detection method for oil unloading worker safety helmet in complex environment small target recognition scene
CN116416237A (en) Power transmission line defect detection method based on improved YOLOv5 and fuzzy image enhancement
CN114881665A (en) Method and system for identifying electricity stealing suspected user based on target identification algorithm
CN110866453A (en) Real-time crowd stable state identification method and device based on convolutional neural network
CN117113066B (en) Transmission line insulator defect detection method based on computer vision
CN116503398B (en) Insulator pollution flashover detection method and device, electronic equipment and storage medium
CN108761263A (en) A kind of fault diagnosis system based on evidence theory
CN115661117A (en) Contact net insulator visible light image detection method
CN116883717A (en) Platen checking method and device, computer readable storage medium and computer equipment
CN116664401A (en) Method and system for improving infrared image reconstruction capability of power equipment
CN109063854A (en) Intelligent O&M cloud platform system and its control method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant