
CN110516522B - Inspection method and system - Google Patents

Inspection method and system

Info

Publication number
CN110516522B
CN110516522B
Authority
CN
China
Prior art keywords
target image
image
shooting
area
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910463602.3A
Other languages
Chinese (zh)
Other versions
CN110516522A (en)
Inventor
杨宇
赵涛
赵旭亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Comservice Enrising Information Technology Co Ltd
Original Assignee
China Comservice Enrising Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Comservice Enrising Information Technology Co Ltd filed Critical China Comservice Enrising Information Technology Co Ltd
Priority to CN201910463602.3A
Publication of CN110516522A
Application granted
Publication of CN110516522B
Legal status: Active, Current
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Alarm Systems (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

An embodiment of the invention discloses an inspection method, comprising the following steps: determining a target image to be detected; extracting a first target image acquired at a first shooting point; performing feature extraction on the first target image and determining a first feature of the first target image; comparing the first feature with an original feature of the first target image; and, if the first feature is different from the original feature, sending first alarm information to a host. Because shooting is performed at the first shooting point, the whole detection process is simpler, the detection accuracy is improved, and the possibility of influence from the external environment is greatly reduced.

Description

Inspection method and system
Technical Field
The invention relates to the field of rail-type inspection robots, and in particular to an inspection method and an inspection system.
Background
With the growing demand for unattended operation, rail-mounted mobile robots have gradually replaced traditional manual monitoring and maintenance and are increasingly applied across industries. When a rail-mounted mobile robot monitors and maintains a plant, it generally photographs the operating state of the equipment in the plant with a camera, and the stability of the equipment's operation and the reading of its data are assessed by judging the equipment state captured in the pictures.
At present, however, the inspection system of a rail-type mobile robot generally has low accuracy during monitoring. For example, when detecting whether the door of an electrical cabinet is closed, the door is often photographed and checked from multiple angles, the door states captured at each angle are compared separately, and alarm information is sent to the workers as soon as any one comparison fails to match. In practice, however, environmental changes can alter the image captured at a certain angle relative to the preset image; for example, open doors on neighboring electrical cabinets can create the false appearance, at a specific shooting angle, that the monitored cabinet door is not closed. The inspection system of the rail-type mobile robot therefore makes wrong judgments, which in turn affects its use.
Disclosure of Invention
To solve the above technical problems, embodiments of the present invention provide an inspection method and system, addressing the problem in the prior art that monitoring a device from multiple angles yields inaccurate results, which in turn affects the workers' use of the equipment and the daily maintenance of the monitored site.
To this end, the technical solution of the embodiments of the present invention is realized as follows:
An embodiment of the present invention provides an inspection method, comprising the following steps:
determining a target image to be detected;
extracting a first target image acquired at a first shooting point;
performing feature extraction on the first target image, and determining a first feature of the first target image;
comparing the first feature with an original feature of the first target image;
and if the first feature is different from the original feature, sending first alarm information to a host.
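The five-step flow above can be sketched in Python. The mean-intensity feature and the tolerance below are illustrative assumptions only, since the patent does not fix a particular feature-extraction algorithm:

```python
def extract_feature(image):
    """Toy feature: mean pixel intensity of a 2-D grayscale image,
    a stand-in for the patent's unspecified feature extraction."""
    flat = [p for row in image for p in row]
    return sum(flat) / len(flat)

def inspect(first_target_image, original_feature, tolerance=1e-6):
    """Compare the first feature against the stored original feature;
    return alarm information when they differ, None otherwise."""
    first_feature = extract_feature(first_target_image)
    if abs(first_feature - original_feature) > tolerance:
        return "first alarm: feature mismatch at first shooting point"
    return None
```

In this sketch a matching image yields no alarm, while any feature deviation produces the first alarm information for the host.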
In an embodiment of the present invention, the determining a target image to be detected includes:
receiving a video sent by a terminal;
analyzing the video to obtain each frame of image in the video;
and determining a target image to be detected from all images contained in the video.
In an embodiment of the present invention, the method further comprises:
extracting an enlarged image of the first target image acquired at the first shooting point when the first feature is different from the original feature;
performing feature extraction on the enlarged image, and determining an enlarged feature of the enlarged image;
comparing the enlarged feature with an original enlarged feature of the first target image;
and if the enlarged feature is different from the original enlarged feature, sending first alarm information to a host.
In an embodiment of the present invention, the method further comprises:
acquiring user information, and judging whether the authority of the user information is matched with the authority of the target image to be detected;
stopping sending alarm information when the authority of the user information is matched with the authority of the target image to be detected;
and when the authority of the user information is not matched with the authority of the target image to be detected, continuously sending first alarm information to the host.
In an embodiment of the present invention, the method further comprises:
extracting a second target image acquired at a second shooting point;
performing feature extraction on the second target image, and determining second features of the second target image;
sending the second characteristic to a host.
In an embodiment of the present invention, the method further comprises:
extracting a first thermal image of the first target image acquired at a third shooting point;
dividing the first thermal image based on the distribution of differently colored areas in the thermal image to obtain target areas, and reading a temperature value for each target area;
comparing the temperature value of each target area with a preset temperature threshold, and selecting a first high-temperature area whose temperature value is higher than the preset temperature threshold;
and sending the temperature value of the first high-temperature area to the host.
In an embodiment of the present invention, after a set time, a second thermal image of the first target image acquired at the third shooting point is extracted;
the second thermal image is divided based on the distribution of differently colored areas in the thermal image to obtain target areas, and the temperature value of each target area is read;
a second high-temperature area in the second thermal image whose temperature value is higher than the preset temperature threshold is obtained;
and the first high-temperature area is compared with the second high-temperature area, their overlapping part is determined, and the temperature value of the overlapping part is sent to the host as third alarm information.
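The thermal-image steps above can be sketched as follows. A per-cell temperature grid stands in for the color-region segmentation of the thermal image, which is an illustrative simplification:

```python
def high_temp_cells(temp_grid, threshold):
    """Cells of a per-cell temperature grid whose value exceeds the
    preset threshold (stand-in for color-region segmentation)."""
    return {(r, c) for r, row in enumerate(temp_grid)
                   for c, t in enumerate(row) if t > threshold}

def persistent_hotspots(first_grid, second_grid, threshold):
    """Overlap of the first and second high-temperature regions; maps
    each overlapping cell to its latest temperature, i.e. the content
    of the third alarm information."""
    overlap = (high_temp_cells(first_grid, threshold)
               & high_temp_cells(second_grid, threshold))
    return {cell: second_grid[cell[0]][cell[1]] for cell in overlap}
```

Only areas hot in both scans survive the intersection, which filters out transient reflections that are hot in a single scan.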
In an embodiment of the present invention, the method further comprises:
after receiving a panoramic shooting instruction, carrying out video shooting along a preset track to obtain a panoramic video;
decomposing the panoramic video into a plurality of image frames;
dividing the preset track into a plurality of shooting sections, and extracting an image frame corresponding to each shooting section;
selecting at least one picture in each shooting section as a spliced picture;
and splicing the spliced pictures to generate at least one panoramic image.
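The section-splitting step above can be sketched as follows. Picking the middle frame of each shooting section is an illustrative choice, since the text only requires at least one picture per section; the actual stitching (e.g. with OpenCV's high-level Stitcher) is omitted:

```python
def pick_stitch_frames(frames, num_sections):
    """Divide the decomposed frame sequence into shooting sections and
    pick one representative frame (the middle one) per section."""
    n = len(frames)
    size = n // num_sections
    picks = []
    for s in range(num_sections):
        start = s * size
        # the last section absorbs any remainder frames
        end = n if s == num_sections - 1 else start + size
        picks.append(frames[(start + end) // 2])
    return picks
```

The returned frames are the "spliced pictures" that would then be fed to a panorama-stitching routine.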
The embodiment of the invention also discloses an inspection system based on the rail type inspection robot, which comprises:
a confirmation module for determining a target image to be detected;
a first extraction unit configured to extract a first target image acquired at a first photographing point;
the second extraction unit is used for extracting the features of the first target image and determining the first features of the first target image;
the first comparison unit is used for comparing the first feature with the original feature of the first target image;
and the first sending unit is used for sending first alarm information to the host if the first feature is different from the original feature.
In an embodiment of the present invention, the system further includes:
the receiving unit is used for receiving the video sent by the terminal;
the analysis unit is used for analyzing the video to obtain each frame of image in the video;
and the determining unit is used for determining a target image to be detected from all images contained in the video.
In an embodiment of the present invention, the system further includes:
a third extraction unit configured to extract an enlarged image of the first target image acquired at the first shooting point;
a fourth extraction unit, configured to perform feature extraction on the enlarged image and determine an enlarged feature of the enlarged image;
the second comparison unit is used for comparing the enlarged feature with the original enlarged feature of the first target image;
and the second sending unit is used for sending first alarm information to the host if the enlarged feature is different from the original enlarged feature.
In an embodiment of the present invention, the system further includes:
an acquisition unit configured to acquire user information;
the judging unit, configured to judge whether the authority of the user information matches the authority of the target image to be detected;
and a third sending unit, configured to stop sending alarm information when the authority of the user information matches the authority of the target image to be detected, and to continue sending first alarm information to the host when it does not.
In an embodiment of the present invention, the system further includes:
a fourth extraction unit, configured to extract a first thermal image of the first target image acquired at a third shooting point;
the segmentation and reading unit, used for dividing the first thermal image based on the distribution of differently colored areas in the thermal image to obtain target areas, and for reading the temperature value of each target area;
the comparison and selection unit, used for comparing the temperature value of each target area with a preset temperature threshold and selecting a first high-temperature area whose temperature value is higher than the preset temperature threshold;
and the third sending unit is used for sending the temperature value of the first high-temperature area to the host.
In an embodiment of the present invention, the system further includes:
the acquisition unit is used for carrying out video shooting along a preset track after receiving a panoramic shooting instruction to obtain a panoramic video;
a decomposition unit for decomposing the panoramic video into a plurality of image frames;
the fifth extraction module is used for dividing the preset track into a plurality of shooting sections and extracting the image frame corresponding to each shooting section;
the selecting unit is used for selecting at least one picture in each shooting section as a spliced picture;
and the generating module is used for splicing the spliced pictures to generate at least one panoramic image.
An embodiment of the invention discloses an inspection method, comprising: determining a target image to be detected; extracting a first target image acquired at a first shooting point; performing feature extraction on the first target image and determining a first feature of the first target image; comparing the first feature with an original feature of the first target image; and, if the first feature is different from the original feature, sending first alarm information to a host. An embodiment of the invention also discloses an inspection system based on a rail-type inspection robot, comprising: a confirmation module for determining a target image to be detected; a first extraction unit configured to extract a first target image acquired at a first shooting point; a second extraction unit for extracting features of the first target image and determining its first feature; a first comparison unit for comparing the first feature with the original feature of the first target image; and a first sending unit for sending first alarm information to the host if the first feature is different from the original feature. In the method, target images are shot, the optimal shooting point is determined from the captured images, the first target image is shot at the first shooting point, features of the part to be detected are extracted, the image of the first feature of the part to be detected is determined, and that image is compared with a preset image of the original feature; if the comparison succeeds, the rail-type inspection robot continues its operation, and if it fails, first alarm information is sent to the host.
Because shooting is performed at the first shooting point, the whole detection process is simpler, the detection accuracy is improved, and the possibility of influence from the external environment is greatly reduced.
Drawings
Fig. 1 is a schematic flow chart illustrating an implementation of a track type inspection robot inspection detection method according to an embodiment of the present invention;
fig. 2 is a schematic view of an implementation flow of the inspection detection method for the rail-type inspection robot according to the third embodiment of the present invention;
fig. 3 is a schematic view of an implementation flow of the inspection detection method for the rail-type inspection robot according to the fourth embodiment of the present invention;
fig. 4 is a schematic view of an implementation flow of the inspection detection method for the rail-type inspection robot according to the fifth embodiment of the present invention;
fig. 5 is a schematic view of an implementation flow of the inspection detection method for the rail-type inspection robot according to the sixth embodiment of the present invention;
fig. 6 is a block diagram of an inspection system according to a seventh embodiment of the present invention;
fig. 7 is a schematic flow chart of an inspection method according to the seventh embodiment of the present invention;
fig. 8 is a schematic structural diagram of the routing inspection system according to the eighth embodiment of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention.
Example one
In order to solve the technical problems in the background art, the embodiment of the invention provides an inspection method which is applied to an inspection detection system device of a rail type inspection robot. In practical application, the system device for the inspection and detection of the rail-type inspection robot can be a server with large data processing capacity, a computer client and the like. Fig. 1 is a schematic view of an implementation flow of a track type inspection robot inspection detection method according to an embodiment of the present invention, as shown in fig. 1, the method includes:
step S101: determining a target image to be detected;
here, first, a video transmitted by a terminal is received; secondly, analyzing the video to obtain each frame of image in the video; finally, the target image to be detected is determined from all the images contained in the video.
The terminal is a device with a shooting function, and includes but is not limited to a camera and the like.
A target image to be detected is determined from the images contained in the video by intercepting the video captured by the rail-type inspection robot's lens between two position points on the track. Specifically, the first position point passed by the rail-type inspection robot is taken as a starting point X and the second position point as an end point Y, and an interval time Z is determined, where Y > X > 1 and X, Y, and Z are natural numbers; one frame is then extracted every Z frames from the Xth frame to the Yth frame of the images contained in the video, and the extracted images are determined as the target images to be detected.
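The sampling rule just described can be sketched directly; 1-based frame numbering follows the text:

```python
def sample_target_images(frames, x, y, z):
    """Extract one frame every Z frames, from frame X through frame Y
    (1-based, as in the text); the result is the set of candidate
    target images to be detected."""
    return [frames[i - 1] for i in range(x, y + 1, z)]
```

For example, with frames numbered 1 through 10, X = 2, Y = 8 and Z = 3, frames 2, 5 and 8 are extracted.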
Step S102: extracting a first target image acquired at a first shooting point;
here, the first shooting point is a position point on the track between the first position point and the second position point on the track, and is an optimal shooting point for observing whether the cabinet door is closed, and the first shooting point may be one position point, or a plurality of position points, or a section of track or a plurality of sections of tracks.
When extracting the first target image acquired at the first shooting point, the displacement time T1 from the first position point to the first shooting point can be calculated, and the first target image at the first shooting point is then determined according to the interval time Z. For example, when the interval time Z is 1 second and the displacement time T1 is 5 seconds, the first target image acquired at the first shooting point is the 5th extracted image; when Z is 1 second and T1 is 5 to 7 seconds, the first target images acquired at the first shooting point are the 5th, 6th and 7th extracted images, and only one of them needs to be selected at random.
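The mapping from displacement time to image index in the worked example can be sketched as follows (integer division by the interval Z is an assumption consistent with the example, not a formula stated in the text):

```python
def frame_index_at(displacement_time, interval_z):
    """1-based index of the extracted image captured at a shooting point
    reached after `displacement_time` seconds, given interval time Z."""
    return int(displacement_time // interval_z)

def frame_indices_between(t_start, t_end, interval_z):
    """All candidate indices when the shooting point spans a time range;
    per the text, one of them may then be chosen at random."""
    return list(range(frame_index_at(t_start, interval_z),
                      frame_index_at(t_end, interval_z) + 1))
```

With Z = 1 second, a displacement time of 5 seconds maps to the 5th image, and a range of 5 to 7 seconds maps to images 5, 6 and 7, matching the example above.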
Step S103: performing feature extraction on the first target image, and determining a first feature of the first target image;
here, feature extraction is performed on the first target image through an image feature detection algorithm to determine a first feature in the image, where there may be one or multiple first features. The first characteristic includes, but is not limited to, color, shape, etc.
Step S104: comparing the first feature with an original feature of the first target image;
here, the original feature of the first target image refers to an original feature that is extracted from the feature of the first target image at the first photographing point when the apparatus is in a normal operation or use state (the cabinet door is closed). Comparing the original features of the first target image with the first features, and judging whether the original features are matched with the first features according to a comparison algorithm of feature similarity.
For image comparison, technicians can use a perceptual hash algorithm to measure image similarity. With the mainstream framework OpenCV, one available method is PSNR (peak signal-to-noise ratio), which judges similarity based on the error between corresponding pixels. A perceptual hash algorithm may also be used: it generates a "fingerprint" string for each image, and the fingerprints of different images are then compared; the closer the fingerprints, the more similar the images.
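The fingerprint idea can be sketched with a dependency-free average hash; this is a simplified stand-in for the perceptual hashing described above, not the patent's exact algorithm:

```python
def average_hash(pixels):
    """Average hash: 1 where a pixel exceeds the image mean. The
    resulting bit tuple is the image's 'fingerprint'."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming_distance(h1, h2):
    """Smaller distance between fingerprints means more similar images."""
    return sum(a != b for a, b in zip(h1, h2))
```

An identical image yields distance 0; a threshold on the distance then decides whether the first feature matches the original feature.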
Step S105: and if the first characteristic is different from the original characteristic, sending first alarm information to a host.
Here, the first alarm information is sent to the host when the first feature does not match the original feature; that is, the sending of the first alarm information occurs when the first feature is different from the original feature.
The host is an intelligent terminal with a visual interface, and comprises but is not limited to a smart phone, a tablet computer, a notebook computer, a desktop computer and the like. After the host receives the first alarm information, a worker can be prompted to check the first alarm information.
In the inspection method provided by the embodiment of the invention, a target image to be detected is determined; a first target image acquired at a first shooting point is extracted; feature extraction is performed on the first target image to determine its first feature; the first feature is compared with an original feature of the first target image; and if the first feature is different from the original feature, first alarm information is sent to a host to alert a worker. Because shooting is performed at the first shooting point, which is the optimal shooting point, the whole detection process is simpler, the detection accuracy is improved, and the possibility of influence from the external environment is greatly reduced.
Example two
In order to further improve the accuracy of the judgment of the inspection system, the embodiment of the invention provides an inspection method on the basis of the first embodiment, and the inspection method is applied to the inspection detection system device of the rail type inspection robot. In practical application, the system device for the inspection and detection of the rail-type inspection robot can be a server with large data processing capacity, a computer client and the like. Fig. 1 is a schematic view of an implementation flow of a track type inspection robot inspection detection method according to an embodiment of the present invention, and as shown in fig. 1, the method further includes:
step S201: extracting an enlarged image of a first target image acquired at a first photographing point when the first feature is different from the original feature;
here, when the first feature is different from the original feature, the step S201 further includes driving the terminal to photograph at the first photographing point on the track. And during shooting, adjusting the focal length of the terminal and enabling the focal point of the lens to be aligned to the target to be detected, so as to obtain an amplified video of the first target image acquired at the first shooting point. And analyzing the amplified video to obtain each frame of amplified image in the amplified video, and extracting one frame of image as the amplified image of the first target image acquired at the first shooting point.
Step S202: performing feature extraction on the amplified image, and determining the amplification feature of the amplified image;
step S203: comparing the amplified feature with an original amplified feature of the first target image;
step S204: and if the amplification characteristic is different from the original amplification characteristic, sending first alarm information to a host.
In the inspection method provided by the embodiment of the invention, when the first characteristic is different from the original characteristic, an enlarged image of a first target image acquired at a first shooting point is extracted; performing feature extraction on the amplified image, and determining the amplification feature of the amplified image; comparing the amplified feature with an original amplified feature of the first target image; and if the amplification characteristic is different from the original amplification characteristic, sending first alarm information to a host computer, and then giving an alarm to a worker. Therefore, the detection accuracy of the inspection system is further improved by re-detecting the detected abnormal position and carrying out feature amplification shooting on the abnormal position.
Example three
When workers enter the plant to maintain an electrical cabinet and its door is opened, a false alarm would otherwise be triggered because the inspection robot detects that the cabinet door is not closed. To avoid this and further improve the accuracy of the inspection system's judgment, the embodiment of the invention provides an inspection method applied to the inspection detection system of the rail-type inspection robot. In practical applications, this system can be a server with large data-processing capacity, a computer client, or the like. Fig. 2 is a schematic view of an implementation flow of the rail-type inspection robot inspection detection method according to the third embodiment of the present invention; as shown in fig. 2, the method further includes:
step S301: acquiring user information, and judging whether the authority of the user information is matched with the authority of the target image to be detected;
the method for acquiring the user information can be that the user information is transmitted to the system device for the inspection and detection of the rail type inspection robot by swiping a card through an entrance guard; the method for acquiring the user information can also be used for transmitting the user information to the system device for the inspection and detection of the rail type inspection robot in a mode of inputting the job number by the user.
When the system for the inspection detection of the rail-type inspection robot acquires the user information, the authority of the user information is judged first, and then whether the authority of the user information is matched with the authority of the target image to be detected is judged.
Step S302 a: stopping sending alarm information when the authority of the user information is matched with the authority of the target image to be detected;
here, when the authority of the user information matches the authority of the target image to be detected, it indicates that the user is operating the target electric appliance cabinet to be detected, and thus, the alarm of the electric appliance cabinet to be detected is released.
Step S302 b: and when the authority of the user information is not matched with the authority of the target image to be detected, continuously sending first alarm information to the host.
Here, when the authority of the user information does not match the authority of the target image to be detected, it indicates that the user does not operate the target electrical cabinet to be detected, and thus, the detection of the electrical cabinet to be detected is maintained.
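The decision in steps S302a and S302b reduces to a permission comparison; the string identifiers below are hypothetical:

```python
def handle_alarm(user_permission, target_permission):
    """Suppress the alarm only when the user's permission matches the
    permission of the target image to be detected; otherwise keep
    sending the first alarm information to the host."""
    if user_permission == target_permission:
        return "stop alarm"
    return "continue first alarm to host"
```

A matching permission indicates an authorized worker is operating the cabinet, so the alarm for that cabinet is released.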
In the inspection method provided by the embodiment of the invention, user information is acquired and it is judged whether the authority of the user information matches the authority of the target image to be detected; when they match, the sending of alarm information is stopped; when they do not match, first alarm information continues to be sent to the host. By checking the user's authority when judging the usage of equipment in the plant, wrong judgments about equipment that is under maintenance are avoided, improving the accuracy of inspection.
Example four
In the inspection of a factory building, the inspection system of the rail-type inspection robot is required not only to check the closing state of each electrical cabinet door but also to provide other functions. In practical applications, the inspection system of the rail-type inspection robot may be a server with large data-processing capacity, a computer client, or the like. Fig. 3 is a schematic flow chart of the inspection method of the rail-type inspection robot according to the fourth embodiment of the present invention; as shown in Fig. 3, the method further includes:
step S401: extracting a second target image acquired at a second shooting point;
Here, the second target image may be the same as or different from the first target image; when different, the second target image includes, but is not limited to, an instrument dial image, a two-dimensional code image on the device, and an image of the display state of an indicator lamp on the device.
When extracting the second target image acquired at the second shooting point, the displacement time T2 from the first position point to the second shooting point can be calculated, and the second target image at the second shooting point is then determined according to the interval time Z. The second shooting point is the optimal position on the track for shooting the second target image; for example, when the second target image is an instrument dial image, the second target image acquired at the second shooting point is a front view of the instrument dial. The position of the second shooting point can be chosen as the best shooting point for the second target image on the track when the rail-type inspection robot is installed. The first and second shooting points may be located at the same or different positions.
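One plausible way to map the displacement time T2 onto a sampled frame, assuming frames are extracted every Z seconds at a constant travel speed (the function name and the rounding choice are illustrative, not from the patent):

```python
def second_shot_frame_index(t2_seconds, interval_z_seconds):
    """Index of the extracted frame closest to the second shooting
    point, given the travel time T2 from the first position point
    and the sampling interval Z."""
    return round(t2_seconds / interval_z_seconds)
```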
Step S402: performing feature extraction on the second target image, and determining second features of the second target image;
Here, feature extraction is performed on the second target image through an image feature detection algorithm to determine the second features in the image, of which there may be one or more. The second features include, but are not limited to, an image of the instrument dial area, an image of the two-dimensional code area on the device, and an image of the indicator-lamp display-state area on the device.
Step S403: sending the second characteristic to a host.
Here, the second feature is sent to the host; that is, the images contained in the second feature are delivered to the user so that the user can read them conveniently.
EXAMPLE five
In routine inspection, a running electrical cabinet may catch fire or develop other problems because the operating temperature of the equipment is too high; therefore, in addition to checking the closing state of each cabinet door, the operating temperature of the equipment must also be monitored. In practical applications, the inspection system of the rail-type inspection robot may be a server with large data-processing capacity, a computer client, or the like. Fig. 4 is a schematic flow chart of the inspection method of the rail-type inspection robot according to the fifth embodiment of the present invention; as shown in Fig. 4, the method further includes:
step S501: extracting a first thermal imaging image of the first target image acquired at a third shooting point;
Here, the third shooting point may be located at the same position on the track as the first and second shooting points, or at another position on the track; the third shooting point may be the optimal position for shooting the first thermal imaging map of the first target image.
To acquire the first thermal imaging map of the first target image at the third shooting point, a thermal imager is used to shoot the first target image; a video segment containing the first target image is then selected from the resulting video, and the first thermal imaging map of the first target image at the third shooting point is determined and extracted.
Step S502: dividing the first thermal imaging graph based on different color area distributions on the thermal imaging graph to obtain each target area, and reading a temperature value of each target area;
Here, different color regions in the thermal imaging map represent different temperature values; therefore, target regions of different temperatures in the first thermal imaging map are divided according to the distribution of color regions, and the temperature of each target region is read.
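A toy illustration of this division, assuming the thermal map has already been quantized to palette indices and that a lookup table maps each palette colour to a representative temperature (both are assumptions; the patent does not define the colour-to-temperature mapping):

```python
import numpy as np

def read_region_temperatures(palette_map, temp_of_palette):
    """Split the thermal map into target areas, one per palette
    colour, and read one temperature value per area."""
    return {int(c): temp_of_palette[int(c)]
            for c in np.unique(palette_map)}
```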
Step S503: comparing the temperature value of each target area with the preset temperature threshold value respectively, and selecting a first high-temperature area with the temperature value higher than the preset temperature threshold value;
Here, the read temperature is compared with the preset temperature threshold, where the preset temperature threshold is the critical operating-temperature value for the components in the electrical cabinet: when the operating temperature of a component exceeds this critical value, a fire may occur. Therefore, the first high-temperature area selected after comparison, whose temperature value is higher than the preset temperature threshold, is a potential fire location in the electrical cabinet.
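The selection in step S503 then reduces to a filter over the per-area readings. A sketch with hypothetical names (the area-to-temperature dictionary is the assumed output of the division step above):

```python
def select_high_temp_areas(area_temps, threshold):
    """Keep only the target areas whose temperature exceeds the
    preset threshold; these are the potential fire locations."""
    return {area: t for area, t in area_temps.items() if t > threshold}
```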
Specifically, the target areas of various temperatures on the first thermal imaging map can be divided, read, compared, and selected using SmartView software, or using a FLIR One Pro thermal imager.
Step S504: and sending the temperature value of the first high-temperature area to the host.
Here, the temperature value of the first high-temperature area is sent to the host to alert the user that the electrical cabinet is operating abnormally.
Step S505: after the set time, extracting a second thermal imaging image of the first target image acquired at a third shooting point;
Here, the set time is a certain interval after the host acquires the temperature value of the first high-temperature area, for example 3 minutes. After the set time, a second thermal imaging map of the first target image is shot again at the third shooting point; the second thermal imaging map differs from the first thermal imaging map only in shooting time.
Step S506: dividing the second thermal imaging graph based on different color area distributions on the thermal imaging graph to obtain each target area, and reading the temperature value of each target area;
step S507: obtaining a second high-temperature area of which the temperature value in the second thermal imaging graph is higher than the preset temperature threshold;
step S508: comparing the first high-temperature area with the second high-temperature area, and judging the overlapping part of the first high-temperature area and the second high-temperature area;
Here, the results of comparing the first high-temperature area with the second high-temperature area include:
first, the first and second high-temperature areas partially intersect;
second, the first and second high-temperature areas completely overlap;
third, the first and second high-temperature areas do not overlap at all.
step S509: and sending the temperature value of the overlapped part to the host as third alarm information.
Here, when the operating temperature of a component in the electrical cabinet is still higher than the critical operating-temperature value, it can be determined that the component is in an abnormal operating state and a fire may break out due to burnout; therefore, the overlapped portion corresponding to the first and second comparison results of step S508 is sent to the host as the third alarm information. The third alarm information may be identical to the first and second alarm information, serving as alarm information for an abnormal state, or may differ from them and serve as fire early-warning information.
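The persistence check of steps S507 to S509 can be sketched with boolean masks over the two thermal maps. This is a simplification: real regions would come from the colour segmentation described above, and the alarm payload format is an assumption.

```python
import numpy as np

def third_alarm_temperature(first_hot, second_hot, second_temps):
    """Intersect the two high-temperature masks; if a hot spot
    persists across both shots, return the hottest overlapped
    temperature as the third-alarm payload, otherwise None."""
    overlap = first_hot & second_hot
    if not overlap.any():
        return None
    return float(second_temps[overlap].max())
```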
In the inspection method provided by this embodiment of the invention, a first thermal imaging map of the first target image acquired at a third shooting point is extracted; the first thermal imaging map is divided into target areas based on the distribution of different color areas, and the temperature value of each target area is read; the temperature value of each target area is compared with the preset temperature threshold, and a first high-temperature area whose temperature value is higher than the preset temperature threshold is selected; the temperature value of the first high-temperature area is sent to the host; after the set time, a second thermal imaging map of the first target image acquired at the third shooting point is extracted; the second thermal imaging map is divided into target areas in the same way, and the temperature value of each target area is read; a second high-temperature area whose temperature value in the second thermal imaging map is higher than the preset temperature threshold is obtained; the first and second high-temperature areas are compared, their overlapping portion is determined, and the temperature value of the overlapping portion is sent to the host as the third alarm information. In this way, by detecting component temperatures in the electrical cabinet, the abnormal operating state of a component can be identified and an alarm raised, preventing fires; at the same time, the second detection improves the accuracy of the judgment.
EXAMPLE six
When the inspection system is in use, the inspection environment also needs to be captured and understood from all angles; therefore, on the basis of the first embodiment, this embodiment of the invention provides an inspection method applied to the inspection system of the rail-type inspection robot. In practical applications, the inspection system of the rail-type inspection robot may be a server with large data-processing capacity, a computer client, or the like. Fig. 5 is a schematic flow chart of the inspection method of the rail-type inspection robot according to the sixth embodiment of the present invention; as shown in Fig. 5, the method further includes:
step S601: after receiving a panoramic shooting instruction, carrying out video shooting along a preset track to obtain a panoramic video;
Here, a panoramic video is acquired by shooting along the track, which may be arranged in a ring around the factory building.
Step S602: decomposing the panoramic video into a plurality of image frames;
Here, taking the point where shooting starts as the start point and the point where the robot returns to it along the circular track as the end point, an interval time M is determined, and one frame is extracted every interval M to obtain a plurality of image frames. The interval time M may be the same as the interval time Z.
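Under the assumption of a constant frame rate, the frames taken every interval M can be located by index. This helper is illustrative only; the patent does not specify the decomposition mechanism:

```python
def sampled_frame_indices(total_frames, fps, interval_m_seconds):
    """Indices of the image frames extracted from the panoramic
    video, one every M seconds of playback."""
    step = max(1, round(fps * interval_m_seconds))
    return list(range(0, total_frames, step))
```

In practice the frames themselves would be pulled with a decoder such as OpenCV's `VideoCapture` or FFmpeg.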
Step S603: dividing the preset track into a plurality of shooting sections, and extracting an image frame corresponding to each shooting section;
Here, the preset track may be divided into shooting segments according to the different target images to be shot; for example, if there are 20 electrical cabinets in a factory building, the preset track may be divided into 20 shooting segments, one per electrical cabinet.
Alternatively, the preset track may be divided into shooting segments of equal length according to the total length of the track.
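The equal-length variant is straightforward. A sketch returning (start, end) positions along the track; the metre units and the pair representation are assumptions for illustration:

```python
def equal_shooting_segments(track_length_m, n_segments):
    """Divide the preset track into n equal-length shooting
    segments, each given as a (start, end) pair in metres."""
    seg = track_length_m / n_segments
    return [(i * seg, (i + 1) * seg) for i in range(n_segments)]
```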
Step S604: selecting at least one picture in each shooting section as a spliced picture;
Here, when selecting the stitched picture for each shooting segment, a picture whose definition is greater than a preset threshold must be chosen; preferably, at least the picture with the highest definition in the segment is selected.
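The patent does not name a definition (sharpness) metric, so the variance-of-gradient score below is only an assumption; gradient energy is one common stand-in for sharpness:

```python
import numpy as np

def pick_sharpest(frames):
    """Choose the stitching candidate with the highest gradient
    variance, a simple proxy for image definition."""
    def sharpness(img):
        gy, gx = np.gradient(img.astype(float))
        return float((gx ** 2 + gy ** 2).var())
    return max(frames, key=sharpness)
```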
Step S605: and splicing the spliced pictures to generate at least one panoramic image.
Here, the stitching condition is that overlapping regions exist between the stitched pictures of adjacent shooting segments.
Here, the stitched pictures may be stitched into a panoramic image using PTGui (Panorama Tools Graphical User Interface) software, or using OpenCV.
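In practice the stitching would be done with PTGui or OpenCV's `Stitcher`. Purely to illustrate the overlap condition, here is a naive sketch that joins two neighbouring frames sharing a known number of overlapping columns; the fixed, known overlap is an assumption, since real stitchers estimate the alignment from matched features:

```python
import numpy as np

def naive_stitch(left, right, overlap_cols):
    """Concatenate two neighbouring frames, dropping the columns
    of `right` that duplicate the end of `left`."""
    return np.hstack([left, right[:, overlap_cols:]])
```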
In the inspection method provided by this embodiment of the invention, after a panoramic shooting instruction is received, video is shot along a preset track to obtain a panoramic video; the panoramic video is decomposed into a plurality of image frames; the preset track is divided into a plurality of shooting segments and the image frame corresponding to each segment is extracted; at least one picture in each shooting segment is selected as a stitched picture; and the stitched pictures are stitched to generate at least one panoramic image. In this way, while the inspection system shoots, the conditions of the inspection environment can also be captured and a panoramic image produced for the user to view.
EXAMPLE seven
This embodiment of the invention provides an inspection method in which a rail-type inspection robot is used in a factory building to monitor the operating condition and usage state of equipment. The shot video stream is transmitted over the 4th-Generation mobile communication (4G) network to a big data platform; the platform uses big-data real-time analysis and machine-learning techniques to process and analyze the video stream, extracts image features such as color and shape, analyzes the images, and thereby remotely detects the operating state of field equipment. Fig. 6 is a schematic diagram of the composition of the inspection system; as shown in Fig. 6, the inspection system includes: a rail-type inspection robot, a big data platform, and a host.
The rail-type inspection robot has a built-in operating system, can install an Application (App) that performs data interaction with the platform, and connects to an operator 4G+ network via Wireless Fidelity (WiFi) or a mobile hotspot. During inspection, the App transmits the shot field video to the big data platform via the Real Time Streaming Protocol (RTSP).
The big data platform is responsible for analyzing the RTSP video stream, extracting image features, comparing the extracted features, and sending alarm information when the comparison result is abnormal. The hardware uses a distributed server cluster and the operating system uses CentOS Linux 6.5. The big data platform further comprises an image acquisition module, an image recognition module, and an image coding and decoding module. Wherein:
and the image acquisition module is used for analyzing the video and acquiring a target image to be detected in the video.
In an actual implementation, Apache Kafka can be used for image acquisition; a real-time computing model routes the acquired target images via Apache Storm or Apache Spark to Hive (a data-warehouse tool) or the highly fault-tolerant Hadoop Distributed File System (HDFS), and an image-analysis algorithm is then called to detect the image frames of the target image.
In addition, real-time image acquisition can be performed with CCD or CMOS image sensors, or with an FPGA (Field-Programmable Gate Array) chip.
And the image recognition module is used for analyzing the image characteristics of the target image and recognizing and comparing the image characteristics with the original characteristics.
And the image coding and decoding module is used for coding and decoding the image. The streaming media Server uses RTSP Server, and the video codec uses FFmpeg.
And the host is used for receiving the alarm information sent by the data platform and displaying the alarm information on a visual interface of the client for an auditor to check. The host is an intelligent terminal with a visual interface, and comprises but is not limited to a smart phone, a tablet computer, a notebook computer, a desktop computer and the like.
The detection method provided by this embodiment of the invention mainly comprises the following: first, the rail-type inspection robot, as a mobile terminal, provides the network environment, with the intelligent mobile terminal, the mobile hotspot or operator 4G+ network, and the back-end big data platform connected to one another; second, the rail-type inspection robot collects the inspection video, encodes the video stream, and transmits it to the back-end big data platform via RTSP; third, the big data platform analyzes and processes the collected video stream in real time, calls an image-processing algorithm, marks and compares image features, and judges the condition of the collected images. Fig. 7 is a schematic flow chart of an inspection method according to the seventh embodiment of the present invention; as shown in Fig. 7, the method includes:
step S701: and starting the track type inspection robot, entering inspection application, and starting to shoot inspection scene videos along the track.
Step S702: the track type inspection robot sends the video to the big data platform in a streaming media RTSP form in real time.
Step S703: and the big data platform analyzes the streaming media video through the image acquisition module and processes each frame of image in parallel.
Here, in practical applications the Apache Kafka component may be used to extract images, and an extracted image may serve as the target image; Spark may be used as the computing model when extracting the target image.
Step S704: and the big data platform calls an image feature detection and target recognition algorithm through an image recognition module to detect the target image, extracts image features and then compares the features of the image with the initial features.
Here, the features of the image include, but are not limited to, the color and shape of the image. In a specific implementation, the code may use the Open Source Computer Vision Library (OpenCV) open-source component, and the target recognition algorithm may be, but is not limited to, Local Face Analysis (LFA), the eigenface method based on Principal Component Analysis (PCA), neural networks, or Fourier shape descriptors.
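As a toy version of the colour-feature comparison, a normalized grey-level histogram can stand in for the extracted feature; the real system would use OpenCV descriptors, and the histogram feature and L1 tolerance here are assumptions:

```python
import numpy as np

def color_feature(img, bins=8):
    """Normalized grey-level histogram as a crude colour feature."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    return hist / hist.sum()

def features_match(feature, initial_feature, tol=0.1):
    """Compare the extracted feature with the initial feature;
    a large L1 distance would trigger the alarm path (step S705)."""
    return bool(np.abs(feature - initial_feature).sum() <= tol)
```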
Step S705: and if the comparison result is not matched, the big data platform sends alarm information to the user of the host.
The embodiments of the present invention provide an inspection method and system that meet the accuracy requirements of the rail-type inspection robot during inspection. The real-time anomaly-detection technique in the embodiments uses big-data real-time processing and calls video-analysis and image-recognition algorithms, so that field anomalies can be detected remotely and automatically in real time, reducing the review workload of back-end auditors and lowering costs.
Example eight
An embodiment of the present invention provides an inspection system, fig. 8 is a schematic diagram of a configuration of an inspection system according to an eighth embodiment of the present invention, and as shown in fig. 8, the inspection system includes: a confirmation module 101, a first extraction unit 111, a second extraction unit 112, a first comparison unit 121, and a first sending unit 131, wherein:
a confirmation module 101 for determining a target image to be detected; a first extraction unit 111 for extracting a first target image acquired at a first photographing point; a second extraction unit 112, configured to perform feature extraction on the first target image, and determine a first feature of the first target image; a first comparing unit 121, configured to compare the first feature with an original feature of the first target image; a first sending unit 131, configured to send first alarm information to the host if the first characteristic is different from the original characteristic.
Further, the system further comprises: the receiving unit is used for receiving the video sent by the terminal; the analysis unit is used for analyzing the video to obtain each frame of image in the video; and the determining unit is used for determining a target image to be detected from all images contained in the video.
Further, the system further comprises: a third extraction unit configured to extract an enlarged image of the first target image acquired at the first photographing point; a fourth extraction unit, configured to perform feature extraction on the enlarged image, and determine an enlarged feature of the enlarged image; the second comparison unit is used for comparing the amplification characteristic with the original amplification characteristic of the first target image; and the second sending unit is used for sending first alarm information to the host if the amplification characteristic is different from the original amplification characteristic.
Further, the system further comprises: an acquisition unit configured to acquire user information; the judging unit is used for judging whether the authority of the user information is matched with the authority of the target image to be detected; a third transmitting unit configured to stop transmitting alarm information when the authority of the user information matches the authority of the target image to be detected; and when the authority of the user information is not matched with the authority of the target image to be detected, continuously sending first alarm information to the host.
Further, the system further comprises: a fourth extraction unit, configured to extract a first thermal imaging map of the first target image acquired at a third shooting point; the segmentation reading unit is used for segmenting the first thermal imaging graph based on different color area distributions on the thermal imaging graph to obtain each target area, and reading the temperature value of each target area; the comparison and selection unit is used for comparing the temperature value of each target area with the preset temperature threshold value respectively and selecting a first high-temperature area with the temperature value higher than the preset temperature threshold value; and the third sending unit is used for sending the temperature value of the first high-temperature area to the host.
Further, the system further comprises: the acquisition unit is used for carrying out video shooting along a preset track after receiving a panoramic shooting instruction to obtain a panoramic video; a decomposition unit for decomposing the panoramic video into a plurality of image frames; the fifth extraction unit is used for dividing the preset track into a plurality of shooting sections and extracting the image frame corresponding to each shooting section; the selecting unit is used for selecting at least one picture in each shooting section as a spliced picture; and the generating module is used for splicing the spliced pictures to generate at least one panoramic image.
Here, it should be noted that: the above description of the embodiment of the inspection system is similar to the above description of the embodiment of the method, and has similar beneficial effects to the embodiment of the method, and therefore, the description is omitted. For technical details that are not disclosed in the embodiment of the inspection system of the present invention, please refer to the description of the embodiment of the method of the present invention for understanding, and therefore, for brevity, will not be described again.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in various embodiments of the present invention, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention. The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
It is noted that, in this document, the terms "comprises" and "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
In the several embodiments provided in the present application, it should be understood that the disclosed system and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. Additionally, the various elements shown or discussed as being coupled or directly coupled or communicatively coupled to each other may be coupled or communicatively coupled indirectly, via some interface, device or element, whether electrically, mechanically, or otherwise.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network elements; some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may be treated separately as one unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that all or part of the steps for implementing the method embodiments may be accomplished by hardware related to program instructions; the program may be stored in a computer-readable storage medium and, when executed, performs the steps of the method embodiments; and the aforementioned storage medium includes various media that can store program code, such as removable storage devices, read-only memories, and magnetic or optical disks.
Alternatively, the integrated unit of the present invention may be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as a separate product. Based on such understanding, the technical solutions of the embodiments of the present invention may be essentially implemented or a part contributing to the prior art may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present invention. And the aforementioned storage medium includes: a removable storage device, a ROM, a magnetic or optical disk, or other various media that can store program code.
The above description is only an embodiment of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the present invention, and all such changes or substitutions are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (11)

1. A method of routing inspection, the method comprising:
determining a target image to be detected;
extracting a first target image acquired at a first shooting point;
performing feature extraction on the first target image, and determining a first feature of the first target image;
comparing the first feature with an original feature of the first target image;
if the first feature is different from the original feature, sending first alarm information to a host;
wherein the first shooting point is a position on a track for shooting the closed state of a cabinet door body, and is arranged between a first position point and a second position point on the track;
the method further comprising:
extracting a first thermal imaging map of the first target image acquired at a third shooting point;
segmenting the first thermal imaging map into target areas based on the distribution of different color areas on the map, and reading a temperature value of each target area;
comparing the temperature value of each target area with a preset temperature threshold, and selecting a first high-temperature area whose temperature value is higher than the preset temperature threshold;
sending the temperature value of the first high-temperature area to the host;
after a set time, extracting a second thermal imaging map of the first target image acquired at the third shooting point;
segmenting the second thermal imaging map into target areas based on the distribution of different color areas on the map, and reading the temperature value of each target area;
obtaining a second high-temperature area in the second thermal imaging map whose temperature value is higher than the preset temperature threshold;
and comparing the first high-temperature area with the second high-temperature area, determining the overlapping portion of the two areas, and sending the temperature value of the overlapping portion to the host as third alarm information.
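For illustration only (not part of the claims), the two-pass thermal check of claim 1 can be sketched in Python. The grid representation, the 80 °C threshold, and the variable names are all assumptions introduced here; the claim itself only requires color-area segmentation, thresholding, and an overlap comparison between two readings taken a set time apart.

```python
import numpy as np

TEMP_THRESHOLD = 80.0  # assumed preset temperature threshold (degrees Celsius)

def high_temp_mask(thermal_map: np.ndarray) -> np.ndarray:
    """Boolean mask of cells hotter than the preset threshold."""
    return thermal_map > TEMP_THRESHOLD

def overlap_alarm(first_map, second_map):
    """Two-pass check: if a high-temperature region from the first map still
    overlaps a high-temperature region in the second map taken a set time
    later, return the temperatures of the persisting cells (the payload of
    the third alarm information); otherwise return None."""
    overlap = high_temp_mask(first_map) & high_temp_mask(second_map)
    if not overlap.any():
        return None
    return second_map[overlap]

# 3x3 thermal maps modelled as temperature grids; only cell (0, 1) stays hot.
first = np.array([[20.0, 90.0, 20.0],
                  [20.0, 20.0, 20.0],
                  [20.0, 20.0, 85.0]])
second = np.array([[20.0, 95.0, 20.0],
                   [20.0, 20.0, 20.0],
                   [20.0, 20.0, 20.0]])
alarm = overlap_alarm(first, second)  # only the persisting hot cell alarms
```

A transient hot spot (present in only one of the two maps, like cell (2, 2) above) produces no overlap and therefore no third alarm, which is the point of the second reading.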
2. The inspection method according to claim 1, wherein the determining of the target image to be detected includes:
receiving a video sent by a terminal;
parsing the video to obtain each frame of image in the video;
and determining a target image to be detected from all images contained in the video.
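As a minimal sketch of the frame-selection step in claim 2 (the video is modelled as a plain list of decoded frames, and the one-frame-per-second stride is an illustrative assumption, not something the claim specifies):

```python
def parse_video(video_frames, sample_every=25):
    """Decompose a received video (modelled here as a list of decoded
    frames) and select candidate target images at a fixed stride, e.g.
    one frame per second for a 25 fps stream. The claim only requires
    that target images be determined from the parsed frames."""
    return video_frames[::sample_every]

frames = list(range(100))       # stand-in for 100 decoded frames
targets = parse_video(frames)   # candidate target images to be detected
```

A real implementation would decode the terminal's video stream with a library such as OpenCV or FFmpeg; the sampling logic stays the same.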
3. The inspection method according to claim 1, further comprising:
when the first feature is different from the original feature, extracting an enlarged image of the first target image acquired at the first shooting point;
performing feature extraction on the enlarged image, and determining an enlargement feature of the enlarged image;
comparing the enlargement feature with an original enlargement feature of the first target image;
and if the enlargement feature is different from the original enlargement feature, sending first alarm information to the host.
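For illustration only, the two-stage confirmation of claim 3 can be sketched as follows. The patent does not fix a particular feature; the normalised intensity histogram used here is a stand-in, and all names and tolerances are assumptions.

```python
import numpy as np

def extract_feature(image, bins=8):
    """Illustrative feature: a normalised intensity histogram (a stand-in
    for whatever feature extractor the implementation actually uses)."""
    hist, _ = np.histogram(image, bins=bins, range=(0, 256))
    return hist / hist.sum()

def needs_alarm(image, reference, zoomed, zoomed_reference, tol=1e-3):
    """Two-stage check from claim 3: compare the wide shot with its
    reference first; only when they differ, confirm the difference on the
    enlarged image before raising the alarm."""
    if np.allclose(extract_feature(image), extract_feature(reference), atol=tol):
        return False  # wide shot matches the reference: no alarm
    return not np.allclose(extract_feature(zoomed),
                           extract_feature(zoomed_reference), atol=tol)

rng = np.random.default_rng(0)
reference = rng.integers(0, 256, size=(32, 32))
unchanged = needs_alarm(reference, reference, reference, reference)
```

The zoom-in pass acts as a cheap false-positive filter: an alarm is only sent when both the wide shot and the enlarged view disagree with their stored references.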
4. The inspection method according to claim 1, further comprising:
acquiring user information, and judging whether the authority of the user information matches the authority of a target image to be detected;
stopping sending alarm information when the authority of the user information matches the authority of the target image to be detected;
and when the authority of the user information does not match the authority of the target image to be detected, continuing to send first alarm information to the host.
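A minimal sketch of claim 4's permission gate, assuming (purely for illustration) that authorities are modelled as a set of strings attached to each user and a single string attached to each target image:

```python
def alarm_should_continue(user_permissions, target_permission):
    """Claim 4's gate: stop alarming when the recognised user's authority
    matches the authority attached to the target image; otherwise keep
    sending the first alarm information to the host. The set-of-strings
    permission model is an illustrative assumption."""
    return target_permission not in user_permissions

engineer = {"cabinet_room_a", "cabinet_room_b"}
visitor = {"lobby"}
keep_for_engineer = alarm_should_continue(engineer, "cabinet_room_a")
keep_for_visitor = alarm_should_continue(visitor, "cabinet_room_a")
```

The effect is that an authorised engineer opening a cabinet door suppresses the alarm raised by the feature comparison, while an unauthorised person keeps it active.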
5. The inspection method according to claim 1, further comprising:
extracting a second target image acquired at a second shooting point;
performing feature extraction on the second target image, and determining a second feature of the second target image;
and sending the second feature to the host.
6. The inspection method according to claim 1, further comprising:
after receiving a panoramic shooting instruction, carrying out video shooting along a preset track to obtain a panoramic video;
decomposing the panoramic video into a plurality of image frames;
dividing the preset track into a plurality of shooting sections, and extracting an image frame corresponding to each shooting section;
selecting at least one picture in each shooting section as a spliced picture;
and splicing the spliced pictures to generate at least one panoramic image.
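For illustration only, the sectioning and frame-selection steps of claim 6 can be sketched as follows. Choosing the middle frame of each shooting section is an assumption made here; the claim only requires selecting at least one picture per section for stitching.

```python
def pick_stitch_frames(frames, num_sections):
    """Claim 6's sectioning step: split the frames captured along the
    preset track into equal shooting sections and keep one representative
    frame per section (here the middle one, an illustrative choice) as
    input to the stitcher."""
    section_len = len(frames) // num_sections
    picks = []
    for i in range(num_sections):
        section = frames[i * section_len:(i + 1) * section_len]
        picks.append(section[len(section) // 2])  # middle frame of section
    return picks

frames = list(range(12))                       # 12 decomposed image frames
stitch_inputs = pick_stitch_frames(frames, 4)  # one frame per section
```

The actual panorama generation would then feed `stitch_inputs` into an image stitcher (e.g. OpenCV's `cv2.Stitcher`); sectioning first keeps the stitch set small and evenly spaced along the track.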
7. An inspection system based on a rail-mounted inspection robot, wherein the system comprises:
a confirmation module (101) for determining a target image to be detected;
a first extraction unit (111) for extracting a first target image acquired at a first photographing point;
a second extraction unit (112) for performing feature extraction on the first target image and determining a first feature of the first target image;
a first comparison unit (121) for comparing the first feature with an original feature of the first target image;
a first sending unit (131) for sending first alarm information to a host if the first feature is different from the original feature;
the system further comprises:
a fourth extraction unit, configured to extract a first thermal imaging map of the first target image acquired at a third shooting point;
a segmentation reading unit for segmenting the first thermal imaging map into target areas based on the distribution of different color areas on the map, and reading the temperature value of each target area;
a comparison and selection unit for comparing the temperature value of each target area with a preset temperature threshold, and selecting a first high-temperature area whose temperature value is higher than the preset temperature threshold;
a third sending unit for sending the temperature value of the first high-temperature area to the host;
a sixth extraction unit for extracting, after a set time, a second thermal imaging map of the first target image acquired at the third shooting point;
a third comparison unit for segmenting the second thermal imaging map into target areas based on the distribution of different color areas on the map, reading the temperature value of each target area, obtaining a second high-temperature area in the second thermal imaging map whose temperature value is higher than the preset temperature threshold, comparing the first high-temperature area with the second high-temperature area, determining the overlapping portion of the two areas, and sending the temperature value of the overlapping portion to the host as third alarm information.
8. The inspection system according to claim 7, further including:
the receiving unit is used for receiving the video sent by the terminal;
the analysis unit is used for analyzing the video to obtain each frame of image in the video;
and the determining unit is used for determining a target image to be detected from all images contained in the video.
9. The inspection system according to claim 7, further including:
a third extraction unit configured to extract an enlarged image of the first target image acquired at the first photographing point;
a fourth extraction unit for performing feature extraction on the enlarged image and determining an enlargement feature of the enlarged image;
a second comparison unit for comparing the enlargement feature with an original enlargement feature of the first target image;
and a second sending unit for sending first alarm information to the host if the enlargement feature is different from the original enlargement feature.
10. The inspection system according to claim 7, further including:
an acquisition unit configured to acquire user information;
the judging unit is used for judging whether the authority of the user information is matched with the authority of the target image to be detected;
a third transmitting unit for stopping sending alarm information when the authority of the user information matches the authority of the target image to be detected, and for continuing to send first alarm information to the host when the authority of the user information does not match the authority of the target image to be detected.
11. The inspection system according to claim 7, further including:
the acquisition unit is used for carrying out video shooting along a preset track after receiving a panoramic shooting instruction to obtain a panoramic video;
a decomposition unit for decomposing the panoramic video into a plurality of image frames;
the fifth extraction unit is used for dividing the preset track into a plurality of shooting sections and extracting the image frame corresponding to each shooting section;
the selecting unit is used for selecting at least one picture in each shooting section as a spliced picture;
and the generating module is used for splicing the spliced pictures to generate at least one panoramic image.
CN201910463602.3A 2019-05-30 2019-05-30 Inspection method and system Active CN110516522B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910463602.3A CN110516522B (en) 2019-05-30 2019-05-30 Inspection method and system

Publications (2)

Publication Number Publication Date
CN110516522A CN110516522A (en) 2019-11-29
CN110516522B true CN110516522B (en) 2020-11-27

Family

ID=68622807

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910463602.3A Active CN110516522B (en) 2019-05-30 2019-05-30 Inspection method and system

Country Status (1)

Country Link
CN (1) CN110516522B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111126460B (en) * 2019-12-10 2024-04-02 福建省高速公路集团有限公司 Automatic pavement disease inspection method, medium, equipment and device based on artificial intelligence
CN111432334B (en) * 2020-03-31 2022-05-27 中通服创立信息科技有限责任公司 Following monitoring method and system for rail-mounted inspection robot
CN111563021B (en) * 2020-04-30 2023-09-22 中国工商银行股份有限公司 Positioning method, positioning device, electronic equipment and medium
CN113591511B (en) * 2020-04-30 2024-08-20 顺丰科技有限公司 Concrete state identification method and device, electronic equipment and storage medium
CN111604888B (en) * 2020-05-29 2021-09-14 珠海格力电器股份有限公司 Inspection robot control method, inspection system, storage medium and electronic device
CN112367478A (en) * 2020-09-09 2021-02-12 北京潞电电气设备有限公司 Tunnel robot panoramic image processing method and device
CN112562112A (en) * 2020-11-16 2021-03-26 深圳市长龙铁路电子工程有限公司 Automatic inspection method and system
CN113128473A (en) * 2021-05-17 2021-07-16 哈尔滨商业大学 Underground comprehensive pipe gallery-oriented inspection system, method, equipment and storage medium
CN113569650A (en) * 2021-06-29 2021-10-29 上海红檀智能科技有限公司 Unmanned aerial vehicle autonomous inspection positioning method based on electric power tower label identification

Citations (8)

Publication number Priority date Publication date Assignee Title
CN104299284A (en) * 2014-08-20 2015-01-21 深圳供电局有限公司 Indoor substation track inspection system, host and method for automatically positioning and capturing equipment image
CN106056693A (en) * 2016-06-07 2016-10-26 国网福建省电力有限公司 Online inspection method and system of thermal imaging picture data on basis of mobile terminal
CN107172361A (en) * 2017-07-12 2017-09-15 维沃移动通信有限公司 The method and mobile terminal of a kind of pan-shot
CN108073854A (en) * 2016-11-14 2018-05-25 中移(苏州)软件技术有限公司 A kind of detection method and device of scene inspection
CN108174111A (en) * 2018-04-19 2018-06-15 常州市盈能电气有限公司 Crusing robot target image grasping means
CN108416968A (en) * 2018-01-31 2018-08-17 国家能源投资集团有限责任公司 Fire alarm method and apparatus
CN109740507A (en) * 2018-12-29 2019-05-10 国网浙江省电力有限公司 A kind of method of transformer equipment abnormality sensing
CN109788255A (en) * 2019-01-30 2019-05-21 广州轨道交通建设监理有限公司 A kind of building site fire source monitoring system and building site fire source monitoring method

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
CN106327461B (en) * 2015-06-16 2019-11-15 浙江大华技术股份有限公司 A kind of image processing method and device for monitoring
CN105852819B (en) * 2016-03-23 2018-04-20 深圳云天励飞技术有限公司 Temperature monitoring equipment

Also Published As

Publication number Publication date
CN110516522A (en) 2019-11-29

Similar Documents

Publication Publication Date Title
CN110516522B (en) Inspection method and system
CN110225299B (en) Video monitoring method and device, computer equipment and storage medium
CN108319926A (en) A kind of the safety cap wearing detecting system and detection method of building-site
CN110633612B (en) Monitoring method and system for inspection robot
IL256885A (en) Apparatus and methods for facial recognition and video analytics to identify individuals in contextual video streams
KR102195706B1 (en) Method and Apparatus for Detecting Intruder
US20130216107A1 (en) Method of surveillance by face recognition
EP2840557B1 (en) Image processing system, server device, image pickup device and image evaluation method
CN105516653A (en) Security and protection monitoring system
US20080246622A1 (en) Analyzing smoke or other emissions with pattern recognition
CN109982037A (en) Intelligent patrol detection device
CN111416960B (en) Video monitoring system based on cloud service
KR102366544B1 (en) Vision-based Rainfall Information System and Methodology Using Deep Learning
CN112906441B (en) Image recognition system and method for exploration and maintenance in communication industry
KR20200059643A (en) ATM security system based on image analyses and the method thereof
CN112132048A (en) Community patrol analysis method and system based on computer vision
KR101499456B1 (en) Facility anomaly detection System and Method using image
CN115841730A (en) Video monitoring system and abnormal event detection method
CN209433517U (en) It is a kind of based on more flame images and the fire identification warning device for combining criterion
KR102233679B1 (en) Apparatus and method for detecting invader and fire for energy storage system
CN114067396A (en) Vision learning-based digital management system and method for live-in project field test
CN110895663A (en) Two-wheel vehicle identification method and device, electronic equipment and monitoring system
CN108073854A (en) A kind of detection method and device of scene inspection
CN116580514A (en) Intelligent security method, system, medium and electronic equipment based on Internet of things
CN116311034A (en) Robot inspection system based on contrast detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant