
CN111382726B - Engineering operation detection method and related device - Google Patents

Engineering operation detection method and related device

Info

Publication number
CN111382726B
CN111382726B CN202010251987.XA
Authority
CN
China
Prior art keywords
image
pixel
target
pixel value
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010251987.XA
Other languages
Chinese (zh)
Other versions
CN111382726A (en)
Inventor
孙玉玮
马青山
陈宇
朱建宝
施烨
俞鑫春
邓伟超
叶超
郭伟
任馨怡
王枫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nantong Huayuan Technology Development Co ltd
Nantong Power Supply Co Of State Grid Jiangsu Electric Power Co
Zhejiang Dahua Technology Co Ltd
Original Assignee
Nantong Huayuan Technology Development Co ltd
Nantong Power Supply Co Of State Grid Jiangsu Electric Power Co
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nantong Huayuan Technology Development Co ltd, Nantong Power Supply Co Of State Grid Jiangsu Electric Power Co, Zhejiang Dahua Technology Co Ltd filed Critical Nantong Huayuan Technology Development Co ltd
Priority to CN202010251987.XA priority Critical patent/CN111382726B/en
Publication of CN111382726A publication Critical patent/CN111382726A/en
Application granted granted Critical
Publication of CN111382726B publication Critical patent/CN111382726B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/20Scenes; Scene-specific elements in augmented reality scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses an engineering operation detection method and a related device. The engineering operation detection method includes: acquiring an original image shot by an image pickup device at an operation site, wherein the original image contains a preset detection area; performing target detection on the original image to obtain a target area corresponding to a target object in the original image, wherein the target object is used for realizing warning; and determining whether the operation site accords with the operation specification based on the position relation between the preset detection area and the target area. By means of this scheme, the quality of engineering operation detection can be improved.

Description

Engineering operation detection method and related device
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method and an apparatus for detecting engineering operations.
Background
In engineering operation, warning objects such as warning slogans are usually arranged near dangerous areas of an operation site in order to alert operators, prevent them from entering dangerous areas by mistake, and ensure operation safety. Taking electric power maintenance as an example, dangerous situations such as an operator mistakenly entering a charged interval or mistakenly opening and closing a switch can occur because of factors such as an incomplete five-prevention function of the switching device or lapses in operator attention. Therefore, in switching or maintenance operations, red cloth curtains are commonly used to alert operators, so that they can clearly distinguish a power-outage maintenance screen cabinet from an adjacent live, non-maintenance screen cabinet. Warning objects such as red cloth curtains and warning slogans thus play an important role in ensuring operation safety.
At present, whether a warning object is normally in use is still determined by manual inspection, which is inefficient. In addition, owing to limits on inspectors' manpower and attention, manual inspection inevitably overlooks problems, which reduces the quality of engineering operation. In view of this, how to improve the quality of engineering operation detection is a problem to be solved.
Disclosure of Invention
The application mainly solves the technical problem of providing an engineering operation detection method and a related device, which can improve the engineering operation detection quality.
In order to solve the above problems, a first aspect of the present application provides an engineering operation detection method, including: acquiring an original image shot by an image pickup device at an operation site, wherein the original image contains a preset detection area; performing target detection on the original image to obtain a target area corresponding to a target object in the original image, wherein the target object is used for realizing warning; and determining whether the operation site accords with the operation specification based on the position relation between the preset detection area and the target area.
In order to solve the above problems, a second aspect of the present application provides an engineering operation detection device, comprising a memory and a processor coupled to each other, the processor being configured to execute program instructions stored in the memory so as to implement the engineering operation detection method of the first aspect.
In order to solve the above problems, a third aspect of the present application provides a storage device storing program instructions executable by a processor, the program instructions being used to implement the engineering operation detection method of the first aspect.
According to the above scheme, an original image shot at the operation site by the image pickup device is acquired, the original image containing a preset detection area; target detection is performed on the original image to obtain the target area corresponding to the target object used for realizing warning; and whether the operation site accords with the operation specification is determined based on the position relation between the preset detection area and the target area. Whether the operation site accords with the operation specification can thus be detected from the image shot by the image pickup device, without manually checking the operation site, so the detection efficiency can be improved, the probability of omission can be reduced, and the quality of engineering operation detection can be improved.
Drawings
FIG. 1 is a flow chart of an embodiment of a method for detecting engineering operations according to the present application;
FIG. 2 is a flowchart of an embodiment of step S12 in FIG. 1;
FIG. 3 is a schematic diagram of an embodiment of a process for obtaining the first integral map of FIG. 2;
FIG. 4 is a schematic diagram of one embodiment of the morphological treatment of FIG. 2;
FIG. 5 is a flowchart of step S123 in FIG. 2;
FIG. 6 is a schematic diagram of a frame of an embodiment of the engineering operation detection device of the present application;
FIG. 7 is a schematic diagram of a frame of another embodiment of the engineering operation detection device of the present application;
FIG. 8 is a schematic diagram of a frame of an embodiment of a storage device of the present application.
Detailed Description
The following describes embodiments of the present application in detail with reference to the drawings.
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, interfaces, techniques, etc., in order to provide a thorough understanding of the present application.
The terms "system" and "network" are often used interchangeably herein. The term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, "A and/or B" may represent: A exists alone, A and B exist together, or B exists alone. In addition, the character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship. Further, "a plurality" herein means two or more.
Referring to fig. 1, fig. 1 is a flow chart illustrating an embodiment of the engineering operation detection method according to the present application. Specifically, the method may include the steps of:
step S11: an original image shot by the camera device on a working site is acquired, wherein the original image comprises a preset detection area.
In the present embodiment, the image pickup device may include a monitoring camera such as a dome camera or a card camera, and the present embodiment is not particularly limited herein. In practical application, the operation site may have a plurality of areas to be detected. In this case, in order for the image pickup device to automatically photograph these areas, pose parameters set by a user may be obtained so as to configure a plurality of preset positions for the image pickup device; the image pickup device then photographs the operation site from these preset positions, thereby covering the whole site. Taking electric power overhaul as an example, the image pickup device can be arranged at the northeast corner of the electric power machine room. If the southeast, southwest and northwest regions of the machine room need to be photographed during detection, three preset positions can be configured so that the image pickup device shoots towards each of these regions in turn during engineering operation detection, thereby covering the operation site.
In this embodiment, the preset detection area is an area, preset by a user, that needs to be detected during engineering operation. Specifically, the preset detection area may be set as a rectangular area, and when setting it the user may specify the coordinates of the rectangular area in the image. In a specific implementation scenario, when the image pickup device is provided with a plurality of preset positions, a plurality of preset detection areas may also be provided, corresponding to the preset positions one by one. Still taking electric power overhaul as an example, when the image pickup device is provided with the three preset positions described above, so that it shoots towards the southeast, southwest and northwest regions of the electric power machine room respectively during engineering operation detection, a charged area may be set for each of the three preset positions. Whether the electric power overhaul site accords with the operation specification can then be determined based on the position relation between the target object obtained through subsequent detection and the preset charged areas.
In addition, similar arrangements may be made in other engineering applications such as construction engineering, communication engineering, etc., and this embodiment is not illustrated here.
Step S12: and performing target detection on the original image to obtain a target area corresponding to the target object in the original image.
In this embodiment, the target object is used to realize warning, and taking electric power overhaul as an example, the target object may be a red cloth curtain. In other engineering operations, the target object may also be other warnings, for example, in building engineering, the target object may also be a warning line, etc.; in the communication engineering, the target object may also be a warning sign, etc., which is not exemplified here.
In this embodiment, target detection may specifically be performed in a neural-network-based manner. For example, a preset neural network is trained with training images annotated with the target object, and the trained network is then used to detect the original image to obtain the target area corresponding to the target object. Alternatively, target detection may be performed with a traditional image-analysis method: the color feature of each pixel point in the original image is analyzed to obtain the pixel points similar to the color feature of the target object; these pixel points are then subjected to noise reduction and other processing; and finally the minimum circumscribed rectangle of the remaining pixel points is taken as the target area corresponding to the target object. The present embodiment is not particularly limited herein.
Step S13: and determining whether the operation field accords with the operation specification or not based on the position relation between the preset detection area and the target area.
In one implementation scenario, when a coincidence region exists between the preset detection area and the target area, the target object is considered to be arranged in the preset detection area, and the operation site is determined to accord with the operation specification; otherwise, the operation site is determined not to accord with the operation specification.
In another implementation scenario, when the preset detection area completely contains the target area, the target object is considered to be set in the preset detection area, and the operation site is determined to accord with the operation specification; otherwise, the operation site is determined not to accord with the operation specification.
In yet another implementation scenario, when the Intersection over Union (IoU) between the preset detection area and the target area is greater than a preset IoU threshold (e.g., 0.5), the target object is considered to be set in the preset detection area, and the operation site is determined to accord with the operation specification; otherwise, the operation site is determined not to accord with the operation specification.
In a specific implementation scenario, when it is determined that the operation site does not accord with the operation specification, preset alarm information may also be output. When it is determined that the operation site accords with the operation specification, preset safety information may be output, or no information may be output. The preset alarm information and the preset safety information may include, but are not limited to: text, sound, images, and the like; the present embodiment does not enumerate them here. In addition, when target detection on the original image detects no target area corresponding to a target object, the preset alarm information may be output directly to warn that the operation site does not accord with the operation specification.
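The positional checks in the scenarios above reduce to simple rectangle arithmetic. The following is a minimal Python sketch of the IoU criterion; the (x1, y1, x2, y2) box layout and the function name are assumptions for illustration, not part of the application:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Overlap rectangle: max of the top-left corners, min of the bottom-right corners.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

The first two scenarios correspond to `iou(...) > 0` and to the intersection equalling the target-area rectangle; the third compares `iou(...)` against the preset threshold such as 0.5.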
According to the above scheme, an original image shot at the operation site by the image pickup device is acquired, the original image containing a preset detection area; target detection is performed on the original image to obtain the target area corresponding to the target object used for realizing warning; and whether the operation site accords with the operation specification is determined based on the position relation between the preset detection area and the target area. Whether the operation site accords with the operation specification can thus be detected from the image shot by the image pickup device, without manually checking the operation site, so the detection efficiency can be improved, the probability of omission can be reduced, and the quality of engineering operation detection can be improved.
Referring to fig. 2, fig. 2 is a flowchart illustrating an embodiment of step S12 in fig. 1. The method specifically comprises the following steps:
step S121: and carrying out threshold segmentation on the original image by utilizing a preset threshold related to the color characteristics of the target object to obtain an image to be detected.
In a specific implementation scenario, taking electric power overhaul as an example, if the target object is a red cloth curtain, a preset threshold related to the color feature of the red cloth curtain may be adopted to perform threshold segmentation on the original image to obtain the image to be detected. Other application scenarios can be treated by analogy, and this embodiment does not enumerate them here.
In addition, since the RGB (Red, Green, Blue) color space differs greatly from human eye perception, in order to make the definition of color distance conform to human visual characteristics, the present embodiment may also map the color space of the original image to the HSV (Hue, Saturation, Value) color space before performing the threshold segmentation. In a specific implementation scenario, when the color space of the original image is the RGB color space, it may be mapped to the HSV color space by the standard conversion (with R, G and B first normalized to [0, 1]):

V = max(R, G, B)
S = (V - min(R, G, B)) / V, or S = 0 when V = 0
H = 60 × (G - B) / (V - min(R, G, B)), when V = R
H = 120 + 60 × (B - R) / (V - min(R, G, B)), when V = G
H = 240 + 60 × (R - G) / (V - min(R, G, B)), when V = B

with H increased by 360 when H < 0. In the above formulas, (R, G, B) represents the R-channel, G-channel and B-channel pixel values of a pixel point in the original image, and (H, S, V) represents the resulting hue, saturation and value.
In addition, (H, S, V) may further be mapped to the interval 0-255, which is not described in detail herein. In a specific implementation scenario, the preset threshold may include a preset threshold interval for each of the H, S and V channels; in other implementation scenarios, other values may be adopted, and this embodiment is not particularly limited herein. Using the preset threshold, it can be judged in turn whether each pixel point of the original image mapped to the HSV color space satisfies the following conditions: whether the H-channel pixel value lies in the preset H-channel threshold interval, whether the S-channel pixel value lies in the preset S-channel threshold interval, and whether the V-channel pixel value lies in the preset V-channel threshold interval. If all conditions hold, the pixel value of the corresponding pixel point in the image to be detected is set to a first pixel value (for example, 255); otherwise, it is set to a second pixel value (for example, 0). Consequently, in the image to be detected, pixel points matching the color feature of the target object have the first pixel value, and pixel points unrelated to the color feature of the target object have the second pixel value.
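The per-pixel interval test just described can be sketched as follows; nested lists stand in for an image so the sketch stays self-contained, and the interval parameters and function name are assumptions:

```python
def threshold_hsv(hsv, h_iv, s_iv, v_iv, first=255, second=0):
    # hsv: nested list of rows of (h, s, v) tuples, each channel in 0-255;
    # h_iv, s_iv, v_iv: inclusive (low, high) threshold intervals per channel.
    out = []
    for row in hsv:
        out_row = []
        for h, s, v in row:
            inside = (h_iv[0] <= h <= h_iv[1] and
                      s_iv[0] <= s <= s_iv[1] and
                      v_iv[0] <= v <= v_iv[1])
            # First pixel value marks target-colored pixels, second marks the rest.
            out_row.append(first if inside else second)
        out.append(out_row)
    return out
```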
Step S122: and counting the sum of pixel values of all pixel points in the upper left area of each pixel point in the image to be detected, and taking the sum as the pixel value of the corresponding pixel point in the first integral image corresponding to the image to be detected.
Referring to fig. 3 in combination, fig. 3 is a schematic diagram illustrating an embodiment of the process for obtaining the first integral image in fig. 2. As shown in fig. 3, P1 is the image to be detected, P1(i, j) is the pixel value of the pixel point located in the i-th row and j-th column of the image to be detected, Q1 is the first integral image, and Q1(i, j) is the pixel value of the pixel point located in the i-th row and j-th column of the first integral image. The first integral image can be obtained from the image to be detected by the recurrence:

Q1(i, j) = P1(i, j) + Q1(i - 1, j) + Q1(i, j - 1) - Q1(i - 1, j - 1)

where any Q1 term whose index falls outside the image is taken as 0.
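The statistic described above is a two-dimensional prefix sum. A minimal sketch over a nested-list image follows; the function name is an assumption:

```python
def integral_image(img):
    # q[i][j] = sum of img over the inclusive upper-left region of (i, j),
    # built with the recurrence q(i,j) = img(i,j) + q(i-1,j) + q(i,j-1) - q(i-1,j-1).
    h, w = len(img), len(img[0])
    q = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            q[i][j] = (img[i][j]
                       + (q[i - 1][j] if i > 0 else 0)
                       + (q[i][j - 1] if j > 0 else 0)
                       - (q[i - 1][j - 1] if i > 0 and j > 0 else 0))
    return q
```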
step S123: and carrying out morphological processing on the image to be detected by using the preset structural elements and the first integral image to obtain a target area corresponding to the target object in the original image.
The preset structural element in this embodiment is a two-dimensional matrix whose elements all take the value 1. Its size may be 15×15, 13×13, 11×11 or 9×9, which is not specifically limited herein.
In one implementation scenario, morphological processing may be performed on the image to be processed directly using the preset structural element. For example, when erosion morphological processing is performed with the preset structural element, the atom (i.e. center point) of the preset structural element traverses each target pixel point whose pixel value is the first pixel value in the image to be processed. When the pixel values of all other pixel points within the structural-element-sized region are the same as that of the target pixel point, i.e. all equal to the first pixel value, the pixel value of the target pixel point is retained; otherwise, it is set to the second pixel value. Referring to fig. 4 in combination, fig. 4 is a schematic diagram of an embodiment of erosion morphological processing using a preset structural element. As shown in fig. 4, the size of the preset structural element is k×k and the size of the image to be detected is w×h; the hatched pixel points indicate target pixel points whose pixel value is the first pixel value, and burrs in the image to be processed can be eliminated by the erosion morphological processing. However, the algorithm complexity of performing erosion morphological processing directly with the preset structural element is O(w×h×k×k), which is relatively high, with a correspondingly high processing load and resource cost.
In another implementation scenario, in order to reduce the complexity of the algorithm, reduce the processing load and reduce the resource overhead, the first integral image may be used to perform morphological processing on the image to be processed. Specifically, referring to fig. 5 in combination, fig. 5 is a flowchart of an embodiment of step S123 in fig. 2, which may specifically include:
step S1231: and carrying out corrosion morphological processing on the image to be detected by using the preset structural elements and the first integral image to obtain a first processed image.
Specifically, a first target pixel point whose pixel value is the first pixel value is determined in the image to be detected, and the corresponding second target pixel point in the first integral image is determined. Based on the second target pixel point and the size of the preset structural element, third target pixel points in the first integral image are determined, and their pixel values are used to calculate the sum of the pixel values of all pixel points within the structural-element-sized range around the first target pixel point. If the quotient of this sum and the size is the first pixel value, the first target pixel point is considered not to be a burr/noise point and its pixel value is kept as the first pixel value; otherwise, the first target pixel point is considered a burr/noise point and its pixel value is reset to the second pixel value.
In a specific implementation scenario, please continue to refer to fig. 3. As shown in fig. 3, the pixel point (i, j) in the image to be detected P1 is the first target pixel point, and correspondingly the pixel point (i, j) in the first integral image Q1 is the second target pixel point; the thick black frame is the k×k region of the preset structural element. The sum of the pixel values of the pixel points of P1 within this region can be calculated using the first integral image Q1. Specifically, with r = ⌊k/2⌋, four third target pixel points are determined in Q1 from the size of the preset structural element: the pixel point Q1(i + r, j + r) at the lower-right corner of the region, whose pixel value represents the sum of the pixel values of all pixel points in its upper-left area, together with Q1(i - r - 1, j + r), Q1(i + r, j - r - 1) and Q1(i - r - 1, j - r - 1). The sum of the pixel values of all pixel points within the k×k range around the first target pixel point (i, j) can then be expressed as:

S = Q1(i + r, j + r) - Q1(i - r - 1, j + r) - Q1(i + r, j - r - 1) + Q1(i - r - 1, j - r - 1)

After calculating the sum S of the pixel values of all pixel points within the k×k range around the first target pixel point (i, j), the quotient of S and the size, i.e. S / (k×k), can further be calculated. If the quotient is the first pixel value, the pixel values of all pixel points within the k×k range around the first target pixel point (i, j) in the image to be detected P1 are all the first pixel value, so the first target pixel point (i, j) can be regarded as not being a burr/noise point, and its pixel value in P1 is kept unchanged; otherwise, the first target pixel point (i, j) is a burr/noise point, and its pixel value in P1 is reset to the second pixel value.
By performing erosion morphological processing with the integral image, the algorithm complexity can be reduced to O(w×h), a significant reduction.
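The erosion test above (a box sum from four integral-image lookups, then comparison against the all-first-valued case) can be sketched as a runnable Python routine. The nested-list representation, the 255/0 pixel values and the odd window size k are assumptions for illustration:

```python
def integral_image(img):
    # q[i][j] = sum of img over the inclusive upper-left region of (i, j).
    h, w = len(img), len(img[0])
    q = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            q[i][j] = (img[i][j]
                       + (q[i - 1][j] if i > 0 else 0)
                       + (q[i][j - 1] if j > 0 else 0)
                       - (q[i - 1][j - 1] if i > 0 and j > 0 else 0))
    return q

def erode(img, k, first=255, second=0):
    # Keep a first-valued pixel only when the k x k window around it sums to
    # first * k * k, i.e. every pixel in the window is first-valued.
    h, w, r = len(img), len(img[0]), k // 2
    q = integral_image(img)

    def at(a, b):  # integral-image lookup; 0 above/left of the border
        return q[a][b] if a >= 0 and b >= 0 else 0

    out = [[second] * w for _ in range(h)]
    for i in range(r, h - r):
        for j in range(r, w - r):
            s = (at(i + r, j + r) - at(i - r - 1, j + r)
                 - at(i + r, j - r - 1) + at(i - r - 1, j - r - 1))
            if img[i][j] == first and s == first * k * k:
                out[i][j] = first
    return out
```

Each pixel now costs four lookups instead of k×k reads, matching the stated O(w×h) complexity.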
Step S1232: and counting the sum of pixel values of all pixel points in the upper left area of each pixel point in the first processing image, and taking the sum as the pixel value of the corresponding pixel point in the second integral image corresponding to the first processing image.
In one implementation scenario, after the image to be processed is subjected to the erosion morphological processing, the dilation morphological processing may be further performed, and similarly to the above steps, a sum of pixel values of all pixel points in an upper left area of each pixel point in the first processed image may be counted as a pixel value of a corresponding pixel point in the second integral image corresponding to the first processed image. Reference may be made to the above related steps in this embodiment, and details are not repeated here.
Step S1233: and performing expansion morphological processing on the first processing image by using the preset structural elements and the second integral image to obtain a second processing image.
Specifically, each pixel point in the first processed image may be taken in turn as a fourth target pixel point; the corresponding fifth target pixel point in the second integral image is determined; sixth target pixel points in the second integral image are determined based on the fifth target pixel point and the size of the preset structural element; and the pixel values of the sixth target pixel points are used to calculate the sum of the pixel values of all pixel points within the structural-element-sized range around the fourth target pixel point. If the quotient of this sum and the size is the second pixel value, the pixel value of the fourth target pixel point is set to the second pixel value; otherwise, it is set to the first pixel value. For the specific manner of determining the sixth target pixel points and calculating the sum, reference may be made to step S1231 of this embodiment, which is not repeated here.
Unlike the erosion morphological processing, the purpose of the dilation morphological processing is to expand edges and fill holes. Therefore, as long as there is a pixel point whose pixel value is the first pixel value within the structural-element-sized range, i.e. as long as the quotient of the sum of pixel values and the size is not the second pixel value, the pixel value of the fourth target pixel point is set to the first pixel value; otherwise, i.e. when the quotient equals the second pixel value, it is set to the second pixel value. In this way, edges can be markedly expanded and internal holes filled.
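Dilation can be sketched in the same style as the erosion sketch earlier: here the window sum only needs to be non-zero, and the integral-image lookup is clamped at the image border so windows extending past the edge are handled. Names and the nested-list representation are illustrative assumptions:

```python
def integral_image(img):
    # q[i][j] = sum of img over the inclusive upper-left region of (i, j).
    h, w = len(img), len(img[0])
    q = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            q[i][j] = (img[i][j]
                       + (q[i - 1][j] if i > 0 else 0)
                       + (q[i][j - 1] if j > 0 else 0)
                       - (q[i - 1][j - 1] if i > 0 and j > 0 else 0))
    return q

def dilate(img, k, first=255, second=0):
    # A pixel becomes first-valued as soon as the k x k window around it
    # contains any first-valued pixel, i.e. the window sum is non-zero.
    h, w, r = len(img), len(img[0]), k // 2
    q = integral_image(img)

    def at(a, b):
        # Lookup clamped to the border: indices past the last row/column
        # still return the full prefix sum; negative indices contribute 0.
        if a < 0 or b < 0:
            return 0
        return q[min(a, h - 1)][min(b, w - 1)]

    out = [[second] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            s = (at(i + r, j + r) - at(i - r - 1, j + r)
                 - at(i + r, j - r - 1) + at(i - r - 1, j - r - 1))
            out[i][j] = first if s != 0 else second
    return out
```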
Step S1234: and determining the pixel value of each pixel point in the second processed image as a target connected domain of the first pixel value.
After the image to be processed has undergone the erosion and dilation morphological processing, the second processed image is obtained. At this point, if the pixel value of a pixel point in the second processed image and the pixel values of its neighborhood pixel points are all the first pixel value, the pixel point and its neighborhood pixel points are divided into the same connected domain; traversing all pixel points in the second processed image yields at least one target connected domain.
Step S1235: and acquiring the minimum circumscribed rectangle of the target connected domain as a target area.
And taking the minimum circumscribed rectangle of the target connected domain as a target area.
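Steps S1234 and S1235 can be sketched with a breadth-first flood fill; the embodiment does not specify the neighborhood, so 4-connectivity is assumed here, and the function name is illustrative:

```python
from collections import deque
import numpy as np

def connected_regions(binary):
    """Group first-pixel-value (== 1) pixel points into connected
    domains and return each domain's minimum circumscribed
    axis-aligned rectangle as (row0, col0, row1, col1)."""
    h, w = binary.shape
    seen = np.zeros((h, w), dtype=bool)
    rects = []
    for sr in range(h):
        for sc in range(w):
            if binary[sr, sc] != 1 or seen[sr, sc]:
                continue
            # Flood-fill one connected domain, tracking its extents.
            q = deque([(sr, sc)])
            seen[sr, sc] = True
            r0 = r1 = sr
            c0 = c1 = sc
            while q:
                r, c = q.popleft()
                r0, r1 = min(r0, r), max(r1, r)
                c0, c1 = min(c0, c), max(c1, c)
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if (0 <= nr < h and 0 <= nc < w
                            and binary[nr, nc] == 1 and not seen[nr, nc]):
                        seen[nr, nc] = True
                        q.append((nr, nc))
            rects.append((r0, c0, r1, c1))
    return rects
```

For axis-aligned rectangles the minimum circumscribed rectangle of a domain is simply the min/max of its row and column indices, as tracked above.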
Different from the foregoing embodiment, threshold segmentation is performed on the original image by using a preset threshold related to the color feature of the target object to obtain the image to be detected, and the sum of the pixel values of all the pixel points in the upper left area of each pixel point in the image to be detected is counted as the pixel value of the corresponding pixel point in the first integral image corresponding to the image to be detected, so that morphological processing can be performed on the image to be detected by using the preset structural element and the first integral image to obtain the target area corresponding to the target object in the original image. This can reduce algorithm complexity and shorten processing time, thereby accelerating target detection, which facilitates real-time detection of whether the engineering operation is standard and timely alarming when it is not.
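The "upper left area" statistic described above is a standard integral image, which reduces any window sum to four lookups. A minimal NumPy sketch (function names are illustrative, not from the patent):

```python
import numpy as np

def integral_image(binary):
    """Integral image: each entry holds the sum of all pixel values
    in the rectangle from (0, 0) to that pixel inclusive, i.e. the
    'upper left area' sum described in the text."""
    return binary.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, r, c, k):
    """Sum of a (2k+1)x(2k+1) window centred at (r, c), clipped to
    the image, computed from four integral-image taps."""
    h, w = ii.shape
    r0, c0 = max(r - k, 0), max(c - k, 0)
    r1, c1 = min(r + k, h - 1), min(c + k, w - 1)
    total = ii[r1, c1]
    if r0 > 0:
        total -= ii[r0 - 1, c1]
    if c0 > 0:
        total -= ii[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total
```

Because the integral image is built once per frame, every subsequent window-sum query is O(1) regardless of the structural-element size, which is what makes the morphological passes cheap.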
Referring to fig. 6, fig. 6 is a schematic frame diagram of an engineering operation detection device 60 according to an embodiment of the application. The engineering operation detection device 60 comprises an acquisition module 61, a detection module 62 and a determination module 63. The acquisition module 61 is used for acquiring an original image shot by an imaging device on an operation site, wherein the original image comprises a preset detection area; the detection module 62 is used for performing target detection on the original image to acquire a target area corresponding to a target object in the original image, wherein the target object is used for realizing warning; and the determination module 63 is used for determining whether the operation site accords with an operation specification based on the positional relationship between the preset detection area and the target area.
According to the above scheme, the original image shot by the image pickup device on the operation site is obtained, and the original image contains the preset detection area. Target detection is performed on the original image to obtain the target area corresponding to the target object used for realizing warning, and whether the operation site accords with the operation specification is determined based on the positional relationship between the preset detection area and the target area. In this way, whether the operation site accords with the operation specification can be detected based on the original image shot by the image pickup device on the operation site, without manually checking the operation site, so that the detection efficiency can be improved, the probability of omission can be reduced, and the engineering operation detection quality can be improved.
In some embodiments, the detection module 62 includes a threshold segmentation sub-module configured to perform threshold segmentation on the original image by using a preset threshold related to the color feature of the target object to obtain the image to be detected; an integral statistics sub-module configured to count the sum of the pixel values of all the pixel points in the upper left area of each pixel point in the image to be detected as the pixel value of the corresponding pixel point in the first integral image corresponding to the image to be detected; and a morphological processing sub-module configured to perform morphological processing on the image to be detected by using the preset structural element and the first integral image to obtain the target area corresponding to the target object in the original image.
Different from the foregoing embodiment, threshold segmentation is performed on the original image by using a preset threshold related to the color feature of the target object to obtain the image to be detected, and the sum of the pixel values of all the pixel points in the upper left area of each pixel point in the image to be detected is counted as the pixel value of the corresponding pixel point in the first integral image corresponding to the image to be detected, so that morphological processing can be performed on the image to be detected by using the preset structural element and the first integral image to obtain the target area corresponding to the target object in the original image. This can reduce algorithm complexity and shorten processing time, thereby accelerating target detection, which facilitates real-time detection of whether the engineering operation is standard and timely alarming when it is not.
In some embodiments, the pixel value of a pixel point related to the color feature of the target object in the image to be detected is a first pixel value, and the pixel value of a pixel point unrelated to the color feature of the target object is a second pixel value. The morphological processing submodule includes a corrosion processing unit for performing corrosion morphological processing on the image to be detected by using the preset structural element and the first integral image to obtain a first processed image; the integral statistics submodule is further used for counting the sum of the pixel values of all the pixel points in the upper left area of each pixel point in the first processed image as the pixel value of the corresponding pixel point in the second integral image corresponding to the first processed image; the morphological processing submodule further includes an expansion processing unit for performing expansion morphological processing on the first processed image by using the preset structural element and the second integral image to obtain a second processed image, a connected domain determining unit for determining a target connected domain in which the pixel value of each pixel point in the second processed image is the first pixel value, and a target area determining unit for obtaining the minimum circumscribed rectangle of the target connected domain as the target area.
In some embodiments, the corrosion processing unit is specifically configured to determine a first target pixel point whose pixel value is the first pixel value in the image to be detected, determine a second target pixel point corresponding to the first target pixel point in the first integral image, determine a third target pixel point in the first integral image based on the second target pixel point and the size of the preset structural element, and calculate the sum of the pixel values of all the pixel points within the size range around the first target pixel point by using the pixel values of the third target pixel point; if the quotient of the sum of pixel values and the size is the first pixel value, the pixel value of the first target pixel point is kept as the first pixel value, and if the quotient of the sum of pixel values and the size is not the first pixel value, the pixel value of the first target pixel point is reset to the second pixel value.
In some embodiments, the expansion processing unit is specifically configured to sequentially take each pixel point in the first processed image as a fourth target pixel point, determine a fifth target pixel point corresponding to the fourth target pixel point in the second integral image, determine a sixth target pixel point in the second integral image based on the fifth target pixel point and the size of the preset structural element, and calculate the sum of the pixel values of all the pixel points within the size range around the fourth target pixel point by using the pixel values of the sixth target pixel point; if the quotient of the sum of pixel values and the size is the second pixel value, the pixel value of the fourth target pixel point is set as the second pixel value, and if the quotient of the sum of pixel values and the size is not the second pixel value, the pixel value of the fourth target pixel point is set as the first pixel value.
In some embodiments, the detection module 62 further includes a color space mapping sub-module for mapping the color space of the original image to the HSV color space, and the preset threshold includes a preset H-channel threshold interval, a preset S-channel threshold interval and a preset V-channel threshold interval. The threshold segmentation submodule comprises a judging unit for sequentially judging whether each pixel point in the original image mapped to the HSV color space meets the following conditions: the H-channel pixel value is within the preset H-channel threshold interval, the S-channel pixel value is within the preset S-channel threshold interval, and the V-channel pixel value is within the preset V-channel threshold interval. The threshold segmentation submodule further comprises a pixel value setting unit, which is used for setting the pixel value of the corresponding pixel point in the image to be detected as the first pixel value when the judging unit judges that the conditions are met, and for setting the pixel value of the corresponding pixel point in the image to be detected as the second pixel value when the judging unit judges that the conditions are not met.
In some embodiments, the determining module 63 includes a coincidence region determining submodule configured to determine whether a coincidence region exists between the preset detection region and the target region, and the determining module 63 further includes a job determining submodule configured to determine that the job site meets the job specification when the coincidence region determining submodule determines that the coincidence region exists, and the job determining submodule is further configured to determine that the job site does not meet the job specification when the coincidence region determining submodule determines that the coincidence region does not exist.
In some embodiments, the determining module 63 further includes an information output sub-module for outputting preset alarm information when the job determination sub-module determines that the job site does not meet the job specification, and for outputting preset safety information when the job determination sub-module determines that the job site meets the job specification.
In some embodiments, the engineering operation detection device 60 further includes a setting module configured to obtain pose parameters set by a user on the image pickup device, where the pose parameters are used to control the image pickup device to shoot the operation site.
Referring to fig. 7, fig. 7 is a schematic frame diagram of an engineering operation detection device 70 according to an embodiment of the application. The engineering operation detection device 70 may include a memory 71 and a processor 72 coupled to each other; the processor 72 is configured to execute program instructions stored in the memory 71 to implement the steps of any of the above-described engineering operation detection method embodiments.
In particular, the processor 72 is configured to control itself and the memory 71 to implement the steps of any of the engineering operation detection method embodiments described above. The processor 72 may also be referred to as a CPU (Central Processing Unit). The processor 72 may be an integrated circuit chip having signal processing capabilities. The processor 72 may also be a general purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. In addition, the processor 72 may be implemented jointly by a plurality of integrated circuit chips.
In this embodiment, the processor 72 is configured to obtain an original image captured by the image capturing device on the job site, where the original image includes a preset detection area, the processor 72 is further configured to perform target detection on the original image, obtain a target area corresponding to a target object in the original image, where the target object is configured to implement warning, and determine whether the job site meets a job specification based on a positional relationship between the preset detection area and the target area.
According to the above scheme, the original image shot by the image pickup device on the operation site is obtained, and the original image contains the preset detection area. Target detection is performed on the original image to obtain the target area corresponding to the target object used for realizing warning, and whether the operation site accords with the operation specification is determined based on the positional relationship between the preset detection area and the target area. In this way, whether the operation site accords with the operation specification can be detected based on the original image shot by the image pickup device on the operation site, without manually checking the operation site, so that the detection efficiency can be improved, the probability of omission can be reduced, and the engineering operation detection quality can be improved.
In some embodiments, the processor 72 is further configured to perform threshold segmentation on the original image by using a preset threshold related to a color feature of the target object to obtain an image to be detected, and the processor 72 is further configured to count a sum of pixel values of all pixels in an upper left area of each pixel in the image to be detected, as a pixel value of a corresponding pixel in a first integral image corresponding to the image to be detected, and perform morphological processing on the image to be detected by using a preset structural element and the first integral image to obtain a target area corresponding to the target object in the original image.
Different from the foregoing embodiment, threshold segmentation is performed on the original image by using a preset threshold related to the color feature of the target object to obtain the image to be detected, and the sum of the pixel values of all the pixel points in the upper left area of each pixel point in the image to be detected is counted as the pixel value of the corresponding pixel point in the first integral image corresponding to the image to be detected, so that morphological processing can be performed on the image to be detected by using the preset structural element and the first integral image to obtain the target area corresponding to the target object in the original image. This can reduce algorithm complexity and shorten processing time, thereby accelerating target detection, which facilitates real-time detection of whether the engineering operation is standard and timely alarming when it is not.
In some embodiments, the pixel value of a pixel point related to the color feature of the target object in the image to be detected is a first pixel value, and the pixel value of a pixel point unrelated to the color feature of the target object is a second pixel value. The processor 72 is further configured to perform corrosion morphological processing on the image to be detected by using the preset structural element and the first integral image to obtain a first processed image, to count the sum of the pixel values of all the pixel points in the upper left area of each pixel point in the first processed image as the pixel value of the corresponding pixel point in the second integral image corresponding to the first processed image, to perform expansion morphological processing on the first processed image by using the preset structural element and the second integral image to obtain a second processed image, to determine a target connected domain in which the pixel value of each pixel point in the second processed image is the first pixel value, and to obtain the minimum circumscribed rectangle of the target connected domain as the target area.
In some embodiments, the processor 72 is further configured to determine a first target pixel point having the first pixel value in the image to be detected, determine a second target pixel point corresponding to the first target pixel point in the first integral image, determine a third target pixel point in the first integral image based on the second target pixel point and the size of the preset structural element, and calculate the sum of the pixel values of all the pixel points within the size range around the first target pixel point by using the pixel values of the third target pixel point; the processor 72 is further configured to keep the pixel value of the first target pixel point as the first pixel value when the quotient of the sum of pixel values and the size is the first pixel value, and to reset the pixel value of the first target pixel point to the second pixel value when the quotient of the sum of pixel values and the size is not the first pixel value.
In some embodiments, the processor 72 is further configured to sequentially take each pixel point in the first processed image as a fourth target pixel point, determine a fifth target pixel point corresponding to the fourth target pixel point in the second integral image, determine a sixth target pixel point in the second integral image based on the fifth target pixel point and the size of the preset structural element, and calculate the sum of the pixel values of all the pixel points within the size range around the fourth target pixel point by using the pixel values of the sixth target pixel point; the processor 72 sets the pixel value of the fourth target pixel point to the second pixel value when the quotient of the sum of pixel values and the size is the second pixel value, and sets it to the first pixel value when the quotient is not the second pixel value.
In some embodiments, the processor 72 is further configured to map the color space of the original image to an HSV color space, and the preset threshold includes a preset H-channel threshold interval, a preset S-channel threshold interval and a preset V-channel threshold interval. The processor 72 is further configured to sequentially determine whether each pixel point in the original image mapped to the HSV color space satisfies the following conditions: the H-channel pixel value is within the preset H-channel threshold interval, the S-channel pixel value is within the preset S-channel threshold interval, and the V-channel pixel value is within the preset V-channel threshold interval. The processor 72 is further configured to set the pixel value of the corresponding pixel point in the image to be detected to the first pixel value when the conditions are satisfied, and to the second pixel value when the conditions are not satisfied.
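A minimal sketch of this per-channel interval test, assuming OpenCV-style HSV value ranges and illustrative threshold intervals (a rough red hue band) that are not taken from the patent:

```python
import numpy as np

# Illustrative preset threshold intervals (not from the patent):
# a rough red hue band in OpenCV-style HSV ranges (H in [0, 180]).
H_RANGE = (0, 10)
S_RANGE = (100, 255)
V_RANGE = (50, 255)

def threshold_hsv(hsv, first_value=255, second_value=0):
    """Per-pixel test: all three channel values must fall inside
    their preset intervals for the pixel point to be set to the
    first pixel value; otherwise it gets the second pixel value."""
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    mask = ((H_RANGE[0] <= h) & (h <= H_RANGE[1]) &
            (S_RANGE[0] <= s) & (s <= S_RANGE[1]) &
            (V_RANGE[0] <= v) & (v <= V_RANGE[1]))
    return np.where(mask, first_value, second_value).astype(np.uint8)
```

The vectorized mask is equivalent to the sequential per-pixel judgment described in the text, just applied to all pixel points at once.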
In some embodiments, the processor 72 is further configured to determine whether a coincidence region exists between the preset detection region and the target region, the processor 72 is further configured to determine that the job site meets the job specification when the determination is yes, and the processor 72 is further configured to determine that the job site does not meet the job specification when the determination is no.
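The coincidence-region check can be sketched as a standard axis-aligned rectangle intersection test, assuming both regions are given as (x0, y0, x1, y1) tuples; the function names are illustrative:

```python
def rects_overlap(a, b):
    """True if axis-aligned rectangles a and b, each given as
    (x0, y0, x1, y1), share a coincidence region."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1

def check_job_site(detection_region, target_rects):
    # The site is taken to meet the specification when any target
    # (warning-object) rectangle coincides with the preset
    # detection region.
    return any(rects_overlap(detection_region, r) for r in target_rects)
```

The test succeeds exactly when the two intervals overlap on both axes, which is the positional relationship the determining step relies on.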
In some embodiments, the engineering job detection device 70 further includes a human-machine interaction circuit for outputting a preset alert message when the processor 72 determines that the job site does not meet the job specification. The man-machine interaction circuit is also configured to output preset safety information when the processor 72 determines that the job site meets the job specification.
In some embodiments, the processor 72 is further configured to control the man-machine interaction circuit to obtain pose parameters set by a user for the camera, where the pose parameters are used to control the camera to capture a photograph of a work site.
In some embodiments, the engineering work detection device 70 further includes an imaging device for capturing an original image of the work site.
Referring to fig. 8, fig. 8 is a schematic diagram illustrating a frame of a storage device 80 according to an embodiment of the application. The storage device 80 stores program instructions 81 that can be executed by the processor, and the program instructions 81 are used to implement the steps in the above-described embodiment of the engineering job detection method.
According to the scheme, whether the operation site accords with the operation standard or not can be detected based on the original image shot by the image pickup device on the operation site, and the operation site does not need to be checked manually, so that the detection efficiency can be improved, the probability of occurrence of omission is reduced, and the detection quality of engineering operation can be improved.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical, or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be embodied in essence or a part contributing to the prior art or all or part of the technical solution in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor (processor) to execute all or part of the steps of the methods of the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.

Claims (11)

1. An engineering operation detection method is characterized by comprising the following steps:
acquiring an original image shot by an imaging device on a working site, wherein the original image comprises a preset detection area;
performing threshold segmentation on the original image by using a preset threshold related to the color characteristics of the target object to obtain an image to be detected; the target object is used for realizing warning;
counting the sum of pixel values of all pixel points in the upper left area of each pixel point in the image to be detected, and taking the sum as the pixel value of the corresponding pixel point in the first integral image corresponding to the image to be detected;
carrying out morphological processing on the image to be detected by using a preset structural element and the first integral image to obtain a target area corresponding to the target object in the original image;
and determining whether the operation site meets an operation specification or not based on the position relation between the preset detection area and the target area.
2. The engineering operation detection method according to claim 1, wherein the pixel value of a pixel point related to the color feature of the target object in the image to be detected is a first pixel value, and the pixel value of a pixel point unrelated to the color feature of the target object is a second pixel value;
The step of performing morphological processing on the image to be detected by using a preset structural element and the first integral image to obtain a target area corresponding to the target object in the original image, wherein the step of obtaining the target area comprises the following steps:
carrying out corrosion morphological processing on the image to be detected by utilizing the preset structural elements and the first integral image to obtain a first processed image;
counting the sum of pixel values of all pixel points in the upper left area of each pixel point in the first processing image, and taking the sum as the pixel value of the corresponding pixel point in the second integral image corresponding to the first processing image;
performing expansion morphological processing on the first processed image by using the preset structural elements and the second integral image to obtain a second processed image;
determining a target connected domain in which the pixel value of each pixel point in the second processed image is the first pixel value;
and acquiring the minimum circumscribed rectangle of the target connected domain as the target area.
3. The engineering operation detection method according to claim 2, wherein performing the corrosion morphological processing on the image to be detected by using the preset structural element and the first integral image to obtain the first processed image comprises:
Determining a first target pixel point with a pixel value of the first pixel value in the image to be detected, and determining a second target pixel point corresponding to the first target pixel point in the first integral image;
determining a third target pixel point in the first integral image based on the second target pixel point and the size of the preset structural element;
calculating the sum of pixel values of all pixel points in the size range around the first target pixel point by using the pixel value of the third target pixel point;
if the quotient of the sum of the pixel values and the size is the first pixel value, the pixel value of the first target pixel point is kept to be the first pixel value;
and if the quotient of the sum of the pixel values and the size is not the first pixel value, resetting the pixel value of the first target pixel point to be the second pixel value.
4. The engineering operation detection method according to claim 2, wherein performing the expansion morphological processing on the first processed image by using the preset structural element and the second integral image to obtain the second processed image comprises:
Sequentially taking each pixel point in the first processed image as a fourth target pixel point, and determining a fifth target pixel point corresponding to the fourth target pixel point in the second integral image;
determining a sixth target pixel point in the second integral image based on the fifth target pixel point and the size of the preset structural element;
calculating the sum of pixel values of all pixel points in the size range around the fourth target pixel point by using the pixel value of the sixth target pixel point;
setting the pixel value of the fourth target pixel point to be the second pixel value if the quotient of the sum of the pixel values and the size is the second pixel value;
and if the quotient of the sum of the pixel values and the size is not the second pixel value, setting the pixel value of the fourth target pixel point as the first pixel value.
5. The engineering operation detection method according to claim 1, wherein before performing threshold segmentation on the original image by using the preset threshold related to the color feature of the target object to obtain the image to be detected, the method further comprises:
Mapping a color space of the original image to an HSV color space;
the preset threshold value comprises the following steps: presetting an H channel threshold interval, an S channel threshold interval and a V channel threshold interval; the threshold segmentation is performed on the original image by using a preset threshold related to the color characteristic of the target object, and the obtaining of the image to be detected includes:
sequentially judging whether each pixel point in the original image mapped to the HSV color space meets the following conditions: the H-channel pixel value is in the preset H-channel threshold interval, the S-channel pixel value is in the preset S-channel threshold interval, and the V-channel pixel value is in the preset V-channel threshold interval;
if yes, setting the pixel value of the corresponding pixel point in the image to be detected as a first pixel value;
if not, setting the pixel value of the corresponding pixel point in the image to be detected as a second pixel value.
6. The engineering operation detection method according to claim 1, wherein the determining whether the operation site meets an operation specification based on the positional relationship between the preset detection area and the target area comprises:
judging whether a coincidence region exists between the preset detection region and the target region;
If yes, determining that the operation site meets the operation specification;
if not, determining that the operation site does not accord with the operation specification.
7. The engineering operation detection method according to claim 6, wherein after the determining that the operation site does not meet the operation specification, the method further comprises:
outputting preset alarm information;
and/or, after the determining that the operation site meets the operation specification, the method further comprises:
outputting preset safety information or not outputting any information.
8. The engineering operation detection method according to claim 1, wherein before the original image of the operation site shot by the image pickup device is obtained, the method comprises:
and acquiring pose parameters set by a user on the image pickup device, wherein the pose parameters are used for controlling the image pickup device to shoot the operation site.
9. An engineering operation detection device, comprising a memory and a processor coupled to each other, the processor being configured to execute program instructions stored in the memory to implement the engineering operation detection method of any one of claims 1 to 8.
10. The engineering operation detection device according to claim 9, further comprising an imaging device for capturing an original image of the operation site.
11. A storage device storing program instructions executable by a processor, the program instructions being used for implementing the engineering operation detection method according to any one of claims 1 to 8.
CN202010251987.XA 2020-04-01 2020-04-01 Engineering operation detection method and related device Active CN111382726B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010251987.XA CN111382726B (en) 2020-04-01 2020-04-01 Engineering operation detection method and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010251987.XA CN111382726B (en) 2020-04-01 2020-04-01 Engineering operation detection method and related device

Publications (2)

Publication Number Publication Date
CN111382726A CN111382726A (en) 2020-07-07
CN111382726B true CN111382726B (en) 2023-09-01

Family

ID=71217739

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010251987.XA Active CN111382726B (en) 2020-04-01 2020-04-01 Engineering operation detection method and related device

Country Status (1)

Country Link
CN (1) CN111382726B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113239896A (en) * 2021-06-15 2021-08-10 创优数字科技(广东)有限公司 Image detection method and device

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016225720A (en) * 2015-05-28 2016-12-28 住友電気工業株式会社 Monitoring device, monitoring method, and monitoring program
CN106446926A (en) * 2016-07-12 2017-02-22 重庆大学 Transformer station worker helmet wear detection method based on video analysis
CN107895138A (en) * 2017-10-13 2018-04-10 西安艾润物联网技术服务有限责任公司 Spatial obstacle object detecting method, device and computer-readable recording medium
CN108345819A (en) * 2017-01-23 2018-07-31 杭州海康威视数字技术股份有限公司 A kind of method and apparatus sending warning message
CN109034124A (en) * 2018-08-30 2018-12-18 成都考拉悠然科技有限公司 A kind of intelligent control method and system
CN109035629A (en) * 2018-07-09 2018-12-18 深圳码隆科技有限公司 A kind of shopping settlement method and device based on open automatic vending machine
CN110472623A (en) * 2019-06-29 2019-11-19 华为技术有限公司 Image detecting method, equipment and system
WO2020006907A1 (en) * 2018-07-05 2020-01-09 平安科技(深圳)有限公司 Photographing control method, terminal, and computer readable storage medium
CN110807393A (en) * 2019-10-25 2020-02-18 深圳市商汤科技有限公司 Early warning method and device based on video analysis, electronic equipment and storage medium
CN110889403A (en) * 2019-11-05 2020-03-17 浙江大华技术股份有限公司 Text detection method and related device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4350725B2 (en) * 2005-08-05 2009-10-21 キヤノン株式会社 Image processing method, image processing apparatus, and program for causing computer to execute image processing method
CN108319953B (en) * 2017-07-27 2019-07-16 腾讯科技(深圳)有限公司 Occlusion detection method and device, electronic equipment and the storage medium of target object

Also Published As

Publication number Publication date
CN111382726A (en) 2020-07-07

Similar Documents

Publication Publication Date Title
US8199165B2 (en) Methods and systems for object segmentation in digital images
EP3783374A1 (en) Method for detecting corona discharge employing image processing
WO2015070723A1 (en) Eye image processing method and apparatus
JP2023503749A (en) CAMERA LENS STATE DETECTION METHOD, DEVICE, DEVICE, AND STORAGE MEDIUM
CN111444555B (en) Temperature measurement information display method and device and terminal equipment
CN111368785B (en) Camera shielding judgment method, device, equipment and storage medium
CN112651953B (en) Picture similarity calculation method and device, computer equipment and storage medium
CN108389215B (en) Edge detection method and device, computer storage medium and terminal
CN111382726B (en) Engineering operation detection method and related device
JP3725784B2 (en) Apparatus and method for detecting moving object in color frame image sequence
CN109726613B (en) Method and device for detection
JP7429756B2 (en) Image processing method, device, electronic device, storage medium and computer program
CN112883762A (en) Living body detection method, device, system and storage medium
CN114677319A (en) Stem cell distribution determination method and device, electronic equipment and storage medium
CN111695374B (en) Segmentation method, system, medium and device for zebra stripes in monitoring view angles
CN116883661B (en) Fire operation detection method based on target identification and image processing
CN108805883B (en) Image segmentation method, image segmentation device and electronic equipment
CN111928944B (en) Laser ray detection method, device and system
US20240331283A1 (en) Method and image-processing device for detecting a reflection of an identified object in an image frame
EP4312187A1 (en) Image analysis method and apparatus, computer device, and readable storage medium
CN118038039A (en) Camera shielding detection method and device, electronic equipment and storage medium
Hakeem et al. A Real-time System for Fire Detection and Localization in Outdoors
CN116977630A (en) Image detection method, device, electronic equipment and computer readable storage medium
CN118521641A (en) Electronic fence monitoring method and device based on depth information
CN116883984A (en) License plate detection method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant