
CN112115927B - Intelligent machine room equipment identification method and system based on deep learning

Info

Publication number
CN112115927B
Authority
CN
China
Prior art keywords
equipment
machine room
neural network
network model
processing data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011296942.0A
Other languages
Chinese (zh)
Other versions
CN112115927A (en)
Inventor
陈飞
胡坤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai mengpa Intelligent Technology Co.,Ltd.
Original Assignee
Beijing Mengpa Xinchuang Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Mengpa Xinchuang Technology Co ltd filed Critical Beijing Mengpa Xinchuang Technology Co ltd
Priority to CN202011296942.0A
Publication of CN112115927A
Application granted
Publication of CN112115927B
Legal status: Active
Anticipated expiration

Classifications

    • G06V20/10 Terrestrial scenes (Scenes; Scene-specific elements)
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06N3/045 Combinations of networks (Neural networks; Architecture)
    • G06N3/08 Learning methods (Neural networks)
    • G06T7/10 Segmentation; Edge detection (Image analysis)
    • G06T7/74 Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/20132 Image cropping

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a deep-learning-based method and system for intelligently identifying machine room equipment. The identification method comprises the following steps: collecting working state information of equipment in a specified machine room; processing the working state information with a first neural network model to obtain first processing data; processing the first processing data with a second neural network model to obtain second processing data; comparing the second processing data with prestored reference data; and executing a corresponding operation based on the comparison result. The invention avoids manual analysis during detection, thereby preventing misjudgment or missed judgment, greatly improving detection accuracy, and reducing the monitoring cost of the machine room.

Description

Intelligent machine room equipment identification method and system based on deep learning
Technical Field
The invention belongs to the field of equipment identification, and particularly relates to an intelligent machine room equipment identification method and system based on deep learning.
Background
Machine room inspection is an important practice for guaranteeing the safe operation of a machine room. Traditional manual inspection involves a heavy workload, is strongly influenced by subjective factors such as inspector experience, and produces manual records that are difficult to store. For these reasons, more and more intelligent inspection robots are being deployed in machine rooms; they effectively improve the efficiency of automatic equipment identification and fault alarming, reduce the labor intensity of operation and maintenance personnel, and provide powerful technical support for unattended machine room operation.
Although existing inspection robots generate massive numbers of visible-light images while inspecting a machine room, providing a basis for monitoring and analyzing the appearance characteristics of key machine room equipment, current equipment fault detection still relies mainly on manual analysis. The workload is high, serious misjudgment or missed judgment easily occurs during fault detection, and faults are difficult to find accurately and in time. In addition, the machine room contains many kinds of equipment, and it is difficult to determine from the robot's pictures the types and positions of the devices and the correspondence between the devices and their indicator lights.
Disclosure of Invention
In view of the defects in the prior art, the present invention aims to provide an intelligent machine room equipment identification method and system based on deep learning, which avoids manual analysis during detection and thereby prevents misjudgment or missed judgment.
In a first aspect, the invention provides a machine room equipment intelligent identification method based on deep learning, which comprises the following steps:
Step S101, collecting working state information of equipment in a specified machine room, wherein the collecting comprises:
acquiring coordinates of the specified machine room equipment based on a user instruction;
acquiring working state information of the specified machine room equipment based on the coordinates;
Step S103, processing the working state information with a first neural network model to obtain first processing data, wherein the processing comprises:
using a pre-trained equipment recognition neural network model to recognize the picture and locate the equipment;
cropping the equipment part out of the picture based on the positioning information and saving it as the first processing data;
Step S105, processing the first processing data with a second neural network model to obtain second processing data, wherein the processing comprises:
using a pre-trained indicator light recognition neural network model to detect the equipment indicator lights based on the first processing data;
Step S107, comparing the second processing data with prestored reference data;
Step S109, executing a corresponding operation based on the comparison result.
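The steps above can be sketched end to end as follows. This is a minimal illustration only: the detector objects, their detect()/detect_lights() interfaces, and the fault_table dictionary are assumptions introduced here, not details given in the patent.

```python
import cv2

def identify_room_equipment(picture_path, device_model, light_model, fault_table):
    picture = cv2.imread(picture_path)              # S101: collected picture

    # S103: the first neural network model locates the equipment; the
    # equipment region (assumed returned as a pixel-space top-left box)
    # is cropped out as the "first processing data".
    name, (x, y, w, h) = device_model.detect(picture)
    first_data = picture[y:y + h, x:x + w]

    # S105: the second neural network model detects indicator lights on
    # the crop, giving the "second processing data".
    lights = light_model.detect_lights(first_data)

    # S107: compare the detected light count with prestored reference data.
    expected = fault_table.get(name)

    # S109: execute the corresponding operation.
    if expected is not None and len(lights) == expected:
        print(f"ALARM: {name} shows its fault indicator pattern")
```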
In one embodiment, the working state information refers to a picture containing the indicator lights of the specified machine room equipment.
In one embodiment, step S107 specifically includes:
comparing the number of indicator lights corresponding to faults of the specified machine room equipment, pre-stored in the database, with the number of indicator lights on the detected key equipment picture.
In one embodiment, pre-storing in the database the number of indicator lights corresponding to faults of the specified machine room equipment includes:
establishing an equipment fault table in the database and writing, for each type of equipment, the equipment name and the number of indicator lights shown when that equipment fails, since this number differs between equipment types.
In one embodiment, the device-recognition neural network model training process includes:
constructing a first training sample set, wherein the first training sample set comprises a plurality of pictures marked with specified machine room equipment, and the pictures are acquired by a robot in the inspection process in advance;
and training on the plurality of pictures in the first training sample set with the YOLO algorithm to obtain the parameters of the specified machine room equipment recognition neural network model.
In one embodiment, the training process of the indicator light recognition neural network model specifically includes:
constructing a second training sample set, wherein the second training sample set comprises a plurality of second pictures of the marking equipment indicator lamps, and the second pictures are collected by the robot in advance in the inspection process;
and training a plurality of second pictures in the second training sample set by adopting a yolo algorithm to obtain each parameter of the equipment indicator light recognition neural network model.
In one embodiment, step S109 specifically includes:
if the detected equipment indicator light picture and a prestored indicator light picture display the same fault, sending out a corresponding alarm.
In a second aspect, the present invention further provides a machine room equipment intelligent identification system based on deep learning, which includes:
the acquisition module is used for acquiring the working state information of the equipment in the specified machine room;
the first data processing module is used for processing the data of the working state information by using a first neural network model to obtain first processing data;
the second data processing module is used for processing the first processing data by using a second neural network model to obtain second processing data;
the comparison module is used for comparing the second processing data with prestored reference data;
and the processing module is used for executing corresponding operation based on the comparison result.
In one embodiment, the acquisition module comprises:
the coordinate acquisition module is used for acquiring the coordinates of the specified machine room equipment based on a user instruction;
and the information acquisition module is used for acquiring the working state information of the specified machine room equipment based on the coordinates.
In one embodiment, the first data processing module comprises:
the recognition positioning module is used for recognizing the picture and positioning the equipment by adopting a pre-trained equipment recognition neural network model;
and the cutting module cuts the equipment part in the picture based on the positioning information and saves the equipment part as first processing data.
Compared with the prior art, the invention detects the machine room equipment through the first neural network model and the second neural network model to judge whether the equipment has a fault, which improves detection efficiency and avoids misjudgment or missed judgment.
Drawings
The above and other objects, features and advantages of exemplary embodiments of the present disclosure will become readily apparent from the following detailed description read in conjunction with the accompanying drawings. Several embodiments of the present disclosure are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, in which like reference numerals refer to similar or corresponding parts:
FIG. 1 is a flowchart illustrating a machine room equipment intelligent identification method based on deep learning according to an embodiment of the present invention;
fig. 2 is a flowchart illustrating a method for intelligently identifying equipment in a computer room according to an embodiment of the present invention;
FIG. 3 is a flow diagram illustrating training pictures according to an embodiment of the present invention;
fig. 4 is a flowchart illustrating a method for intelligently identifying equipment in a machine room according to an embodiment of the present invention;
FIG. 5 is a schematic diagram illustrating a deep learning-based intelligent recognition system for equipment in a machine room according to an embodiment of the invention;
fig. 6 is a schematic diagram illustrating a machine room equipment smart identification system according to an embodiment of the present invention; and
fig. 7 is a schematic diagram showing an electronic apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention will be described in further detail with reference to the accompanying drawings, and it is apparent that the described embodiments are only a part of the embodiments of the present invention, not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the examples of the present invention and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise, and "a plurality" typically includes at least two.
It should be understood that although the terms first, second, third, etc. may be used in embodiments of the present invention to describe various elements, these elements should not be limited by those terms. The terms are used only to distinguish one element from another. For example, a first element could also be termed a second element, and similarly a second element could be termed a first element, without departing from the scope of embodiments of the present invention.
Alternative embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Example one
Referring to fig. 1, an embodiment of the present invention provides a machine room equipment intelligent identification method based on deep learning, including:
step S101, collecting working state information of specified machine room equipment, wherein the working state information preferably refers to a picture containing an indicator light of the specified machine room equipment;
step S103, carrying out data processing on the working state information by using a first neural network model to obtain first processing data;
step S105, processing the first processing data by using a second neural network model to obtain second processing data;
step S107, comparing the second processing data with prestored reference data;
and step S109, executing corresponding operation based on the comparison result.
Example two
On the basis of the first embodiment, the implementation may further include the following:
referring to fig. 2, when the working state information of the equipment in the designated machine room is collected, the working state information may be collected manually or by a robot. In an application scenario, taking a robot to perform acquisition as an example, the step S101 may specifically include:
acquiring coordinates of the specified machine room equipment based on a user instruction;
and acquiring the working state information (including the picture of the indicator light of the specified machine room equipment) of the specified machine room equipment based on the coordinates.
To ensure that only the specified machine room equipment is considered during detection, thereby reducing detection errors and improving detection efficiency, the collected picture can be preprocessed. In an application scenario, the preprocessing in step S103 specifically includes:
adopting a pre-trained equipment recognition neural network model to recognize the picture and position the equipment;
and cutting off the equipment part in the picture based on the positioning information, and storing the equipment part as first processing data.
Cropping the device part out of the picture based on the positioning information may specifically include:
dividing the image containing the key machine room equipment into an S×S grid, and acquiring each of the S×S grid cells;
predicting several bounding boxes in each grid cell, together with a confidence score reflecting whether the box contains a target and how accurate the box is (zero if no target is present). Each bounding box contains 5 values: x, y, w, h and confidence. The (x, y) coordinates represent the center of the bounding box relative to the grid cell; w and h represent the width and height, predicted relative to the whole image; confidence represents the intersection over union (IoU) between the predicted box and the actual bounding box;
outputting the type of the key machine room equipment and its coordinate position in the picture according to the confidence scores and the per-grid class probabilities, then cropping and saving the equipment part of the picture with the OpenCV computer vision library according to the obtained (x, y) coordinates and w, h.
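As an illustration of the cropping step (the patent names OpenCV but gives no code), the sketch below assumes YOLO's usual convention that (x, y) is the box center and all four values are normalized to the image size; that convention is an assumption, since the patent does not fix it exactly:

```python
import cv2

def crop_device(image_path, x, y, w, h, out_path="device_crop.jpg"):
    """Crop a detected device region from the picture.

    Assumes YOLO-style normalized outputs: (x, y) is the box center and
    (w, h) its size, all relative to the full image.
    """
    img = cv2.imread(image_path)
    ih, iw = img.shape[:2]
    # Convert normalized center/size to pixel corner coordinates.
    x1 = max(int((x - w / 2) * iw), 0)
    y1 = max(int((y - h / 2) * ih), 0)
    x2 = int((x + w / 2) * iw)
    y2 = int((y + h / 2) * ih)
    crop = img[y1:y2, x1:x2]
    cv2.imwrite(out_path, crop)   # saved as the "first processing data"
    return crop
```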
To better understand the scheme of preprocessing in step S103, the training process of the device recognition neural network model trained in advance may include:
constructing a first training sample set, wherein the first training sample set comprises a plurality of pictures marked with specified machine room equipment, and the pictures are acquired by a robot in the inspection process in advance;
and training the plurality of pictures in the first training sample set by adopting a yolo algorithm to obtain each parameter of the neural network model recognized by the appointed machine room equipment.
The pictures marked with the specified machine room equipment are selected according to requirements: if several kinds of key equipment in the machine room all need detection and alarming, each kind of equipment must be photographed and collected. The pre-collection mainly serves the construction of the training sample set; the robot can be manually driven to the position of the equipment and its onboard camera operated to take pictures from multiple angles, obtaining pictures containing the specified machine room equipment.
EXAMPLE III
On the basis of the second embodiment, the present embodiment may further include the following:
After preprocessing the collected working state information (picture) of the specified machine room equipment, the preprocessing result is identified and detected. Specifically, step S105 may include:
using the pre-trained indicator light recognition neural network model to detect the equipment indicator lights based on the first processing data, and taking the detection result as the second processing data.
In order to better understand the scheme of the identification detection in step S105, the training process of the pre-trained indicator light identification neural network model may include:
constructing a second training sample set, wherein the second training sample set comprises a plurality of second pictures of the marking equipment indicator lamps, and the second pictures are collected by the robot in advance in the inspection process;
and training a plurality of second pictures in the second training sample set by adopting a yolo algorithm to obtain each parameter of the equipment indicator light recognition neural network model.
The main feature of the YOLO algorithm is its very high operation speed, which makes it usable in real-time systems. YOLO differs from traditional detection algorithms, which slide a window across the image to find targets: it directly uses a single convolutional neural network to predict multiple bounding boxes and class probabilities.
Referring to fig. 3, in an application scenario, training a picture by using the yolo algorithm may specifically include:
collecting a number of (for example, 3000) pictures of key machine room equipment, and marking the key equipment in the pictures manually;
establishing a folder, and placing the label files and pictures into it according to the training requirements;
downloading the pre-trained weight file yolov3.weights from the YOLO official website, converting the label files into YOLO-format files with a conversion script, and dividing them into a training set, a test set and a validation set;
executing the training script to start training, testing the generated model after training is finished, and obtaining the parameters of the key machine room equipment recognition neural network model once the recognition accuracy is confirmed to meet the requirement.
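As a sketch of the label-conversion step (the conversion script itself is not given in the patent), annotations in pixel corner coordinates can be turned into YOLO-format label lines as follows; the corner-coordinate input format is an assumption:

```python
def to_yolo_line(class_id, x1, y1, x2, y2, img_w, img_h):
    """Convert a pixel box (x1, y1, x2, y2) into a YOLO label line.

    YOLO's label format is: class x_center y_center width height,
    with all four values normalized to [0, 1].
    """
    xc = (x1 + x2) / 2 / img_w
    yc = (y1 + y2) / 2 / img_h
    w = (x2 - x1) / img_w
    h = (y2 - y1) / img_h
    return f"{class_id} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"

# Example: a hypothetical server cabinet (class 0) occupying pixels
# (120, 80)-(520, 640) of a 1280x720 picture.
print(to_yolo_line(0, 120, 80, 520, 640, 1280, 720))
```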
Example four
On the basis of the third embodiment, the present embodiment may further include the following:
Further, after the preprocessing result has been identified and detected, it is necessary to determine whether the specified machine room equipment has a fault according to the identification and detection result. Specifically, step S107 may include:
and comparing the number of the indicating lamps corresponding to the equipment faults of the specified machine room, which are pre-stored in the database, with the number of the indicating lamps on the detected key equipment pictures.
In addition, whether the specified machine room equipment has a fault is determined from the recognition and detection result according to the criterion applied in step S109, where step S109 specifically includes:
and if the detected equipment indicator light picture and a prestored indicator light picture display the same fault, sending out a corresponding alarm.
EXAMPLE five
On the basis of the above embodiment, the present embodiment may include the following:
referring to fig. 4, an embodiment of the present invention provides a machine room equipment intelligent identification method based on deep learning, which may include the following steps:
the robot collects key equipment pictures according to a preset coordinate position in the machine room inspection process;
recognizing and positioning key equipment pictures in a machine room by adopting a pre-trained equipment recognition neural network model (a first neural network model);
cutting off and storing the key equipment part in the picture according to the positioning information;
using a pre-trained indicator light recognition neural network model (a second neural network model) to detect the equipment indicator lights on the saved key equipment picture;
comparing the number of the indicating lamps corresponding to the key equipment faults stored in the database in advance with the number of the indicating lamps on the detected key equipment pictures;
and judging whether the equipment fails according to the comparison result.
In some application scenarios, the statistical process for the number of indicator lights corresponding to key equipment faults pre-stored in the database includes: establishing an equipment fault table in the database and writing, for each type of equipment, the equipment name and the number of indicator lights shown when that equipment fails, since this number differs between equipment types.
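A minimal sketch of such a fault table follows. The database engine (SQLite), the schema and the sample rows are illustrative assumptions; the patent only specifies that equipment names and fault-time indicator light counts are written to a table:

```python
import sqlite3

conn = sqlite3.connect("room_faults.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS equipment_fault ("
    "  device_name TEXT PRIMARY KEY,"
    "  fault_light_count INTEGER NOT NULL)"
)
# Hypothetical entries: each device type has its own fault light count.
conn.executemany(
    "INSERT OR REPLACE INTO equipment_fault VALUES (?, ?)",
    [("core_switch", 2), ("ups_cabinet", 3)],
)
conn.commit()

def fault_light_count(device_name):
    """Return the prestored fault-time light count for a device, if any."""
    row = conn.execute(
        "SELECT fault_light_count FROM equipment_fault WHERE device_name = ?",
        (device_name,),
    ).fetchone()
    return row[0] if row else None
```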
In addition, the key equipment corresponds to one model and the indicator lights to another; that is, the equipment recognition neural network model and the indicator light recognition neural network model are different models. If there are too many equipment types in the machine room, or the indicator lights vary too much in size and shape, the number of models can be increased so that each has its own counterpart, completing the detection of the key equipment.
The embodiment of the invention is based on the deep-learning YOLO target detection algorithm. An inspection robot collects images of the key equipment in the machine room; the position of the key equipment and the corresponding equipment name are then detected; on this basis, indicator light detection is performed on the key equipment, finally producing the key equipment model and the corresponding indicator light detection result. The indicator light information prestored in the database for a failed key device is compared with the detected information, and an equipment failure alarm signal is finally output.
EXAMPLE six
On the basis of the fifth embodiment, the present embodiment may include the following:
the present invention will be described in further detail in order to make the objects, technical solutions and advantages of the present invention more apparent.
The robot collecting key equipment pictures at preset coordinate positions during machine room inspection may specifically include:
when the robot inspects the machine room, some coordinate points on the map correspond to key machine room equipment; when the robot moves to these coordinate points, the key equipment can be photographed and saved by controlling the robot's onboard camera.
The method adopts a pre-trained key equipment recognition neural network model to recognize and position key equipment pictures in a machine room, and specifically comprises the following steps:
inputting the saved image into the neural network model for identifying key equipment to obtain the types of the equipment and the position information on the image.
Cutting off and storing the key equipment part in the picture according to the positioning information, which may specifically include:
and cutting off and saving the device part on the image by using the image cutting function module according to the position information of the device on the image.
Using the pre-trained indicator light recognition neural network model to detect the equipment indicator lights on the saved key equipment picture specifically comprises:
and inputting the cut pictures of the key equipment parts into an indicator lamp recognition neural network model to obtain the detected number of indicator lamps on the equipment.
Comparing the number of indicator lamps corresponding to the key device failure pre-stored in the database with the number of indicator lamps on the detected key device picture, specifically, the method may include:
different types of machine room equipment show different numbers of indicator lights when a fault occurs, so the number of indicator lights corresponding to each equipment type's fault is stored in the database in advance.
Judging whether the equipment fails according to the comparison result specifically includes:
and if the detected number of the equipment indicator lamps is consistent with the number of the indicator lamps which are prestored in the database when the equipment fails, sending an equipment failure alarm signal.
EXAMPLE seven
On the basis of the above embodiment, the present embodiment may include the following:
referring to fig. 5, an embodiment of the present invention provides a machine room equipment intelligent identification system 500 based on deep learning, which includes:
the acquisition module 501 is used for acquiring the working state information of equipment in a specified machine room;
a first data processing module 502, which performs data processing on the working state information by using a first neural network model to obtain first processing data;
a second data processing module 503, configured to process the first processing data using a second neural network model to obtain second processing data;
a comparison module 504, configured to compare the second processing data with pre-stored reference data;
and a processing module 505 for performing corresponding operations based on the comparison result.
The pre-stored reference data is the corresponding indicator light information when different equipment is in fault, which is pre-established in the database.
Example eight
On the basis of the above embodiment, the present embodiment may include the following:
referring to fig. 6, an embodiment of the present invention provides a machine room equipment intelligent identification system 600 based on deep learning, including:
the acquiring module 601 is used for acquiring image characteristics of key equipment in a machine room according to a preset coordinate position by using the inspection robot;
the first identification module 602 is configured to identify a device picture acquired by the robot by using a pre-trained device identification neural network model, and output a device type and a coordinate position of the device in the picture;
the device picture cutting module 603 is used for cutting and storing the device part in the picture according to the coordinate position of the device in the picture;
a second identification module 604, configured to use a pre-trained indicator light recognition neural network model to identify the indicator lights on the cropped device picture;
the determining module 605 is configured to compare the identified indicator light of the equipment in the machine room with an indicator light of the equipment in the database when the equipment fails, and determine whether the equipment fails.
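The module structure of system 600 can be sketched as a class skeleton; the acquirer and detector objects and their interfaces are assumptions for illustration, not part of the patent text:

```python
class RoomEquipmentRecognitionSystem:
    """Schematic skeleton of system 600; interfaces are assumptions."""

    def __init__(self, acquirer, device_model, light_model, fault_table):
        self.acquirer = acquirer           # acquiring module 601
        self.device_model = device_model   # first identification module 602
        self.light_model = light_model     # second identification module 604
        self.fault_table = fault_table     # reference data for module 605

    def run(self, coordinate):
        picture = self.acquirer.capture(coordinate)              # module 601
        name, (x, y, w, h) = self.device_model.detect(picture)   # module 602
        crop = picture[y:y + h, x:x + w]                         # cutting module 603
        lights = self.light_model.detect_lights(crop)            # module 604
        expected = self.fault_table.get(name)                    # judging module 605
        return expected is not None and len(lights) == expected
```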
Example nine
On the basis of the above embodiment, the present embodiment may include the following:
the acquisition module in the embodiment of the present invention may include:
the coordinate acquisition module is used for acquiring the coordinates of the specified machine room equipment based on a user instruction;
and the information acquisition module is used for acquiring the working state information of the specified machine room equipment based on the coordinates.
Further, the first data processing module may include:
the recognition positioning module is used for recognizing the picture and positioning the equipment by adopting a pre-trained equipment recognition neural network model;
and the cutting module cuts the equipment part in the picture based on the positioning information and saves the equipment part as first processing data.
Example ten
Referring to fig. 7, the present embodiment further provides an electronic device 700, where the electronic device 700 includes: at least one processor 701; and a memory 702 communicatively coupled to the at least one processor 701; wherein,
the memory 702 stores instructions executable by the at least one processor 701 to cause the at least one processor 701 to perform the method steps described in the embodiments above.
EXAMPLE eleven
The disclosed embodiments provide a non-volatile computer storage medium having stored thereon computer-executable instructions that may perform the method steps as described in the embodiments above.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of an element does not in some cases constitute a limitation on the element itself.
The foregoing describes preferred embodiments of the present invention, and is intended to provide a clear and concise description of the spirit and scope of the invention, and not to limit the same, but to include all modifications, substitutions, and alterations falling within the spirit and scope of the invention as defined by the appended claims.

Claims (4)

1. A machine room equipment intelligent identification method based on deep learning comprises the following steps:
step S101, collecting working state information of equipment in a specified machine room, wherein the collecting comprises:
acquiring coordinates of the specified machine room equipment based on a user instruction;
acquiring working state information of the specified machine room equipment based on the coordinates, wherein the working state information refers to pictures containing indicator lamps of the specified machine room equipment;
step S103, processing the working state information with a first neural network model to obtain first processing data, wherein the processing comprises:
adopting a pre-trained equipment recognition neural network model to recognize the picture and position the equipment;
cropping the device portion in the picture based on the positioning information, including:
dividing an image containing the machine room equipment into an S×S grid, and acquiring each of the S×S grid cells;
predicting multiple bounding boxes and confidence scores in each grid cell, each bounding box containing 5 values: x, y, w, h and confidence, wherein the (x, y) coordinates represent the center of the bounding box relative to the grid cell, w represents the width, h represents the height, and confidence represents the intersection over union between the predicted box and the actual bounding box;
outputting the machine room equipment type and the coordinate position of the equipment in the picture according to the confidence scores and the prediction probability of each grid cell, and cropping and storing the equipment part of the picture as first processing data with the OpenCV computer vision library according to the obtained (x, y) coordinates and w, h;
step S105, processing the first processing data with a second neural network model to obtain second processing data, wherein the processing comprises:
using a pre-trained indicator light recognition neural network model to detect the equipment indicator lights based on the first processing data;
step S107, comparing the second processing data with pre-stored reference data, comprising:
comparing the number of indicating lamps corresponding to the equipment faults of the specified machine room, which are pre-stored in a database, with the number of indicating lamps on the detected key equipment pictures;
step S109, based on the comparison result, executing corresponding operations, including:
if the detected equipment indicator light picture and a prestored indicator light picture display the same fault, sending a corresponding alarm;
wherein pre-storing in the database the number of indicator lights corresponding to faults of the specified machine room equipment comprises:
establishing an equipment fault table in the database and writing, for each type of equipment, the equipment name and the number of indicator lights shown when that equipment fails, since this number differs between equipment types.
2. The method of claim 1, wherein the device-recognition neural network model training process comprises:
constructing a first training sample set, wherein the first training sample set comprises a plurality of pictures marked with specified machine room equipment, and the pictures are acquired by a robot in the inspection process in advance;
and training on the plurality of pictures in the first training sample set with the YOLO algorithm to obtain the parameters of the specified machine room equipment recognition neural network model.
3. The method of claim 1, wherein the indicator light recognition neural network model training process specifically comprises:
constructing a second training sample set, wherein the second training sample set comprises a plurality of second pictures of the marking equipment indicator lamps, and the second pictures are collected by the robot in advance in the inspection process;
and training a plurality of second pictures in the second training sample set by adopting a yolo algorithm to obtain each parameter of the equipment indicator light recognition neural network model.
4. A machine room equipment intelligent identification system based on deep learning, comprising:
an acquisition module for collecting the working state information of equipment in a specified machine room, the acquisition module comprising:
a coordinate acquisition module, which acquires coordinates of the specified machine room equipment based on a user instruction;
an information acquisition module, which acquires the working state information of the specified machine room equipment based on the coordinates, wherein the working state information refers to a picture containing the indicator lights of the specified machine room equipment;
a first data processing module, configured to perform data processing on the operating state information by using a first neural network model to obtain first processing data, where the first data processing module includes:
a recognition and positioning module, which recognizes the picture and locates the device using a pre-trained device recognition neural network model;
a cropping module, which crops the device part out of the picture based on the positioning information and saves it as the first processing data, wherein the cropping comprises:
dividing an image containing the machine room equipment into an S×S grid, and acquiring each of the S×S grid cells;
predicting multiple bounding boxes and confidence scores in each grid cell, each bounding box containing 5 values: x, y, w, h and confidence, wherein the (x, y) coordinates represent the center of the bounding box relative to the grid cell, w represents the width, h represents the height, and confidence represents the intersection over union between the predicted box and the actual bounding box;
outputting the machine room equipment type and the coordinate position of the equipment in the picture according to the confidence scores and the prediction probability of each grid cell, and cropping and storing the equipment part of the picture as the first processing data with the OpenCV computer vision library according to the obtained (x, y) coordinates and w, h;
a second data processing module, which uses a second neural network model to process the first processing data to obtain second processing data, and comprises:
using a pre-trained indicator light recognition neural network model to detect the equipment indicator lights based on the first processing data;
a comparison module for comparing the second processing data with pre-stored reference data, comprising:
comparing the number of indicating lamps corresponding to the equipment faults of the specified machine room, which are pre-stored in a database, with the number of indicating lamps on the detected key equipment pictures;
wherein pre-storing in the database the number of indicator lights corresponding to faults of the specified machine room equipment comprises:
establishing an equipment fault table in the database and writing, for each type of equipment, the equipment name and the number of indicator lights shown when that equipment fails, since this number differs between equipment types;
the processing module is used for executing corresponding operations based on the comparison result, and comprises:
and if the detected equipment indicator light picture and a prestored indicator light picture display the same fault, sending out a corresponding alarm.
CN202011296942.0A 2020-11-19 2020-11-19 Intelligent machine room equipment identification method and system based on deep learning Active CN112115927B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011296942.0A CN112115927B (en) 2020-11-19 2020-11-19 Intelligent machine room equipment identification method and system based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011296942.0A CN112115927B (en) 2020-11-19 2020-11-19 Intelligent machine room equipment identification method and system based on deep learning

Publications (2)

Publication Number Publication Date
CN112115927A CN112115927A (en) 2020-12-22
CN112115927B (en) 2021-03-19

Family

ID=73794245

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011296942.0A Active CN112115927B (en) 2020-11-19 2020-11-19 Intelligent machine room equipment identification method and system based on deep learning

Country Status (1)

Country Link
CN (1) CN112115927B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112287915B (en) * 2020-12-28 2021-04-16 北京蒙帕信创科技有限公司 Equipment fault warning method and system based on deep learning
CN113393523B (en) * 2021-06-04 2023-03-14 上海蓝色帛缔智能工程有限公司 Method and device for automatically monitoring computer room image and electronic equipment
CN113705606A (en) * 2021-07-21 2021-11-26 中盈优创资讯科技有限公司 Intelligent machine room equipment cross-dimension quality inspection method and device based on target inspection
CN113525183A (en) * 2021-07-29 2021-10-22 北京南凯自动化系统工程有限公司 Railway contact net barrier cleaning system and method
CN114240155A (en) * 2021-12-17 2022-03-25 中国工商银行股份有限公司 Method and device for evaluating health degree of equipment in machine room and computer equipment
CN116319501B (en) * 2023-05-25 2023-09-05 深圳市英创立电子有限公司 Network system for obtaining equipment operation parameters
CN117035747B (en) * 2023-10-09 2024-02-02 国网山东省电力公司博兴县供电公司 Multi-system fault diagnosis processing method, system, equipment and medium for machine room
CN117156108B (en) * 2023-10-31 2024-03-15 中海物业管理有限公司 Enhanced display system and method for machine room equipment monitoring picture

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6714977B1 (en) * 1999-10-27 2004-03-30 Netbotz, Inc. Method and system for monitoring computer networks and equipment
CN111080775A (en) * 2019-12-19 2020-04-28 深圳市原创科技有限公司 Server routing inspection method and system based on artificial intelligence
CN111626139B (en) * 2020-04-30 2023-09-05 杭州优云科技有限公司 Accurate detection method for fault information of IT equipment in machine room

Also Published As

Publication number Publication date
CN112115927A (en) 2020-12-22


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210812

Address after: 200120 unit 706, building 6, hongqiaohui, Lane 990, Shenchang Road, Minhang District, Shanghai

Patentee after: Shanghai mengpa Information Technology Co.,Ltd.

Address before: 1110, 1 / F, building a, 98 Guangqu Road, Chaoyang District, Beijing 100022

Patentee before: Beijing mengpa Xinchuang Technology Co.,Ltd.

CP03 Change of name, title or address

Address after: 200137 room 108, block a, building 8, No. 1879, jiangxinsha Road, Pudong New Area, Shanghai

Patentee after: Shanghai mengpa Intelligent Technology Co.,Ltd.

Address before: 200120 unit 706, building 6, hongqiaohui, Lane 990, Shenchang Road, Minhang District, Shanghai

Patentee before: Shanghai mengpa Information Technology Co.,Ltd.