Disclosure of Invention
In view of the defects in the prior art, the present invention aims to provide an intelligent machine room equipment identification method and system based on deep learning, which avoids the need for detection by manual analysis and thereby avoids detection misjudgments and missed judgments.
In a first aspect, the invention provides a machine room equipment intelligent identification method based on deep learning, which comprises the following steps:
step S101, collecting working state information of equipment in a specified machine room, wherein the collecting comprises the following steps:
acquiring coordinates of the specified machine room equipment based on a user instruction;
acquiring working state information of the specified machine room equipment based on the coordinates;
step S103, performing data processing on the working state information by using a first neural network model to obtain first processing data, wherein the processing comprises:
recognizing the picture and locating the equipment by using a pre-trained equipment recognition neural network model;
cropping the equipment portion out of the picture based on the positioning information, and saving it as the first processing data;
step S105, processing the first processing data by using a second neural network model to obtain second processing data, wherein the processing comprises the following steps:
detecting the equipment indicator lights in the first processing data by using a pre-trained indicator-light recognition neural network model;
step S107, comparing the second processing data with prestored reference data;
and step S109, executing corresponding operation based on the comparison result.
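The five steps above can be sketched as a minimal pipeline; the function names and the stubbed model outputs below are illustrative assumptions for exposition, not part of the claimed method:

```python
# Minimal sketch of steps S101-S109; the two model calls are stubs.

def collect_state(coords):
    """S101: return a picture of the equipment at the given coordinates (stub)."""
    return {"coords": coords, "picture": "raw_picture"}

def first_model(state):
    """S103: locate the equipment in the picture and crop it out (stub)."""
    return {"cropped": state["picture"]}

def second_model(first_data):
    """S105: detect indicator lights on the cropped equipment picture (stub)."""
    return {"indicator_count": 3}

def identify(coords, reference_count):
    state = collect_state(coords)            # step S101
    first_data = first_model(state)          # step S103
    second_data = second_model(first_data)   # step S105
    matches = second_data["indicator_count"] == reference_count  # step S107
    return "alarm" if matches else "ok"      # step S109

# A match against the pre-stored fault count triggers the alarm.
print(identify((12.5, 3.0), reference_count=3))  # → alarm
```

Per the later embodiments, the comparison in step S107 is against the indicator-light count pre-stored for the equipment's fault state, so a match means a fault is present.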
In one embodiment, the working state information is a picture containing the indicator lights of the specified machine room equipment.
In one embodiment, the step S107 specifically includes:
and comparing the number of indicator lights corresponding to a fault of the specified machine room equipment, which is pre-stored in a database, with the number of indicator lights detected in the key equipment picture.
In one embodiment, pre-storing in the database the number of indicator lights corresponding to a fault of the specified machine room equipment includes:
and establishing an equipment fault table in the database, and, since different equipment lights a different number of indicator lights on failure, writing into the table each equipment name together with the number of indicator lights lit when that equipment fails.
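Such an equipment fault table can be sketched with an in-memory SQLite database; the table name, column names, and device entries below are illustrative assumptions:

```python
import sqlite3

# Equipment fault table: each row records a device name and the number of
# indicator lights that are lit when that device fails.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE equipment_fault (device_name TEXT PRIMARY KEY, fault_light_count INTEGER)"
)
conn.executemany(
    "INSERT INTO equipment_fault VALUES (?, ?)",
    [("core_switch", 2), ("ups_cabinet", 4), ("air_conditioner", 1)],
)
conn.commit()

def fault_light_count(device_name):
    """Look up the pre-stored indicator-light count for a device's fault state."""
    row = conn.execute(
        "SELECT fault_light_count FROM equipment_fault WHERE device_name = ?",
        (device_name,),
    ).fetchone()
    return row[0] if row else None

print(fault_light_count("ups_cabinet"))  # → 4
```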
In one embodiment, the training process of the equipment recognition neural network model includes:
constructing a first training sample set, wherein the first training sample set comprises a plurality of pictures marked with specified machine room equipment, and the pictures are acquired by a robot in the inspection process in advance;
and training on the plurality of pictures in the first training sample set by using the YOLO algorithm to obtain the parameters of the specified machine room equipment recognition neural network model.
In one embodiment, the training process of the indicator light recognition neural network model specifically includes:
constructing a second training sample set, wherein the second training sample set comprises a plurality of second pictures with labeled equipment indicator lights, the second pictures being collected by the robot in advance during inspection;
and training on the plurality of second pictures in the second training sample set by using the YOLO algorithm to obtain the parameters of the equipment indicator-light recognition neural network model.
In an embodiment, the step S109 specifically includes:
and if the detected equipment indicator light picture shows the same fault as a pre-stored indicator light picture, issuing a corresponding alarm.
In a second aspect, the present invention further provides a machine room equipment intelligent identification system based on deep learning, which includes:
the acquisition module is used for acquiring the working state information of the equipment in the specified machine room;
the first data processing module is used for processing the data of the working state information by using a first neural network model to obtain first processing data;
the second data processing module is used for processing the first processing data by using a second neural network model to obtain second processing data;
the comparison module is used for comparing the second processing data with prestored reference data;
and the processing module is used for executing corresponding operation based on the comparison result.
In one embodiment, the acquisition module comprises:
the coordinate acquisition module is used for acquiring the coordinates of the specified machine room equipment based on a user instruction;
and the information acquisition module is used for acquiring the working state information of the specified machine room equipment based on the coordinates.
In one embodiment, the first data processing module comprises:
the recognition positioning module is used for recognizing the picture and positioning the equipment by adopting a pre-trained equipment recognition neural network model;
and the cropping module is used for cropping the equipment portion out of the picture based on the positioning information and saving it as the first processing data.
Compared with the prior art, the method and system detect the machine room equipment through the first neural network model and the second neural network model to judge whether the machine room equipment has a fault, which improves the detection efficiency of the machine room equipment and avoids detection misjudgments and missed judgments.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention will be described in further detail with reference to the accompanying drawings, and it is apparent that the described embodiments are only a part of the embodiments of the present invention, not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the examples of the present invention and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise, and "a plurality" typically includes at least two.
It should be understood that although the terms first, second, third, etc. may be used in embodiments of the present invention to describe various elements, these elements should not be limited by these terms. These terms are used only to distinguish one element from another. For example, a first element could also be referred to as a second element, and similarly a second element could be referred to as a first element, without departing from the scope of embodiments of the present invention.
Alternative embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Example one
Referring to fig. 1, an embodiment of the present invention provides a machine room equipment intelligent identification method based on deep learning, including:
step S101, collecting working state information of specified machine room equipment, wherein the working state information preferably refers to a picture containing an indicator light of the specified machine room equipment;
step S103, carrying out data processing on the working state information by using a first neural network model to obtain first processing data;
step S105, processing the first processing data by using a second neural network model to obtain second processing data;
step S107, comparing the second processing data with prestored reference data;
and step S109, executing corresponding operation based on the comparison result.
Example two
On the basis of the first embodiment, the implementation may further include the following:
referring to fig. 2, the working state information of the equipment in the specified machine room may be collected either manually or by a robot. In an application scenario in which a robot performs the collection, step S101 may specifically include:
acquiring coordinates of the specified machine room equipment based on a user instruction;
and acquiring the working state information of the specified machine room equipment (including a picture of its indicator lights) based on the coordinates.
In order to ensure that only the specified machine room equipment is examined during detection, thereby reducing detection errors and improving detection efficiency, the collected picture may be preprocessed. In an application scenario, the preprocessing in step S103 specifically includes:
recognizing the picture and locating the equipment by using a pre-trained equipment recognition neural network model;
and cropping the equipment portion out of the picture based on the positioning information, and saving it as the first processing data.
Cropping the equipment portion out of the picture based on the positioning information may specifically include:
dividing the image containing the machine room key equipment into an S × S grid, and obtaining each cell image in the S × S grid;
predicting several bounding boxes in each grid cell together with a confidence score for each; a bounding box gives the target's position, and the confidence score reflects the prediction's accuracy (zero if the cell contains no target). Each bounding box consists of 5 values: x, y, w, h and confidence. The (x, y) coordinates give the center of the bounding box relative to the grid cell; w and h are the width and height, predicted relative to the whole image; and confidence reflects the intersection over union between the predicted box and the actual bounding box;
and outputting the type of the machine room key equipment and its coordinate position in the picture according to the confidence scores and the class probabilities predicted for each grid cell, then cropping and saving the equipment portion of the picture with the OpenCV computer vision library using the obtained (x, y) coordinates and w, h.
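The confidence score and the crop can be sketched as follows. Boxes are taken as (x, y, w, h) with (x, y) the top-left corner in pixels (assumed already converted from YOLO's centre-relative output), and the image is a list of pixel rows, which mirrors the row/column slicing that OpenCV's array images support; the coordinates and image size are illustrative:

```python
def iou(box_a, box_b):
    """Intersection over union of two (x, y, w, h) boxes; this is what the
    confidence score reflects when a target is present."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))  # overlap width
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))  # overlap height
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def crop_device(image, box):
    """Cut the device region out of the picture and return it."""
    x, y, w, h = box
    return [row[x:x + w] for row in image[y:y + h]]

image = [[0] * 640 for _ in range(480)]      # stand-in 640x480 picture
device_box = (100, 50, 200, 120)             # predicted (x, y, w, h)
print(iou(device_box, (110, 60, 200, 120)))  # overlap with a nearby box
cropped = crop_device(image, device_box)
print(len(cropped), len(cropped[0]))         # → 120 200
```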
To better understand the preprocessing scheme in step S103, the training process of the pre-trained equipment recognition neural network model may include:
constructing a first training sample set, wherein the first training sample set comprises a plurality of pictures marked with specified machine room equipment, and the pictures are acquired by a robot in the inspection process in advance;
and training on the plurality of pictures in the first training sample set by using the YOLO algorithm to obtain the parameters of the specified machine room equipment recognition neural network model.
The plurality of pictures labeled with the specified machine room equipment is selected according to need: if the machine room contains several kinds of key equipment that all require detection and alarming, each kind of equipment must be photographed. The collection in advance mainly serves the construction of the training sample set: the robot can be manually driven to the position of each piece of equipment and its onboard camera operated to photograph it from multiple angles, yielding pictures containing the specified machine room equipment.
EXAMPLE III
On the basis of the second embodiment, the present embodiment may further include the following:
after preprocessing the collected working state information (picture) of the specified machine room equipment, the preprocessing result is identified and detected; specifically, step S105 may include:
and detecting the equipment indicator lights in the first processing data by using a pre-trained indicator-light recognition neural network model, and taking the detection result as the second processing data.
In order to better understand the identification and detection scheme in step S105, the training process of the pre-trained indicator-light recognition neural network model may include:
constructing a second training sample set, wherein the second training sample set comprises a plurality of second pictures of the marking equipment indicator lamps, and the second pictures are collected by the robot in advance in the inspection process;
and training a plurality of second pictures in the second training sample set by adopting a yolo algorithm to obtain each parameter of the equipment indicator light recognition neural network model.
The main characteristic of YOLO is its very high operating speed, which makes it usable in real-time systems. YOLO differs from traditional detection algorithms, which use a sliding window to search for the target: it directly uses a single convolutional neural network to predict multiple bounding boxes and class probabilities.
Referring to fig. 3, in an application scenario, training a picture by using the yolo algorithm may specifically include:
collecting a plurality of (for example, 3000) pictures of machine room key equipment, and manually labeling the key equipment in the pictures;
creating a folder, and placing the label files and pictures into it according to the training requirements;
downloading the pre-trained weight file yolov3.weights from the YOLO official website, converting the label files into YOLO-format files with a conversion script, and splitting the data into a training set, a test set and a validation set;
and executing the training script to start training; after training finishes, testing the generated model, and once the recognition accuracy is confirmed to meet the requirements, obtaining the parameters of the machine room key equipment recognition neural network model.
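The label-conversion step can be sketched for a single bounding box; a YOLO-format label line stores the class index plus the box's centre coordinates and size, all normalised to the image dimensions (the pixel values below are illustrative):

```python
def to_yolo_label(class_id, box, img_w, img_h):
    """Convert an (x_min, y_min, x_max, y_max) pixel annotation into a
    YOLO-format label line: class cx cy w h, each normalised to [0, 1]."""
    x_min, y_min, x_max, y_max = box
    cx = (x_min + x_max) / 2 / img_w   # normalised centre x
    cy = (y_min + y_max) / 2 / img_h   # normalised centre y
    w = (x_max - x_min) / img_w        # normalised width
    h = (y_max - y_min) / img_h        # normalised height
    return f"{class_id} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}"

# A 640x480 picture with one labeled device (class 0) at pixels (100,50)-(300,170).
print(to_yolo_label(0, (100, 50, 300, 170), 640, 480))
# → 0 0.312500 0.229167 0.312500 0.250000
```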
Example four
On the basis of the third embodiment, the present embodiment may further include the following:
further, after the preprocessing result has been identified and detected, whether the specified machine room equipment has a fault must be judged from the identification and detection result; specifically, step S107 may include:
and comparing the number of indicator lights corresponding to a fault of the specified machine room equipment, which is pre-stored in the database, with the number of indicator lights detected in the key equipment picture.
In addition, whether the specified machine room equipment has a fault may be determined according to the judgment criterion in step S109, where step S109 specifically includes:
and if the detected equipment indicator light picture shows the same fault as a pre-stored indicator light picture, issuing a corresponding alarm.
EXAMPLE five
On the basis of the above embodiment, the present embodiment may include the following:
referring to fig. 4, an embodiment of the present invention provides a machine room equipment intelligent identification method based on deep learning, which may include the following steps:
the robot collects key equipment pictures according to preset coordinate positions during machine room inspection;
recognizing and locating the machine room key equipment pictures by using a pre-trained equipment recognition neural network model (the first neural network model);
cropping and saving the key equipment portion of the picture according to the positioning information;
detecting the equipment indicator lights in the saved key equipment pictures by using a pre-trained indicator-light recognition neural network model (the second neural network model);
comparing the number of indicator lights corresponding to a key equipment fault, pre-stored in the database, with the number of indicator lights detected in the key equipment pictures;
and judging whether the equipment fails according to the comparison result.
In some application scenarios, compiling the number of indicator lights corresponding to key equipment faults pre-stored in the database includes: establishing an equipment fault table in the database and, since different equipment lights a different number of indicator lights on failure, writing into the table each equipment name together with the number of indicator lights lit when that equipment fails.
In addition, the key equipment corresponds to one model and the indicator lights to another; that is, the equipment recognition neural network model and the indicator-light recognition neural network model are different models. If the machine room contains many equipment types, or the indicator lights vary greatly in size and shape, the number of models can be increased so that each handles its own case, thereby completing the detection of the key equipment.
The embodiment of the invention is based on the deep-learning YOLO target detection algorithm: an inspection robot collects images of the key equipment in the machine room; the positions of the key equipment and the corresponding equipment names are detected; on that basis, indicator-light detection is performed on the key equipment, generating the key equipment model and the indicator-light detection result corresponding to it; the post-fault indicator-light information pre-stored in the database is compared with the detected information; and finally an equipment fault alarm signal is output.
EXAMPLE six
On the basis of the fifth embodiment, the present embodiment may include the following:
the present invention will be described in further detail in order to make the objects, technical solutions and advantages of the present invention more apparent.
The robot collecting key equipment pictures according to preset coordinate positions during machine room inspection may specifically include:
while the robot inspects the machine room, some key equipment sits at certain coordinate points on the map; when the robot moves to these coordinate points, the robot's onboard camera can be controlled to photograph and save the key equipment.
Recognizing and locating machine room key equipment pictures by using the pre-trained key equipment recognition neural network model specifically comprises:
inputting the saved image into the key equipment recognition neural network model to obtain the equipment types and their position information in the image.
Cropping and saving the key equipment portion of the picture according to the positioning information may specifically include:
and cropping and saving the equipment portion of the image with an image cropping function module, according to the equipment's position information in the image.
Detecting the equipment indicator lights on the saved key equipment pictures by using the pre-trained indicator-light recognition neural network model specifically comprises:
and inputting the cropped pictures of the key equipment portions into the indicator-light recognition neural network model to obtain the detected number of indicator lights on the equipment.
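Counting the indicator lights detected on one cropped equipment picture can be sketched as filtering the model's raw detections by confidence; the detection tuples and the threshold below are illustrative assumptions about the model's output, not its actual format:

```python
def count_indicator_lights(detections, conf_threshold=0.5):
    """Count detections above a confidence threshold; each detection is
    assumed to be an (x, y, w, h, confidence) tuple."""
    return sum(1 for *_, conf in detections if conf >= conf_threshold)

# Hypothetical raw detections on one cropped equipment picture.
detections = [
    (10, 20, 8, 8, 0.92),
    (40, 20, 8, 8, 0.87),
    (70, 20, 8, 8, 0.31),  # low-confidence detection, discarded
]
print(count_indicator_lights(detections))  # → 2
```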
Comparing the number of indicator lights corresponding to a key equipment fault pre-stored in the database with the number of indicator lights detected in the key equipment picture may specifically include:
different machine room equipment lights a different number of indicator lights when a fault occurs, so the indicator-light count corresponding to each equipment's fault is stored in the database in advance, and the detected count is compared against it.
Judging whether the equipment has a fault according to the comparison result specifically includes:
and if the detected number of equipment indicator lights matches the number pre-stored in the database for that equipment's fault, sending an equipment fault alarm signal.
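The comparison and judgment above reduce to a count check; a minimal sketch, with an in-memory dictionary standing in for the database fault table (device names and counts are illustrative):

```python
# Pre-stored indicator-light counts for each device's fault state
# (stand-in for the database fault table).
FAULT_LIGHT_COUNTS = {"core_switch": 2, "ups_cabinet": 4}

def judge(device_name, detected_light_count):
    """Send an alarm signal if the detected indicator-light count matches
    the pre-stored count for this device's fault state."""
    expected = FAULT_LIGHT_COUNTS.get(device_name)
    if expected is not None and detected_light_count == expected:
        return f"ALARM: {device_name} fault"
    return "OK"

print(judge("ups_cabinet", 4))  # → ALARM: ups_cabinet fault
print(judge("ups_cabinet", 3))  # → OK
```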
EXAMPLE seven
On the basis of the above embodiment, the present embodiment may include the following:
referring to fig. 5, an embodiment of the present invention provides a machine room equipment intelligent identification system 500 based on deep learning, which includes:
the acquisition module 501 is used for acquiring the working state information of equipment in a specified machine room;
a first data processing module 502, configured to perform data processing on the working state information by using the first neural network model to obtain first processing data;
a second data processing module 503, configured to process the first processing data using a second neural network model to obtain second processing data;
a comparison module 504, configured to compare the second processing data with pre-stored reference data;
and a processing module 505 for performing corresponding operations based on the comparison result.
The pre-stored reference data is the indicator-light information, pre-established in the database, that corresponds to faults of the different equipment.
Example eight
On the basis of the above embodiment, the present embodiment may include the following:
referring to fig. 6, an embodiment of the present invention provides a machine room equipment intelligent identification system 600 based on deep learning, including:
the acquiring module 601 is used for acquiring images of the machine room key equipment with the inspection robot according to preset coordinate positions;
the first identification module 602 is configured to identify a device picture acquired by the robot by using a pre-trained device identification neural network model, and output a device type and a coordinate position of the device in the picture;
the equipment picture cropping module 603 is used for cropping and saving the equipment portion of the picture according to the equipment's coordinate position in the picture;
a second identification module 604, configured to identify the indicator lights on the cropped equipment picture by using a pre-trained indicator-light recognition neural network model;
the determining module 605 is configured to compare the identified indicator light of the equipment in the machine room with an indicator light of the equipment in the database when the equipment fails, and determine whether the equipment fails.
Example nine
On the basis of the above embodiment, the present embodiment may include the following:
the acquisition module in the embodiment of the present invention may include:
the coordinate acquisition module is used for acquiring the coordinates of the specified machine room equipment based on a user instruction;
and the information acquisition module is used for acquiring the working state information of the specified machine room equipment based on the coordinates.
Further, the first data processing module may include:
the recognition positioning module is used for recognizing the picture and positioning the equipment by adopting a pre-trained equipment recognition neural network model;
and the cropping module is used for cropping the equipment portion out of the picture based on the positioning information and saving it as the first processing data.
Example ten
Referring to fig. 7, the present embodiment further provides an electronic device 700, where the electronic device 700 includes: at least one processor 701; and a memory 702 communicatively coupled to the at least one processor 701; wherein,
the memory 702 stores instructions executable by the at least one processor 701 to cause the at least one processor 701 to perform the method steps described in the embodiments above.
EXAMPLE eleven
The disclosed embodiments provide a non-volatile computer storage medium having stored thereon computer-executable instructions that may perform the method steps as described in the embodiments above.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the internet using an internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of an element does not in some cases constitute a limitation on the element itself.
The foregoing describes preferred embodiments of the present invention and is not intended to limit it; the invention is intended to cover all modifications, substitutions, and alterations falling within its spirit and scope as defined by the appended claims.