CN111923042A - Virtualization processing method and system for cabinet grid and inspection robot - Google Patents
Virtualization processing method and system for cabinet grid and inspection robot
- Publication number
- CN111923042A (application CN202010706984.0A)
- Authority
- CN
- China
- Prior art keywords
- camera
- offset
- target
- image information
- mechanical arm
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1602—Programme controls characterised by the control system, structure, architecture
- B25J9/1628—Programme controls characterised by the control loop
- B25J9/163—Programme controls characterised by the control loop learning, adaptive, model based, rule based expert control
- B25J9/1656—Programme controls characterised by programming, planning systems for manipulators
- B25J9/1664—Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
Landscapes
- Engineering & Computer Science (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Automation & Control Theory (AREA)
- Manipulator (AREA)
Abstract
The invention belongs to the technical field of rail transit and discloses a method and a system for blurring a cabinet grid, together with an inspection robot. The blurring method comprises the following steps. Step S1: controlling a mechanical arm to drive a first camera to move to a first position according to a work plan. Step S2: acquiring first image information of a target in the cabinet through the first camera, identifying the target in the cabinet according to the first image information, and obtaining a first offset between the target and the first camera. Step S3: obtaining a fourth offset between a second camera and the target according to the first offset, a second offset between the first camera and the mechanical arm, and a third offset between the second camera and the mechanical arm, and controlling the mechanical arm to drive the second camera to move to a second position according to the fourth offset. Step S4: acquiring second image information of the target through the second camera, through the grid of the cabinet. The current state of the target can thus be identified during inspection without opening the cabinet door.
Description
Technical Field
The invention belongs to the technical field of rail transit, and particularly relates to a cabinet grid blurring processing method and system, and to an inspection robot.
Background
With the rapid development of Chinese railway construction, the high-speed, high-density operation of railway trains places stricter requirements on the safety and operation-and-maintenance management of railway signal equipment and systems. Signal relay stations along high-speed railway sections are mostly unattended. Because traffic to these stations is inconvenient and night patrols pose safety hazards for signal-equipment maintenance and emergency handling, signal inspection personnel cannot fully grasp the operating condition of unattended relay-station equipment in real time, and blind spots are likely to appear in the monitored application state of fixed-point signal equipment.
The intelligent patrol system for unattended signal relay stations of the national railway mainly performs automatic patrol and monitoring of railway electrical equipment rooms, monitoring and alarming in real time on the technical indicators of signal equipment, key devices, and instruments. It greatly improves the monitoring and operation-and-maintenance level of high-speed railway signal equipment, enhances security control of key high-speed railway sites, shortens equipment fault handling delays, and safeguards the safe operation of the high-speed railway.
However, practice shows that existing inspection robots have no grid blurring capability: they can only inspect cabinets without grid shielding and can only display the pictures taken by the camera, so the type, position, and indicator-light state of each board card must be identified manually. Existing inspection robots therefore cannot effectively inspect indoor cabinets with grid doors, including the type and position of the board cards and the state of the indicator lights, and they lack an alarm mechanism for fault abnormalities. At present the most common workaround is to remove the cabinet door outright, but this carries the risk of misoperation by unauthorized persons.
In addition, the existing inspection robot is controlled only by an industrial personal computer, whose CPU can hardly run today's complex deep learning algorithms, so the image processing capability of the inspection robot is limited.
Therefore, there is an urgent need to develop a cabinet grid blurring processing method and system, and an inspection robot, that overcome the above defects.
Disclosure of Invention
In view of the above problems, the present invention provides a method for blurring a cabinet grid, comprising:
step S1: controlling the mechanical arm to drive the first camera to move to a first position according to the work plan;
step S2: acquiring and obtaining first image information of a target in a cabinet through the first camera, identifying the target in the cabinet according to the first image information, and obtaining a first offset between the target and the first camera;
step S3: obtaining a fourth offset between a second camera and the target according to the first offset, the second offset between the first camera and the mechanical arm and the third offset between the second camera and the mechanical arm, and controlling the mechanical arm to drive the second camera to move to a second position according to the fourth offset;
step S4: and acquiring and obtaining second image information of the target through the second camera through the grid of the cabinet.
The blurring processing method further includes step S5:
and identifying the current state of the target according to the second image information.
In the blurring processing method, step S1 includes:
step S11: mounting the first camera and the second camera on the robotic arm;
step S12: a calibration module of the main control unit obtains the second offset and the third offset through a camera calibration technology;
step S13: receiving and acquiring the position information of the first position according to the working plan through a processing module of the main control unit;
step S14: and the control module of the main control unit controls the mechanical arm to drive the first camera to move to the first position according to the position information of the first position.
The blurring processing method described above, wherein the step S14 further includes: when the mechanical arm moves, the control module controls the first camera to collect video streams, the first camera outputs the video streams to the processing module, and the processing module obtains and displays real-time position information of the mechanical arm according to the video streams.
In the blurring processing method, step S2 includes:
step S21: the control module outputs a first acquisition instruction to the first camera;
step S22: the first camera acquires and obtains the first image information according to the first acquisition instruction and outputs the first image information to the GPU unit;
step S23: the GPU unit marks the target in the first image information through a deep learning algorithm and obtains target information and the first offset;
step S24: the GPU unit outputs the first image information marked with the target, the target information and the first offset to the processing module.
In the blurring processing method, step S3 includes:
step S31: the processing module obtains the fourth offset according to the first offset, the second offset and the third offset;
step S32: the processing module obtains the position information of the second position according to the fourth offset and outputs the position information to the control module;
step S33: the control module controls the mechanical arm to drive the second camera to move to the second position according to the position information of the second position.
The blurring processing method described above, wherein the step S4 further includes: after the second camera reaches the second position, the control module controls the second camera to acquire and obtain the second image information, and the second camera outputs the second image information to the processing module.
The blurring processing method described above, wherein the step S5 further includes: and the processing module outputs the second image information to a background system, and the background system identifies the current state of the target in the second image information through a deep learning algorithm and a traditional image algorithm.
The invention also provides a system for blurring the grids of the cabinet, which comprises:
the first camera is arranged on the mechanical arm;
the main control unit, electrically connected with the mechanical arm and the first camera; after controlling the mechanical arm according to a work plan to drive the first camera to move to a first position, the main control unit controls the first camera to acquire first image information of a target in the cabinet;
the GPU unit is used for identifying a target in the cabinet according to the first image information and acquiring a first offset between the target and the first camera;
the second camera is arranged on the mechanical arm, the main control unit obtains a fourth offset between the second camera and the target according to the first offset, the second offset between the first camera and the mechanical arm and the third offset between the second camera and the mechanical arm, the main control unit controls the mechanical arm to drive the second camera to move to a second position according to the fourth offset, and the main control unit controls the second camera to acquire second image information of the target through the grid of the cabinet.
The blurring processing system further includes a background system, which identifies the current state of the target according to the second image information.
In the above blurring processing system, the main control unit includes:
the calibration module calibrates the first camera and the second camera through a camera calibration technology to obtain the second offset and the third offset;
the processing module is used for receiving and acquiring the position information of the first position according to the work plan;
the control module receives the position information of the first position output by the processing module, and controls the mechanical arm to drive the first camera to move to the first position according to the position information of the first position.
In the blurring processing system, when the mechanical arm moves, the control module controls the first camera to collect a video stream, the first camera outputs the video stream to the processing module, and the processing module obtains real-time position information of the mechanical arm according to the video stream.
In the blurring processing system, after the first camera reaches the first position, the control module outputs a first acquisition instruction to the first camera, the first camera acquires the first image information according to the first acquisition instruction and outputs the first image information to the GPU unit, the GPU unit marks the target in the first image information through a deep learning algorithm and obtains target information and the first offset, and the GPU unit outputs the first image information marked with the target, the target information, and the first offset to the processing module.
In the blurring processing system, the processing module obtains the fourth offset according to the first offset, the second offset, and the third offset, the processing module obtains the position information of the second position according to the fourth offset and outputs the position information to the control module, and the control module controls the mechanical arm to drive the second camera to move to the second position according to the position information of the second position.
In the blurring processing system, after the second camera reaches the second position, the control module outputs a second acquisition instruction to the second camera, the second camera acquires and obtains the second image information, and the second camera outputs the second image information to the processing module.
In the blurring processing system, the processing module outputs the second image information to a background system, and the background system identifies the current state of the target in the second image information through a deep learning algorithm and a conventional image algorithm.
The invention also provides an inspection robot, which comprises:
a mechanical arm;
the blurring processing system of any one of the above, wherein the blurring processing system is connected to the mechanical arm, and the inspection robot collects and identifies, through the mechanical arm and the blurring processing system, the current state of a target shielded by the cabinet grid.
Compared with the prior art, the invention has the following effects:
the GPU unit is arranged in the inspection robot, so that the inspection robot can run a deep learning algorithm with larger network and higher precision, and the real-time processing of video streams with high resolution and high frame number is realized; meanwhile, the invention utilizes the depth camera, and the deep learning algorithm is matched with the mechanical arm, so that the grid blurring function is realized, and the current state of the target can be identified without opening the cabinet door in the inspection process.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
FIG. 1 is a flow chart of a blurring processing method according to the present invention;
FIG. 2 is a flowchart illustrating the substeps of step S1 in FIG. 1;
FIG. 3 is a flowchart illustrating the substeps of step S2 in FIG. 1;
FIG. 4 is a flowchart illustrating the substeps of step S3 in FIG. 1;
FIG. 5 is a schematic diagram of the blurring processing system according to the present invention;
FIG. 6 is a block diagram of a cabinet;
fig. 7 is an identification diagram.
Wherein the reference numerals are:
a first camera: 11
The main control unit: 12
GPU unit: 13
A second camera: 14
A background system: 15
A calibration module: 121
A processing module: 122
A control module: 123
Mechanical arm: 21
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As used herein, the terms "comprising," "including," "having," "containing," and the like are open-ended terms that mean including, but not limited to.
The exemplary embodiments of the present invention and the description thereof are provided to explain the present invention and not to limit the present invention. Additionally, the same or similar numbered elements/components used in the drawings and the embodiments are used to represent the same or similar parts.
The invention uses a deep learning method to accurately identify the type and position of a board card behind a cabinet grid, and uses an industrial camera to obtain a clear picture of the board-card indicator lights with the grid blurred, so that inspection is realized without removing the cabinet door and the state of the board-card indicator lights is identified.
Referring to fig. 1, fig. 1 is a flow chart of a blurring processing method according to the present invention. As shown in fig. 1, the virtualization processing method of the cabinet grid of the present invention includes:
step S1: and controlling the mechanical arm to drive the first camera to move to the first position according to the work plan.
In this embodiment, the robot arm may be a multi-axis robot arm, or may be an xyz platform or other similar product.
Referring to fig. 2, fig. 2 is a flowchart illustrating a sub-step of step S1 in fig. 1. As shown in fig. 2, the step S1 includes:
step S11: mounting the first camera and the second camera on the robotic arm.
In the present embodiment, it is preferable that the first camera is a depth camera and the second camera is an industrial camera, but the present invention is not limited thereto. Specifically, a depth camera and an industrial camera are fixed at the front end of a mechanical arm, and the relative positions of the depth camera and the industrial camera and the mechanical arm are fixed.
Step S12: and the calibration module of the main control unit obtains the second offset and the third offset by a camera calibration technology.
In this embodiment, the main control unit may be provided separately, or it may be the industrial personal computer of the inspection robot.
Specifically, after the camera is installed, the calibration module calibrates the camera through a camera calibration technology to obtain a second offset between the depth camera and the mechanical arm and a third offset between the industrial camera and the mechanical arm.
For example, a calibration plate is prepared first, and the calibration plate and the mechanical arm base are kept unchanged during the camera calibration process. The mechanical arm is adjusted so that the camera shoots the calibration plate from different positions, ensuring that the whole calibration plate is within the captured picture.
Because the calibration plate and the mechanical arm base remain fixed across the multiple groups of data, and the relative position of the camera and the mechanical arm end is also fixed, denote that relative position by T. Let Pi be the pose of the arm end in the base frame and Ci the pose of the calibration plate in the camera frame for shot i; then the plate's pose in the base frame, Pi·T·Ci, is the same for every shot. For two groups of data this gives P1·T·C1 = P2·T·C2, and further transformation yields (P2^-1·P1)·T = T·(C2·C1^-1), the classical hand-eye equation A·T = T·B. The relative position of the camera and the mechanical arm end, i.e. the offset, can then be obtained with a calibration algorithm such as Tsai-Lenz.
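As a minimal illustrative sketch (not the patent's own code), this eye-in-hand calibration can be computed with OpenCV's calibrateHandEye; the function name and argument layout below are assumptions of the sketch:

```python
import cv2
import numpy as np

def camera_to_arm_offset(R_g2b, t_g2b, R_t2c, t_t2c):
    """Solve the hand-eye equation A*T = T*B with the Tsai-Lenz method
    named in the text, returning the fixed camera-to-arm-end transform T.

    R_g2b, t_g2b -- per-shot arm-end (gripper) pose in the robot base frame,
                    as reported by the arm controller
    R_t2c, t_t2c -- per-shot calibration-plate pose in the camera frame,
                    e.g. from cv2.solvePnP on the detected plate corners
    """
    R_cam2end, t_cam2end = cv2.calibrateHandEye(
        R_g2b, t_g2b, R_t2c, t_t2c, method=cv2.CALIB_HAND_EYE_TSAI)
    T = np.eye(4)                 # assemble T as a 4x4 homogeneous matrix
    T[:3, :3] = R_cam2end
    T[:3, 3] = t_cam2end.ravel()
    return T                      # this is the camera/arm "offset"
```

The same routine would be run once per camera, yielding the second offset (depth camera to arm) and the third offset (industrial camera to arm).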
Step S13: and receiving and obtaining the position information of the first position according to the working plan through a processing module of the main control unit.
Specifically, after receiving a work plan, for example a patrol work plan, the processing module analyzes it to obtain the position information of the cabinet to be detected and the position information of the first position at which the first camera is to acquire images.
Step S14: and the control module of the main control unit controls the mechanical arm to drive the first camera to move to the first position according to the position information of the first position.
Specifically, referring to fig. 6, which shows the cabinet: according to the position information of the cabinet to be detected, the control module moves the robot body to a position at a certain distance from the cabinet and holds it still; it then moves the mechanical arm according to the position information of the first position, so that the first camera is perpendicular to the cabinet and fixed at a certain distance from it, i.e., fixed at the first position.
The step S14 further includes: when the mechanical arm moves, the control module controls the first camera to collect a video stream, the first camera outputs the video stream to the processing module, and the processing module obtains and displays real-time position information of the mechanical arm from the video stream. Specifically, during the robot's patrol the depth camera stays on and continuously collects the video stream, so the position of the mechanical arm can be obtained in real time, preventing it from touching other equipment in the machine room.
Step S2: the method comprises the steps of acquiring and obtaining first image information of a target in a cabinet through a first camera, identifying the target in the cabinet according to the first image information, and obtaining a first offset between the target and the first camera, wherein the first offset comprises x offset, y offset and z offset.
In this embodiment, the target is a board card in the cabinet, and in other embodiments, the target may also be a switch.
Referring to fig. 3, fig. 3 is a flowchart illustrating a substep of step S2 in fig. 1. As shown in fig. 3, the step S2 includes:
step S21: the control module outputs a first acquisition instruction to the first camera;
step S22: the first camera acquires and obtains the first image information according to the first acquisition instruction and outputs the first image information to the GPU unit;
step S23: the GPU unit marks the target in the first image information through a deep learning algorithm and obtains target information and the first offset;
step S24: the GPU unit outputs the first image information marked with the target, the target information and the first offset to the processing module.
Specifically, referring to fig. 7, an identification diagram: after the first camera moves to the first position, the control module outputs a first acquisition instruction to the first camera, and the first camera acquires the first image information accordingly. The first camera is connected with the GPU unit and, once the first image information is obtained, sends it to the GPU unit. Through the trained deep learning algorithm, the GPU unit frames each target detected in the first image information with a rectangular or polygonal box (as shown in fig. 7) and marks the first image information with the target information and the first offset, where the target information comprises the name of the target and the first offset is the position of the target relative to the camera. The GPU unit outputs the first image information with the marked targets, the target information, and the first offset to the processing module by wired or wireless transmission.
The deep learning algorithm of the invention marks different types of board cards with boxes of different colors and gives the name and position information of each board card, so that one board card or several board cards can be identified at the same time; when several board cards are identified, they may be of the same type or of different types.
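As a hedged sketch of how the first offset could be derived from a detected box and the aligned depth image (the pinhole back-projection below is an assumption about the implementation, not the patent's stated method; the intrinsics fx, fy, cx, cy come from camera calibration):

```python
import numpy as np

def box_to_offset(box, depth_map, fx, fy, cx, cy):
    """Convert a detected bounding box plus aligned depth into the
    (x, y, z) offset of the target relative to the depth camera.

    box       -- (x_min, y_min, x_max, y_max) pixel coordinates from the detector
    depth_map -- HxW depth image in metres, aligned with the colour image
    """
    u = (box[0] + box[2]) / 2.0           # box centre, pixels
    v = (box[1] + box[3]) / 2.0
    z = float(depth_map[int(v), int(u)])  # depth at the box centre
    x = (u - cx) * z / fx                 # back-project through the pinhole model
    y = (v - cy) * z / fy
    return np.array([x, y, z])            # the "first offset" (x, y, z)
```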
The deep learning algorithm is trained on a large amount of data, so pictures of grids and board cards from many manufacturers, devices, and cabinets were collected; more than ten thousand samples were gathered in all. The raw data are labeled, training is carried out on a GPU server in the laboratory, and the trained deep learning model is finally deployed to the GPU module of the inspection robot. The training process can be roughly divided into four stages: data acquisition, data labeling, algorithm design, and training and validation. The data are labeled with the names of the different board cards and switches. The data set is divided into a training set, a validation set, and a test set. The training set is fed into the deep learning network, whose parameters are continuously updated until the designed loss function converges while overfitting is avoided; the validation set is used to monitor the training state during training; and the test set is used to test the trained network, which is retrained if the results are poor.
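A minimal sketch of the data-set split described above; the 8:1:1 fractions are an assumed choice, since the patent does not state them:

```python
import random

def split_dataset(samples, train_frac=0.8, val_frac=0.1, seed=0):
    """Shuffle the labelled samples and split them into training,
    validation, and test sets (fractions are assumptions)."""
    rng = random.Random(seed)
    samples = list(samples)
    rng.shuffle(samples)
    n_train = int(len(samples) * train_frac)
    n_val = int(len(samples) * val_frac)
    return (samples[:n_train],                 # updates network parameters
            samples[n_train:n_train + n_val],  # monitors the training state
            samples[n_train + n_val:])         # final test; retrain if poor
```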
Step S3: and obtaining a fourth offset between the second camera and the target according to the first offset, the second offset between the first camera and the mechanical arm and the third offset between the second camera and the mechanical arm, and controlling the mechanical arm to drive the second camera to move to a second position according to the fourth offset.
Referring to fig. 4, fig. 4 is a flowchart illustrating a sub-step of step S3 in fig. 1. As shown in fig. 4, the step S3 includes:
step S31: the processing module obtains the fourth offset according to the first offset, the second offset and the third offset (one way to compose these offsets is sketched after step S33);
step S32: the processing module obtains the position information of the second position according to the fourth offset and outputs the position information to the control module;
step S33: the control module controls the mechanical arm to drive the second camera to move to the second position according to the position information of the second position.
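The patent does not spell out the algebra behind step S31; a plausible sketch, assuming each offset is expressed as a 4x4 rigid-body homogeneous transform, is:

```python
import numpy as np

def fourth_offset(T_target_cam1, T_cam1_arm, T_cam2_arm):
    """Compose the fourth offset (target relative to the second camera).

    T_target_cam1 -- first offset: target pose in the first-camera frame
    T_cam1_arm    -- second offset: first camera relative to the arm end
    T_cam2_arm    -- third offset: second camera relative to the arm end
    All arguments are assumed to be 4x4 homogeneous transforms.
    """
    T_target_arm = T_cam1_arm @ T_target_cam1        # target in the arm frame
    return np.linalg.inv(T_cam2_arm) @ T_target_arm  # target in the cam2 frame
```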
Step S4: and acquiring and obtaining second image information of the target through the second camera through the grid of the cabinet.
Wherein, the step S4 further includes: after the second camera reaches the second position, the control module controls the second camera to acquire and obtain the second image information, and the second camera outputs the second image information to the processing module.
Step S5: and identifying the current state of the target according to the second image information.
Wherein, the step S5 further includes: and the processing module outputs the second image information to a background system, and the background system identifies the current state of the target in the second image information through a deep learning algorithm and a traditional image algorithm.
Specifically, the mechanical arm moves the industrial camera to the specified position according to the coordinate value given by the first camera. After the industrial camera reaches the designated position, it takes photographs. The photographed second image information is transmitted directly, without any stitching on the robot side, to the background system: the processing module sends the acquired second image information to the background system, which uses a deep learning algorithm and a traditional image algorithm to identify the state of the board-card indicator lights once the grid has been blurred.
The background system first performs stitching. Image stitching mainly comprises feature-point extraction and matching (e.g., the SIFT and SURF algorithms), image registration (e.g., the RANSAC algorithm), and image fusion (e.g., weighted smoothing). Many algorithms were tested and the best-performing one was selected. Identification of the indicator lights is likewise realized by a trained deep learning algorithm, and the state of each indicator light is determined by a traditional image algorithm, such as thresholding in HSV color space, although the invention is not limited thereto.
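Below is a hedged OpenCV sketch of this back-end pipeline; the ratio-test value, RANSAC threshold, and HSV bounds are illustrative assumptions, since the patent names only the algorithm families (SIFT/SURF, RANSAC, weighted smoothing, HSV):

```python
import cv2
import numpy as np

def stitch_pair(img1, img2):
    """Stitch two overlapping grid photographs: SIFT feature extraction,
    ratio-test matching, RANSAC homography registration, then a crude fusion."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)
    good = []
    for pair in cv2.BFMatcher().knnMatch(d2, d1, k=2):
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])                       # Lowe's ratio test
    src = np.float32([k2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)   # registration
    canvas = cv2.warpPerspective(img2, H, (img1.shape[1] * 2, img1.shape[0]))
    canvas[:, :img1.shape[1]] = img1   # crude fusion; weighted smoothing could blend the seam
    return canvas

def led_is_green(bgr_patch):
    """Classify an indicator-light patch as lit green via an HSV threshold
    (threshold values are illustrative assumptions)."""
    hsv = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (35, 80, 80), (85, 255, 255))
    return cv2.countNonZero(mask) > 0.2 * mask.size
```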
Referring to fig. 5, fig. 5 is a schematic structural diagram of the blurring processing system according to the present invention. As shown in fig. 5, the blurring processing system of the present invention includes:
a first camera 11 mounted on the robot arm 21;
the main control unit 12, electrically connected to the mechanical arm 21 and the first camera 11; after controlling the mechanical arm 21 according to a work plan to drive the first camera 11 to move to a first position, the main control unit 12 controls the first camera 11 to acquire first image information of a target in the cabinet;
the GPU unit 13 is used for identifying a target in the cabinet according to the first image information and obtaining a first offset between the target and the first camera 11;
the second camera 14, mounted on the mechanical arm 21; the main control unit 12 obtains a fourth offset between the second camera 14 and the target according to the first offset, the second offset between the first camera 11 and the mechanical arm 21, and the third offset between the second camera 14 and the mechanical arm 21; the main control unit 12 controls the mechanical arm 21 to drive the second camera 14 to move to a second position according to the fourth offset, and controls the second camera 14 to acquire second image information of the target through the grid of the cabinet.
In this embodiment, the GPU unit 13 may be independently arranged, or may be integrated on the main control unit 12.
Further, a background system 15 is included, which identifies the current state of the target according to the second image information.
Wherein, the main control unit 12 includes:
a calibration module 121, configured to calibrate the first camera and the second camera by using a camera calibration technique to obtain the second offset and the third offset;
the processing module 122 receives and obtains the position information of the first position according to the work plan;
the control module 123 receives the position information of the first position output by the processing module, and controls the mechanical arm to drive the first camera to move to the first position according to the position information of the first position.
When the mechanical arm 21 moves, the control module 123 controls the first camera 11 to capture a video stream, the first camera 11 outputs the video stream to the processing module 122, and the processing module 122 obtains real-time position information of the mechanical arm 21 according to the video stream.
After the first camera 11 reaches the first position, the control module 123 outputs a first acquisition instruction to the first camera 11, the first camera 11 acquires the first image information according to the first acquisition instruction and outputs the first image information to the GPU unit 13, the GPU unit 13 marks the target in the first image information through a deep learning algorithm and obtains target information and the first offset, and the GPU unit 13 outputs the first image information, the target information, and the first offset marked with the target to the processing module 122.
The processing module 122 obtains the fourth offset according to the first offset, the second offset, and the third offset, the processing module obtains the position information of the second position according to the fourth offset and outputs the position information to the control module 123, and the control module 123 controls the mechanical arm 21 to drive the second camera 14 to move to the second position according to the position information of the second position.
After the second camera 14 reaches the second position, the control module 123 outputs a second acquisition instruction to the second camera 14, the second camera 14 acquires and obtains the second image information, the second camera 14 outputs the second image information to the processing module 122, the processing module 122 outputs the second image information to the background system 15, and the background system 15 identifies the current state of the target in the second image information through a deep learning algorithm and a conventional image algorithm.
The present invention also provides an inspection robot, comprising: a mechanical arm 21 and the blurring processing system described above, wherein the blurring processing system is connected to the mechanical arm, and the inspection robot collects and identifies, through the mechanical arm and the blurring processing system, the current state of a target shielded by the cabinet grid.
In conclusion, placing a GPU unit in the inspection robot to process image and video information greatly improves the robot's image processing capability: it can run larger, more accurate deep learning networks and process high-resolution, high-frame-rate video streams in real time, handling complex image and video problems. Meanwhile, different board cards in different cabinets of the railway signal room, including board card type and position, are identified based on a deep learning algorithm. In addition, the combination of the depth camera and the mechanical arm enables the inspection robot to blur the cabinet grid, so the state of the board-card indicator lights in the cabinet can be recognized without opening the door.
Although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (17)
1. A cabinet grid blurring processing method is characterized by comprising the following steps:
step S1: controlling the mechanical arm to drive the first camera to move to a first position according to the work plan;
step S2: acquiring and obtaining first image information of a target in a cabinet through the first camera, identifying the target in the cabinet according to the first image information, and obtaining a first offset between the target and the first camera;
step S3: obtaining a fourth offset between a second camera and the target according to the first offset, the second offset between the first camera and the mechanical arm and the third offset between the second camera and the mechanical arm, and controlling the mechanical arm to drive the second camera to move to a second position according to the fourth offset;
step S4: and acquiring and obtaining second image information of the target through the second camera through the grid of the cabinet.
2. A blurring processing method according to claim 1, further comprising step S5:
and identifying the current state of the target according to the second image information.
3. The blurring processing method as claimed in claim 2, wherein said step S1 includes:
step S11: mounting the first camera and the second camera on the robotic arm;
step S12: a calibration module of the main control unit obtains the second offset and the third offset through a camera calibration technology;
step S13: receiving and acquiring the position information of the first position according to the working plan through a processing module of the main control unit;
step S14: and the control module of the main control unit controls the mechanical arm to drive the first camera to move to the first position according to the position information of the first position.
4. A blurring processing method as claimed in claim 3, wherein said step S14 further comprises: when the mechanical arm moves, the control module controls the first camera to collect video streams, the first camera outputs the video streams to the processing module, and the processing module obtains and displays real-time position information of the mechanical arm according to the video streams.
5. A blurring processing method as claimed in claim 3, wherein said step S2 includes:
step S21: the control module outputs a first acquisition instruction to the first camera;
step S22: the first camera acquires and obtains the first image information according to the first acquisition instruction and outputs the first image information to the GPU unit;
step S23: the GPU unit marks the target in the first image information through a deep learning algorithm and obtains target information and the first offset;
step S24: the GPU unit outputs the first image information marked with the target, the target information and the first offset to the processing module.
6. The blurring processing method as claimed in claim 5, wherein said step S3 includes:
step S31: the processing module obtains the fourth offset according to the first offset, the second offset and the third offset;
step S32: the processing module obtains the position information of the second position according to the fourth offset and outputs the position information to the control module;
step S33: the control module controls the mechanical arm to drive the second camera to move to the second position according to the position information of the second position.
7. A blurring processing method as claimed in claim 6, wherein said step S4 further comprises: after the second camera reaches the second position, the control module controls the second camera to acquire and obtain the second image information, and the second camera outputs the second image information to the processing module.
8. A blurring processing method as claimed in claim 7, wherein said step S5 further comprises: and the processing module outputs the second image information to a background system, and the background system identifies the current state of the target in the second image information through a deep learning algorithm and a traditional image algorithm.
9. A system for blurring a grid of a cabinet, comprising:
the first camera is arranged on the mechanical arm;
the main control unit is electrically connected with the mechanical arm and the first camera, and controls the first camera to acquire and obtain first image information of a target in the cabinet after the mechanical arm is controlled by the main control unit according to a work plan to drive the first camera to move to a first position;
the GPU unit is used for identifying a target in the cabinet according to the first image information and acquiring a first offset between the target and the first camera;
the second camera is arranged on the mechanical arm, the main control unit obtains a fourth offset between the second camera and the target according to the first offset, the second offset between the first camera and the mechanical arm and the third offset between the second camera and the mechanical arm, the main control unit controls the mechanical arm to drive the second camera to move to a second position according to the fourth offset, and the main control unit controls the second camera to acquire second image information of the target through the grid of the cabinet.
10. A blurring processing system according to claim 9, further comprising a background system for identifying a current state of the object based on the second image information.
11. A blurring processing system as claimed in claim 9 or 10, wherein the main control unit comprises:
the calibration module calibrates the first camera and the second camera through a camera calibration technology to obtain the second offset and the third offset;
the processing module is used for receiving and acquiring the position information of the first position according to the work plan;
the control module receives the position information of the first position output by the processing module, and controls the mechanical arm to drive the first camera to move to the first position according to the position information of the first position.
12. A blurring processing system according to claim 11, wherein when said robot arm moves, said control module controls said first camera to capture a video stream, said first camera outputs said video stream to said processing module, and said processing module obtains real-time position information of said robot arm from said video stream.
13. The blurring processing system of claim 11, wherein the control module outputs a first capture command to the first camera after the first camera reaches the first position, the first camera captures the first image information according to the first capture command and outputs the first image information to the GPU unit, the GPU unit marks the target in the first image information through a deep learning algorithm and obtains target information and the first offset, and the GPU unit outputs the first image information, the target information, and the first offset marked with the target to the processing module.
14. The blurring processing system according to claim 13, wherein the processing module obtains the fourth offset according to the first offset, the second offset, and the third offset, the processing module obtains position information of the second position according to the fourth offset and outputs the position information to the control module, and the control module controls the robot arm to drive the second camera to move to the second position according to the position information of the second position.
15. A blurring processing system according to claim 14, wherein after the second camera reaches the second position, the control module outputs a second capture instruction to the second camera, the second camera captures and obtains the second image information, and the second camera outputs the second image information to the processing module.
16. The blurring processing system of claim 15 wherein the processing module outputs the second image information to a back-end system that identifies a current state of the object in the second image information through a deep learning algorithm and a conventional image algorithm.
17. An inspection robot, comprising:
a mechanical arm;
the virtualization processing system of any one of the preceding claims 9-16, wherein the virtualization processing system is connected to the robotic arm, and the inspection robot collects and identifies a current state of an object occluded by the cabinet grid through the robotic arm and the virtualization processing system.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010706984.0A CN111923042B (en) | 2020-07-21 | 2020-07-21 | Virtualization processing method and system for cabinet grid and inspection robot |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010706984.0A CN111923042B (en) | 2020-07-21 | 2020-07-21 | Virtualization processing method and system for cabinet grid and inspection robot |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111923042A true CN111923042A (en) | 2020-11-13 |
CN111923042B CN111923042B (en) | 2022-05-24 |
Family
ID=73314353
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010706984.0A Active CN111923042B (en) | 2020-07-21 | 2020-07-21 | Virtualization processing method and system for cabinet grid and inspection robot |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111923042B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114040431A (en) * | 2021-10-08 | 2022-02-11 | 中国联合网络通信集团有限公司 | Network testing method, device, equipment and storage medium |
Citations (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2361735A2 (en) * | 2010-02-26 | 2011-08-31 | Agilent Technologies, Inc. | Robot arm and method of controlling robot arm to avoid collisions |
CN103759716A (en) * | 2014-01-14 | 2014-04-30 | 清华大学 | Dynamic target position and attitude measurement method based on monocular vision at tail end of mechanical arm |
CN105631875A (en) * | 2015-12-25 | 2016-06-01 | 广州视源电子科技股份有限公司 | Method and system for determining mapping relation between camera coordinates and manipulator paw coordinates |
CN108189043A (en) * | 2018-01-10 | 2018-06-22 | 北京飞鸿云际科技有限公司 | A kind of method for inspecting and crusing robot system applied to high ferro computer room |
US20180361589A1 (en) * | 2017-06-16 | 2018-12-20 | Robotiq Inc. | Robotic arm camera system and method |
CN109635875A (en) * | 2018-12-19 | 2019-04-16 | 浙江大学滨海产业技术研究院 | A kind of end-to-end network interface detection method based on deep learning |
CN109785388A (en) * | 2018-12-28 | 2019-05-21 | 东南大学 | A kind of short distance precise relative positioning method based on binocular camera |
CN110246175A (en) * | 2019-05-24 | 2019-09-17 | 国网安徽省电力有限公司检修分公司 | Intelligent Mobile Robot image detecting system and method for the panorama camera in conjunction with holder camera |
CN110315500A (en) * | 2019-07-01 | 2019-10-11 | 广州弘度信息科技有限公司 | A kind of double mechanical arms crusing robot and its method accurately opened the door |
CN110399831A (en) * | 2019-07-25 | 2019-11-01 | 中国银联股份有限公司 | A kind of method for inspecting and device |
CN110490854A (en) * | 2019-08-15 | 2019-11-22 | 中国工商银行股份有限公司 | Obj State detection method, Obj State detection device and electronic equipment |
CN110497373A (en) * | 2019-08-07 | 2019-11-26 | 大连理工大学 | A kind of combined calibrating method between the three-dimensional laser radar and mechanical arm of Mobile working machine people |
CN110614638A (en) * | 2019-09-19 | 2019-12-27 | 国网山东省电力公司电力科学研究院 | Transformer substation inspection robot autonomous acquisition method and system |
CN110648319A (en) * | 2019-09-19 | 2020-01-03 | 国网山东省电力公司电力科学研究院 | Equipment image acquisition and diagnosis system and method based on double cameras |
US20200027371A1 (en) * | 2018-07-19 | 2020-01-23 | Icon Corp. | Learning toy, mobile body for learning toy, panel for learning toy, and portable information processing terminal for learning toy |
US20200061839A1 (en) * | 2016-02-09 | 2020-02-27 | Cobalt Robotics Inc. | Inventory management by mobile robot |
CN110909653A (en) * | 2019-11-18 | 2020-03-24 | 南京七宝机器人技术有限公司 | Method for automatically calibrating screen cabinet of distribution room by indoor robot |
CN111145211A (en) * | 2019-12-05 | 2020-05-12 | 大连民族大学 | Monocular camera upright pedestrian head pixel height acquisition method |
CN111427320A (en) * | 2020-04-03 | 2020-07-17 | 无锡超维智能科技有限公司 | Intelligent industrial robot distributed unified scheduling management platform |
- 2020-07-21: CN application CN202010706984.0A granted as patent CN111923042B (active)
Patent Citations (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2361735A2 (en) * | 2010-02-26 | 2011-08-31 | Agilent Technologies, Inc. | Robot arm and method of controlling robot arm to avoid collisions |
CN103759716A (en) * | 2014-01-14 | 2014-04-30 | 清华大学 | Dynamic target position and attitude measurement method based on monocular vision at tail end of mechanical arm |
CN105631875A (en) * | 2015-12-25 | 2016-06-01 | 广州视源电子科技股份有限公司 | Method and system for determining mapping relation between camera coordinates and manipulator paw coordinates |
US20200061839A1 (en) * | 2016-02-09 | 2020-02-27 | Cobalt Robotics Inc. | Inventory management by mobile robot |
US20180361589A1 (en) * | 2017-06-16 | 2018-12-20 | Robotiq Inc. | Robotic arm camera system and method |
CN108189043A (en) * | 2018-01-10 | 2018-06-22 | 北京飞鸿云际科技有限公司 | A kind of method for inspecting and crusing robot system applied to high ferro computer room |
US20200027371A1 (en) * | 2018-07-19 | 2020-01-23 | Icon Corp. | Learning toy, mobile body for learning toy, panel for learning toy, and portable information processing terminal for learning toy |
CN109635875A (en) * | 2018-12-19 | 2019-04-16 | 浙江大学滨海产业技术研究院 | A kind of end-to-end network interface detection method based on deep learning |
CN109785388A (en) * | 2018-12-28 | 2019-05-21 | 东南大学 | A kind of short distance precise relative positioning method based on binocular camera |
CN110246175A (en) * | 2019-05-24 | 2019-09-17 | 国网安徽省电力有限公司检修分公司 | Intelligent Mobile Robot image detecting system and method for the panorama camera in conjunction with holder camera |
CN110315500A (en) * | 2019-07-01 | 2019-10-11 | 广州弘度信息科技有限公司 | A kind of double mechanical arms crusing robot and its method accurately opened the door |
CN110399831A (en) * | 2019-07-25 | 2019-11-01 | 中国银联股份有限公司 | A kind of method for inspecting and device |
CN110497373A (en) * | 2019-08-07 | 2019-11-26 | 大连理工大学 | A kind of combined calibrating method between the three-dimensional laser radar and mechanical arm of Mobile working machine people |
CN110490854A (en) * | 2019-08-15 | 2019-11-22 | 中国工商银行股份有限公司 | Obj State detection method, Obj State detection device and electronic equipment |
CN110614638A (en) * | 2019-09-19 | 2019-12-27 | 国网山东省电力公司电力科学研究院 | Transformer substation inspection robot autonomous acquisition method and system |
CN110648319A (en) * | 2019-09-19 | 2020-01-03 | 国网山东省电力公司电力科学研究院 | Equipment image acquisition and diagnosis system and method based on double cameras |
CN110909653A (en) * | 2019-11-18 | 2020-03-24 | 南京七宝机器人技术有限公司 | Method for automatically calibrating screen cabinet of distribution room by indoor robot |
CN111145211A (en) * | 2019-12-05 | 2020-05-12 | 大连民族大学 | Monocular camera upright pedestrian head pixel height acquisition method |
CN111427320A (en) * | 2020-04-03 | 2020-07-17 | 无锡超维智能科技有限公司 | Intelligent industrial robot distributed unified scheduling management platform |
Non-Patent Citations (1)
Title |
---|
Yang Hong (杨宏): "Manned Spacecraft Technology" (《载人航天器技术》), 31 May 2018 *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114040431A (en) * | 2021-10-08 | 2022-02-11 | 中国联合网络通信集团有限公司 | Network testing method, device, equipment and storage medium |
CN114040431B (en) * | 2021-10-08 | 2023-05-26 | 中国联合网络通信集团有限公司 | Network testing method, device, equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN111923042B (en) | 2022-05-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110826538B (en) | Abnormal off-duty identification system for electric power business hall | |
CN109271872B (en) | Device and method for judging on-off state and diagnosing fault of high-voltage isolating switch | |
CN102616613B (en) | Elevator monitoring system | |
CN113225387B (en) | Visual monitoring method and system for machine room | |
CN111539313A (en) | Examination cheating behavior detection method and system | |
CN109298785A (en) | A kind of man-machine joint control system and method for monitoring device | |
CN103152601A (en) | Intelligent failure-reporting camera and network management client system thereof | |
CN211720329U (en) | Intelligent monitoring system for power distribution room | |
CN112437255A (en) | Intelligent video monitoring system and method for nuclear power plant | |
CN109544870B (en) | Alarm judgment method for intelligent monitoring system and intelligent monitoring system | |
JP2017069963A (en) | Data collection system of display panel and operation panel, device, method, program and recording medium | |
CN112564291A (en) | Power equipment pressing plate state monitoring system and monitoring method | |
CN111923042B (en) | Virtualization processing method and system for cabinet grid and inspection robot | |
CN111951161B (en) | Target identification method and system and inspection robot | |
CN110247328A (en) | Position judging method based on image recognition in switchgear | |
CN102340179B (en) | Network-based multi-station monitoring integrated matrix display control system | |
CN111917978B (en) | Adjusting device and method of industrial camera and shooting device | |
CN114387542A (en) | Video acquisition unit abnormity identification system based on portable ball arrangement and control | |
CN113044694A (en) | Construction site elevator people counting system and method based on deep neural network | |
CN216530725U (en) | Substation monitoring system oriented to maintenance patrol center | |
CN116311034A (en) | Robot inspection system based on contrast detection | |
CN112085654B (en) | Configurable analog screen recognition system | |
CN211149431U (en) | Intelligent screen cabinet | |
CN113780224A (en) | Transformer substation unmanned inspection method and system | |
CN209929831U (en) | Switch cabinet with image recognition position |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |