
CN115562263A - Multi-robot distributed cooperative patrol method, device and system based on ROS - Google Patents


Info

Publication number
CN115562263A
CN115562263A
Authority
CN
China
Prior art keywords
robot
patrol
target
navigation
robots
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211193074.2A
Other languages
Chinese (zh)
Inventor
吴以
冯家豪
周芷聪
梁鑫
张智豪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xinghang Electromechanical Equipment Co Ltd
Original Assignee
Beijing Xinghang Electromechanical Equipment Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Beijing Xinghang Electromechanical Equipment Co Ltd
Priority claimed from CN202211193074.2A
Legal status: Pending

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0221 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention relates to an ROS-based multi-robot distributed cooperative patrol method, device and system. The method comprises the following steps: a plurality of robots form a robot formation, which sets out from an initial position and drives to a patrol area; the pilot robot drives from the initial position to the patrol area along a planned path and provides position guidance for the following robots; the following robots navigate to the patrol area through this position guidance, keeping a set formation with the pilot robot while driving; after the formation reaches the patrol area, the robots perform distributed traversal patrol; when a robot identifies the task target, it guides the other robots in the formation to the target position: the robot that first identifies the task target is set as the formation's pilot robot, the other robots become following robots, and the pilot robot publishes the target depth information to the following robots; each following robot travels to the target position along its own planned path. The invention realizes multi-robot cooperative patrol.

Description

ROS-based multi-robot distributed cooperative patrol method, device and system
Technical Field
The invention belongs to the technical field of autonomous navigation of robots, and particularly relates to a multi-robot distributed cooperative patrol method, device and system based on ROS.
Background
Patrol-and-enclosure is the act of repeatedly travelling to and through a designated area to identify and enclose targets that may be present, in order to protect or supervise the area. With the continuing growth in the scale and number of high-traffic sites such as stations and warehouses, and with reconnaissance and similar tasks being carried out in harsh environments containing harmful substances, the traditional manual patrol-and-enclosure mode can no longer cope in time with the growing potential safety hazards. The need to automate patrol-and-enclosure tasks is therefore increasingly urgent, and replacing humans with patrol robots is a feasible way to address this situation.
However, existing robots all perform patrol tasks independently; using a single robot in a complex or large-scale environment may leave the task incomplete and make enclosure impossible. Owing to the parallelism, scalability and other characteristics of multi-robot systems, introducing multiple robots into patrol tasks has good application prospects.
Disclosure of Invention
In view of the above analysis, the invention aims to disclose a multi-robot distributed cooperative patrol method, device and system based on ROS, and solve the problem of multi-robot cooperative patrol.
The invention discloses a ROS-based multi-robot distributed cooperative patrol method, which comprises the following steps:
s1, forming a robot formation by a plurality of robots, and starting from an initial position to drive to a patrol area;
when the robot drives to a patrol area, the piloting robot drives to the patrol area from an initial position along a planned path and provides position navigation for the following robot; the following robot navigates to a patrol area through the position navigation, and keeps a set formation with the piloting robot in the driving process;
s2, after the robots form a formation to reach a patrol area, carrying out distributed traversing patrol on a plurality of robots in respective patrol subareas;
s3, in the process of traversing patrol, after the robot recognizes a task target, guiding other robots in the formation to drive to the target position together;
the method comprises the following steps that a robot which firstly identifies a task target is set as a formation pilot robot, other robots are changed into following robots, and the pilot robot issues target depth information to the following robots; each following robot travels along a respective planned path to a target position.
Furthermore, when the formation robot runs to the set target enclosing range in the process of running to the target position, the multi-robot encloses the target in an enclosing formation under the position navigation of the pilot robot.
Further, in the process of driving to a patrol area or in the process of surrounding a target, a piloting-following consistency cooperative control method is adopted for maintaining the formation.
Furthermore, in the pilot-follow consistency cooperative control method,
the model of the pilot robot is:

$$\dot{x}_0(t)=v_0(t),\qquad \dot{v}_0(t)=0$$

where $x_0(t)\in\mathbb{R}^3$ and $v_0(t)\in\mathbb{R}^3$ are the pose state and velocity state of the pilot robot, respectively;
the model of the ith following robot is:
Figure BDA0003870242820000022
wherein x is i (t)∈R 3 ,v i (t)∈R 3 And u i (t)∈R 3 Respectively setting the pose state, the speed state and the control input of the ith following robot; the number of the following robots is N;
the control input $u_i(t)$ takes the standard piloting-following consensus form:

$$u_i(t)=-\alpha\sum_{j=1}^{N}a_{ij}\big[(x_i-x_j-\delta_{ij})+\gamma(v_i-v_j)\big]-\alpha\,a_{i0}\big[(x_i-x_0-\delta_{i0})+\gamma(v_i-v_0)\big]$$

where α and γ are control parameters; $a_{ij}$ is the (i, j)-th entry of the adjacency matrix, with $a_{ij}>0$ the weight between robots i and j, and $a_{i0}>0$ when robot i receives the pilot robot's state; $\delta_{i0}$ is the desired relative position between the i-th robot and the pilot robot, and $\delta_{ij}$ the desired relative position between the i-th and j-th robots.
Further, the robots in the formation adopt an improved Navigation function package for autonomous navigation;
the improvement on the Navigation function package comprises the following steps:
1) Adding a launch file comprising a namespace;
adding a < group > tag into a launch file of each robot ROS system; the < group > has an ns attribute, and nodes, topics, parameters and services surrounded by the < group > tag are added with a robot prefix of the < ns >; wherein,
an example <group> tag from one launch file of the i-th robot is:
[launch-file <group> listing shown as a figure in the original]
2) Changing the configuration parameters of the navigation-related nodes in the Navigation function package;
a robot prefix is added to navigation-related names, including the base coordinate frame and the radar topic.
Further, in the distributed traversal patrol, each robot uses a multi-target-point scan-line path to traverse its patrol area with obstacle avoidance.
Further, the multi-target-point scan-line traversal patrol with obstacle avoidance comprises the following steps:
1) The positions and postures of the published target points are saved in a list in quaternion form;
2) During patrol, the Navigation function package continuously feeds back the robot's current pose via the actionlib communication mechanism, and the robot's coordinates are monitored in real time;
3) When the distance between the robot's current position and the current target point falls below a set threshold, the next target point is published automatically;
4) On receiving the next target point, the robot stops heading for the current target point and drives towards the next one;
5) Steps 3)-4) are repeated until the last target point has been published, upon which the robot finishes its scan-line traversal search task.
The invention also discloses a multi-robot distributed cooperative patrolling device applying the ROS-based multi-robot distributed cooperative patrolling method, which is characterized by comprising a task planning module, a multi-robot navigation module, a coordination control module and a video identification module which are distributed in the formation robot;
the multi-robot navigation module is used to make the robot navigate autonomously and comprises two navigation modes, a patrol-point navigation mode and a target-end-point navigation mode; in the patrol-point navigation mode, the robot is controlled to traverse and patrol along a scan-line route; in the target-end-point navigation mode, the robot is controlled to approach the target along an optimal path; path planning in both modes produces planned paths with obstacle avoidance;
the task planning module is used for controlling the automatic switching of the navigation modes in the multi-robot navigation module and realizing the multi-robot cooperative patrol from an initial place to a patrol area;
the coordination control module is used for controlling formation and aggregation of the multiple robots so as to keep formation of the piloting robot and other robots in the running process;
and the video identification module is used for acquiring a video image of the target and identifying the target.
Further, when the task starts, the task planning module, multi-robot navigation module, coordination control module and video identification module of each formation robot are started through a launch file;
while driving to the patrol area, the pilot robot runs the multi-robot navigation module to avoid obstacles from the initial position, travels to the patrol area along the optimal path, enters the patrol area and performs traversal search in scan-line mode;
the task planning module monitors the robot's coordinates; when the pilot robot reaches the patrol area, it publishes an arrive_patrol_area_flag topic and starts the video identification module;
the following robots run the coordination control module to form up and assemble, keeping the set formation with the pilot robot and the other robots while driving to the patrol area;
each following robot subscribes to the arrive_patrol_area_flag topic published by the pilot robot and automatically switches from the assembly-formation mode to the patrol-point mode of the multi-robot navigation module; in patrol mode, the robot performs traversal patrol of the patrol area in scan-line fashion, and the video identification module is started for target identification;
during patrol, once some robot's video identification module identifies a target, it publishes a target_recognized_flag topic together with the target depth information; the robot automatically switches from the patrol-point mode to the target-end-point mode of the multi-robot navigation module and drives to the target point along an optimal, obstacle-avoiding path with the target's depth information as the end point;
as the robot approaches the target, the task planning module monitors the coordinate transform; once the body is detected to have reached the set target-enclosing range, the robot automatically switches from the target-end-point mode to the coordination control module, which controls the robots to assemble in an enclosing formation and enclose the target.
The invention also discloses a multi-robot system which comprises a robot formation formed by a plurality of robots, and the multi-robot distributed cooperative patrol device is arranged in the robot formation.
The invention can realize the following beneficial effects:
the ROS-based multi-robot distributed cooperative patrol method, the ROS-based multi-robot distributed cooperative patrol device and the ROS-based multi-robot distributed cooperative patrol system realize multi-robot cooperative patrol and target capture, make up for the problem that a consistency control algorithm cannot realize a distributed search task and also make up for the defect that Navigation cannot realize cooperative completion of tasks by a plurality of robots.
Drawings
The drawings are only for purposes of illustrating particular embodiments and are not to be construed as limiting the invention, wherein like reference numerals are used to designate like parts throughout.
FIG. 1 is a flow chart of a ROS-based multi-robot distributed cooperative patrolling method in an embodiment of the invention;
FIG. 2 is a schematic diagram of the switch from multi-robot assembly to area patrol in an embodiment of the present invention;
fig. 3 is an external view of a police robot platform in an embodiment of the invention;
fig. 4 is a schematic diagram of a wireless Mesh ad hoc network mode deployment in an embodiment of the present invention;
fig. 5 is a flowchart of a cooperative patrol method for a multi-robot system according to an embodiment of the present invention.
Detailed Description
The preferred embodiments of the present invention will now be described in detail with reference to the accompanying drawings, which form a part hereof, and which together with the embodiments of the invention serve to explain the principles of the invention.
Example one
An embodiment of the invention discloses a ROS-based multi-robot distributed cooperative patrolling method, which comprises the following steps as shown in figure 1:
s1, forming a robot formation by a plurality of robots, and starting from an initial position to drive to a patrol area;
when the robot drives to the patrol area, the piloting robot drives to the patrol area from the initial position along the planned path and provides position navigation for the following robot; the following robot navigates to a patrol area through the position navigation, and keeps a set formation with the piloting robot in the driving process;
s2, after the robots form a patrol area, performing distributed traversing patrol on the robots in respective patrol subareas;
s3, in the process of traversing patrol, after the robot recognizes a task target, guiding other robots in the formation to drive to the target position together;
the method comprises the following steps that a robot which firstly identifies a task target is set as a formation pilot robot, other robots are changed into following robots, and the pilot robot issues target depth information to the following robots; each following robot travels to a target position along a respective planned path.
When a target-enclosing task is to be executed, step S4 is also performed:
S4, while driving to the target position, once the formation robots reach the set target-enclosing range, the multiple robots enclose the target in an enclosing formation under the position guidance of the pilot robot.
In this embodiment, robot control uses the Navigation function package of the ROS system to realize autonomous navigation, chiefly the path-planning package Move_base, the SLAM mapping package LeGO-LOAM and the localization package HDL-localization. LeGO-LOAM is a newer framework derived from LOAM, and the maps it builds are more complete. In this embodiment the robot carries a 16-line lidar, so the HDL-localization package is adopted; its back-end data fusion uses a UKF, which fits nonlinear probability distributions better and provides a reliable relocalization function.
Although ROS provides users with the Navigation function package for single-robot autonomous navigation, it has no package for multi-robot cooperative navigation. In multi-robot navigation, several robots must simultaneously start the same navigation-related nodes, such as LeGO-LOAM, HDL-localization and Move_base; but ROS cannot run identical nodes at the same time, so multiple robots cannot start the same navigation node simultaneously, which fails to meet the requirement of multi-robot navigation.
In order to solve the problem, in the embodiment, the Navigation function package is improved, so that each robot starts the Navigation function package simultaneously in the formation process to realize the cooperative Navigation of multiple robots to complete a task.
The improvement comprises adding a namespace to the launch file and changing the parameters of the function nodes in the Navigation function package.
Specifically, the improvement to the Navigation function package comprises the following steps:
1) a <group> tag is added to the launch file of each robot's ROS system; the <group> tag has an ns attribute, and the nodes, topics, parameters and services enclosed by the <group> tag are prefixed with the robot prefix given by ns;
wherein,
an example <group> tag from one launch file of the i-th robot is:
[launch-file <group> listing shown as a figure in the original]
2) the configuration parameters of the navigation-related nodes in the Navigation function package are changed;
a robot prefix is added to navigation-related names, including the base coordinate frame and the radar topic.
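As a concrete illustration of the namespacing step, a minimal launch-file sketch (the robot prefix `robot1` and the node shown are illustrative assumptions, not taken from the patent's own launch files):

```xml
<launch>
  <!-- everything inside this group is resolved under the /robot1 namespace -->
  <group ns="robot1">
    <!-- tf_prefix namespaces the robot's coordinate frames, e.g. robot1/base_link -->
    <param name="tf_prefix" value="robot1"/>
    <node pkg="move_base" name="move_base" type="move_base" output="screen"/>
  </group>
</launch>
```

With one such group per robot, several robots can start identical navigation nodes at the same time, because each node's fully qualified name becomes unique.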
Specifically, in step S1, the piloting robot drives from an initial position to a patrol area along a planned path using an improved Navigation function package, and provides position Navigation for the following robot; the following robot drives to a patrol area through the position navigation, and a set formation is kept with the piloting robot by adopting a piloting-following consistency cooperative control method in the driving process;
more specifically, in the piloting-following consistency cooperative control method,
the model of the piloting robot is as follows:
Figure BDA0003870242820000081
wherein x 0 (t)∈R 3 ,v 0 (t)∈R 3 Respectively a pose state and a speed state of the piloted robot;
the model of the ith following robot is:
Figure BDA0003870242820000082
wherein x i (t)∈R 3 ,v i (t)∈R 3 And u i (t)∈R 3 Respectively as the pose state of the ith following robotSpeed state and control inputs; the number of the following robots is N;
the control input $u_i(t)$ takes the standard piloting-following consensus form:

$$u_i(t)=-\alpha\sum_{j=1}^{N}a_{ij}\big[(x_i-x_j-\delta_{ij})+\gamma(v_i-v_j)\big]-\alpha\,a_{i0}\big[(x_i-x_0-\delta_{i0})+\gamma(v_i-v_0)\big]$$

where α and γ are control parameters; $a_{ij}$ is the (i, j)-th entry of the adjacency matrix, with $a_{ij}>0$ the weight between robots i and j, and $a_{i0}>0$ when robot i receives the pilot robot's state; $\delta_{i0}$ is the desired relative position between the i-th robot and the pilot robot, and $\delta_{ij}$ the desired relative position between the i-th and j-th robots.
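As a numerical illustration of this control law, a self-contained Python sketch (the gains, desired offsets and adjacency matrix below are illustrative assumptions, not the patent's values):

```python
def consensus_input(i, x, v, x0, v0, delta, delta0, A, a0, alpha, gamma):
    """Piloting-following consensus control input u_i(t) for follower i.

    x, v    : lists of follower position / velocity vectors (N entries, each 3D)
    x0, v0  : pilot position and velocity
    delta   : delta[i][j] = desired offset between followers i and j
    delta0  : delta0[i]   = desired offset between follower i and the pilot
    A       : follower adjacency matrix (A[i][j] > 0 if i hears j)
    a0      : a0[i] > 0 if follower i receives the pilot's state
    """
    N = len(x)
    u = [0.0, 0.0, 0.0]
    for j in range(N):                       # neighbour coupling terms
        if A[i][j] > 0:
            for k in range(3):
                u[k] -= alpha * A[i][j] * (
                    (x[i][k] - x[j][k] - delta[i][j][k])
                    + gamma * (v[i][k] - v[j][k]))
    if a0[i] > 0:                            # coupling to the pilot robot
        for k in range(3):
            u[k] -= alpha * a0[i] * (
                (x[i][k] - x0[k] - delta0[i][k])
                + gamma * (v[i][k] - v0[k]))
    return u
```

When every follower sits at its desired offset and moves at the pilot's velocity, the input is zero, so the formation is an equilibrium of the closed loop.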
Specifically, one scheme of this embodiment takes a formation in which one pilot robot guides three following robots as an example. While driving to the patrol area, the pilot robot guides the three following robots, which keep a triangular formation and drive to the designated patrol area at the same speed. The pilot robot runs the improved Navigation function package to realize real-time localization, navigation, path planning and avoidance of both dynamic and static obstacles, while the following robots use the piloting-following consistency control method to keep the same speed as, and the desired distances to, the pilot robot and the other robots.
Because the Navigation function package in ROS can plan only one target point at a time, it cannot by itself make a robot traverse a patrol area in scan-line fashion; this function is therefore realized by writing a multi-target-point navigation node. In step S2, during the distributed traversal patrol each robot runs the improved Navigation function package in its patrol area, and multi-target-point navigation is realized on top of the actionlib communication mechanism: during navigation, actionlib continuously feeds back the robot's current pose, and when the robot reaches the end point it completes the navigation task and returns the final execution result. The robot can thus perform obstacle-avoiding, multi-target-point traversal patrol along a scan-line path.
Fig. 2 shows a schematic diagram of the switch from the multi-robot assembly of step S1 to the area patrol of step S2.
Specifically, the multi-target-point traversal patrol along a scan-line path with obstacle avoidance comprises the following steps:
1) The positions and postures of the published target points are saved in a list in quaternion form;
2) During patrol, the Navigation function package continuously feeds back the robot's current pose via the actionlib communication mechanism, and the robot's coordinates are monitored in real time;
3) When the distance between the robot's current position and the current target point falls below a set threshold, the next target point is published automatically;
4) On receiving the next target point, the robot stops heading for the current target point and drives towards the next one;
5) Steps 3)-4) are repeated until the last target point has been published, upon which the robot finishes its scan-line traversal search task.
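The waypoint-switching logic of steps 3)-4) can be sketched independently of ROS; the loop below is a simplified stand-in for the actionlib feedback callback (the motion model, threshold and step size are illustrative assumptions):

```python
import math

def scanline_patrol(start, waypoints, threshold=0.3, step=0.1, max_iters=100000):
    """Drive through waypoints in order, switching to the next goal as soon as
    the robot is within `threshold` of the current one (steps 3)-4) above).
    Returns the number of goals visited."""
    pos = list(start)
    visited = 0
    for goal in waypoints:
        for _ in range(max_iters):
            dx, dy = goal[0] - pos[0], goal[1] - pos[1]
            dist = math.hypot(dx, dy)
            if dist < threshold:          # step 3): close enough, publish next goal
                visited += 1
                break                     # step 4): abandon the current goal
            pos[0] += step * dx / dist    # crude constant-speed motion model
            pos[1] += step * dy / dist
    return visited
```

In the real system the inner loop is replaced by actionlib feedback from move_base, but the switching rule is the same distance-threshold test.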
In step S3, the robot that first identifies the task target is set as the formation's pilot robot, the other robots become following robots, and the pilot robot publishes the target depth information to the following robots; the pilot robot and each following robot run the improved Navigation function package and travel to the target position along their respective planned paths.
In step S4, while driving to the target position, once the formation robots reach the set target-enclosing range, the pilot robot travels to the target position along the planned path using the improved Navigation function package and provides position guidance for the following robots; each following robot approaches the pilot robot's position through this position guidance and keeps the set formation with the pilot robot using the piloting-following consistency cooperative control method.
The set target-enclosing range is

$$\sqrt{(x_i-x_0)^2+(y_i-y_0)^2}\le\Delta$$

where $x_j, y_j$ (j = i, 0) are the plane coordinates of the i-th robot and of the pilot robot, respectively, and Δ > 0 is a constant.
After a robot is judged by the above rule to have reached the pilot robot's area, the pilot robot and the following robots form the desired square enclosing formation according to the piloting-following consistency cooperative control method. The side length of the desired square formation is d, i.e. the desired distance between adjacent robots is d, and the desired distance between each robot and the target is r, where

$$r=\frac{\sqrt{2}}{2}\,d$$
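The enclosing geometry above can be checked numerically; the sketch below assumes the target sits at the centre of the square formation (helper names are illustrative, not from the patent):

```python
import math

def square_enclosure(cx, cy, d):
    """Corner positions of a square enclosing formation of side d centred on
    the target (cx, cy); each corner lies at distance r = d*sqrt(2)/2 from it."""
    h = d / 2.0
    corners = [(cx - h, cy - h), (cx + h, cy - h),
               (cx + h, cy + h), (cx - h, cy + h)]
    r = d * math.sqrt(2) / 2.0
    return corners, r

def in_enclosing_range(xi, yi, x0, y0, delta):
    """Trigger condition of step S4: robot i is within distance delta of the pilot."""
    return math.hypot(xi - x0, yi - y0) <= delta
```

Placing four robots on these corners gives adjacent-robot spacing d and robot-to-target distance r, matching the relation stated above.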
In conclusion, the ROS-based multi-robot distributed cooperative patrol method, device and system of this embodiment realize multi-robot cooperative patrol and target capture on top of the ROS robot operating system, remedying both the inability of a pure consistency control algorithm to perform a distributed search task and the inability of the Navigation package to let multiple robots complete a task cooperatively.
Example two
The embodiment discloses a multi-robot distributed cooperative patrolling device applying a ROS-based multi-robot distributed cooperative patrolling method in the first embodiment, which comprises a task planning module, a multi-robot navigation module, a coordination control module and a video identification module, wherein the task planning module, the multi-robot navigation module, the coordination control module and the video identification module are distributed in a formation robot;
the multi-robot navigation module is used to make the robot navigate autonomously and comprises two navigation modes, a patrol-point navigation mode and a target-end-point navigation mode; in the patrol-point navigation mode, the robot is controlled to traverse and patrol along a scan-line route; in the target-end-point navigation mode, the robot is controlled to approach the target along an optimal path; path planning in both modes produces planned paths with obstacle avoidance;
the task planning module is used for controlling the automatic switching of the navigation modes in the multi-robot navigation module and realizing the multi-robot cooperative patrol from an initial place to a patrol area;
the coordination control module is used for controlling formation and aggregation of the multiple robots so as to keep formation of the piloting robot and other robots in the running process;
and the video identification module is used for acquiring a video image of the target and identifying the target.
Specifically, when the multi-robot distributed cooperative patrol device is used for cooperative patrol:
1) When the task starts, the task planning module, multi-robot navigation module, coordination control module and video identification module of each formation robot are started through a launch file;
2) While driving to the patrol area, the pilot robot runs the multi-robot navigation module to avoid obstacles from the initial position, travels to the patrol area along the optimal path, enters the patrol area and performs traversal search in scan-line mode;
3) The task planning module monitors the robot's coordinates; when the pilot robot reaches the patrol area, it publishes an arrive_patrol_area_flag topic and starts the video identification module;
4) The following robots run the coordination control module to form up and assemble, keeping the set formation with the pilot robot and the other robots while driving to the patrol area;
5) Each following robot subscribes to the arrive_patrol_area_flag topic published by the pilot robot and automatically switches from the assembly-formation mode to the patrol-point mode of the multi-robot navigation module; in patrol mode, the robot performs traversal patrol of the patrol area in scan-line fashion, and the video identification module is started for target identification;
6) During patrol, once some robot's video identification module identifies a target, it publishes a target_recognized_flag topic together with the target depth information; the robot automatically switches from the patrol-point mode to the target-end-point mode of the multi-robot navigation module and drives to the target point along an optimal, obstacle-avoiding path with the target's depth information as the end point;
7) As the robot approaches the target, the task planning module monitors the coordinate transform; once the body is detected to have reached the set target-enclosing range, the robot automatically switches from the target-end-point mode to the coordination control module, which controls the robots to assemble in an enclosing formation and enclose the target.
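The mode switching in steps 2)-7) amounts to a small per-robot state machine; a minimal ROS-free sketch (state and event names are illustrative stand-ins for the modes and topics in the text):

```python
class PatrolStateMachine:
    """Per-robot mode switching: ASSEMBLE -> PATROL -> TO_TARGET -> ENCLOSE.
    Events mimic the flag topics published during the mission."""

    def __init__(self):
        self.mode = "ASSEMBLE"

    def on_event(self, event):
        transitions = {
            ("ASSEMBLE", "arrived_patrol_area"): "PATROL",    # step 5)
            ("PATROL", "target_recognized"): "TO_TARGET",     # step 6)
            ("TO_TARGET", "in_enclosing_range"): "ENCLOSE",   # step 7)
        }
        # unknown (mode, event) pairs leave the mode unchanged
        self.mode = transitions.get((self.mode, event), self.mode)
        return self.mode
```

In the real device each transition is fired by a subscriber callback on the corresponding flag topic, but the transition table is the same.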
EXAMPLE III
The embodiment discloses a multi-robot system, which comprises a robot formation formed by a plurality of robots, wherein the robot formation is internally provided with the multi-robot distributed cooperative patrol device in the embodiment two.
In a preferred scheme of this embodiment, patrol of the area and capture of the target are realized using a police robot formation.
In particular, in a multi-robot system,
(1) Police robot platform
The police robot uses a four-wheel differential chassis, allowing it to rotate 360° in place and enter narrow spaces for searching. Each wheel is driven by a customized hub motor, so the robot can handle most complex terrain, such as uneven surfaces of gravel and soil, and can even climb stairs of a certain height, covering most areas on patrol. An STM32 composite motion-control board based on an F407 chip performs four-wheel-drive control, and the robot carries an independently developed Qingyun on-board computer integrating an Ascend 310 AI processor; its operating system is Ubuntu 18.04 with ROS Kinetic, and it mainly runs the various complex algorithms, realizing target identification, image classification, localization, detection, navigation, cooperative control and other functions. A Robosense 16-line lidar is mounted on the uppermost layer, and an Intel ZED 2i binocular depth camera at the front of the robot obtains depth information and human skeleton data of the scene in real time. The police robot platform is shown in figure 3.
(2) Formation communication module
Information interaction among the multiple robots is deployed in wireless Mesh ad hoc network mode, as shown in figure 4. The internal communication framework uses Socket technology based on the TCP/IP protocol.
A Wireless Mesh Network (WMN) is a multi-node, centreless, self-organizing and self-healing wireless multi-hop interconnection networking mode; any robot terminal in the network can act as a transceiver, sending and receiving signals, and can dynamically maintain peer-to-peer communication links with one or more neighbouring nodes in any fashion.
During cooperative patrol operations the physical distances between robots, and between the robots and the notebook control terminal, may become too great, and practical scenes such as airports and railway stations may involve high-density crowds; a traditional Wireless Local Area Network (WLAN) then suffers transmission limitations such as insufficient signal strength, poor penetration and severe frequency-band interference. Thanks to the multi-hop connectivity of WMNs, if any robot node in the group loses its communication link to the central scheduling system, neighbouring nodes within effective wireless coverage automatically reconnect it to the WMN, guaranteeing stable and reliable data transmission among the robots and neatly overcoming the problems of a traditional WLAN in such environments.
In addition, WMNs achieve self-organization and self-healing of network communication without human intervention, and can quickly establish a stable, unobstructed interactive network structure at any time and place. In the working scenario of multi-robot joint inspection and defense deployment tasks, a network fault of a single robot does not affect the performance of the whole network, so the overall scheme retains strong continuous working capability and high robustness when an emergency or a sudden robot fault occurs.
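As a rough illustration of the Socket-based interaction described above, the following minimal sketch shows one robot node exchanging a pose message with a peer over TCP. The JSON message layout, the port number, and the function names are illustrative assumptions, not the patent's actual protocol:

```python
import json
import socket
import threading

def robot_server(host="127.0.0.1", port=50007, ready=None):
    """One robot node listens for a peer's state message and echoes an ack."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(1)
    if ready:
        ready.set()                      # signal that the server is accepting
    conn, _ = srv.accept()
    with conn:
        msg = json.loads(conn.recv(4096).decode())
        conn.sendall(json.dumps({"ack": msg["robot_id"]}).encode())
    srv.close()

def send_state(robot_id, pose, host="127.0.0.1", port=50007):
    """A peer robot connects, sends its pose, and waits for the acknowledgement."""
    with socket.create_connection((host, port)) as s:
        s.sendall(json.dumps({"robot_id": robot_id, "pose": pose}).encode())
        return json.loads(s.recv(4096).decode())

ready = threading.Event()
t = threading.Thread(target=robot_server, kwargs={"ready": ready})
t.start()
ready.wait()
reply = send_state("robot_1", [1.0, 2.0, 0.0])
t.join()
print(reply)   # {'ack': 'robot_1'}
```

In the deployed Mesh network, each robot would run such endpoints over the ad hoc links rather than on localhost.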
(4) Safety protection module
The safety protection module comprises two sub-modules: communication content security and data storage security.
In the aspect of communication content security, network communication between the robots and the central dispatching system, and among the robots themselves, is protected by the WPA3 encryption protocol. The WPA3 authentication process comprises five steps: scanning, SAE authentication, association, the four-way handshake and DHCP. The SAE (Simultaneous Authentication of Equals) method replaces the PSK method of traditional wireless security protocols and authenticates peer entities with forward secrecy, reducing the risk that a robot is cracked by an offline dictionary attack during communication. Meanwhile, the WPA3-Enterprise version provides 192-bit encryption (128-bit by default), so its information security protection capability far exceeds that of a traditional wireless network, preventing to the greatest extent the content of key wireless messages sent by the robots during operation from being cracked by a network eavesdropper.
In the aspect of data storage security, the hard disk storing the operating system and all key data in each robot is fully encrypted, and a UKey is used to store the encryption key. A public-key authentication system is used between each robot and its matched UKey, with certificates issued uniformly by a certificate authority when the robot leaves the factory. Compared with a traditional hard disk software encryption scheme, the UKey scheme makes the identity authentication process difficult to attack and strengthens key management, so that even if a robot is physically stolen, the attacker cannot decrypt and obtain the data on its hard disk. UnixBench tests show that the encryption and authentication scheme causes only a slight loss of overall system performance, within a reasonable expected range.
(5) Vision module
Visual detection mainly performs face recognition and human behavior recognition. For face recognition, collected faces are uploaded to the background and stored for at least 7 days; mask-wearing detection is performed at the same time. The deep-learning target detection algorithm YOLOv5 is adopted: the ZED2i binocular depth camera mounted at the front of the robot captures real-time visual information, and the image stream is fed to an autonomously trained YOLOv5 detection model. Persons not wearing masks are identified through real-time inference combined with a mask-wearing recognition algorithm, and a no_shared_mask topic is published for each such person. The voice reminding module subscribes to this message and reminds the person by voice to wear a mask, thereby safeguarding the hygiene and safety of public places.
Human behavior recognition applies a visual attention method to the human skeleton data acquired from the ZED2i binocular depth information. The theory of recognizing human behavior from skeleton data was studied by the scholar Johansson in 1973, whose results proved that the human visual system can perceive and predict different motions; many researchers have since studied human behavior recognition from the joints and positions of the human body. The invention identifies and infers human behaviors, such as climbing, crowding, chasing and other abnormal behaviors, through a visual attention algorithm applied to relevant information in key frames extracted from the video sequence, on the basis of pre-designed, weakly semantic features. Human behavior recognition based on the visual attention method realizes recognition of abnormal target behavior, effectively monitors pedestrian behavior, and provides an effective guarantee for the lives and property of people in public places.
(6) Voice reminding module
The module consists of a sound card for collecting sound, a loudspeaker and a voice interaction system. The voice interaction system in this embodiment includes four parts: speech preprocessing, speech recognition, natural language processing and speech synthesis, implemented with convolutional neural network and deep learning algorithms. The module subscribes to the no_shared_mask topic published by the vision module and plays a voice prompt reminding the person to wear a mask.
(7) Smoke detection module
Smoke and harmful gas sensors are adopted; the F407 chip samples the sensor signals in real time, recognizes abnormal signals, and raises an alarm to the background over the network.
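The publish/subscribe flow between the vision module and the voice reminding module described above can be sketched as follows. In the actual system this would use rospy publishers and subscribers on ROS; here a tiny in-process topic bus stands in so the control flow is runnable, and the message fields are illustrative assumptions:

```python
from collections import defaultdict

class TopicBus:
    """Tiny stand-in for ROS topic pub/sub (rospy would be used in practice)."""
    def __init__(self):
        self.subscribers = defaultdict(list)
    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)
    def publish(self, topic, msg):
        for cb in self.subscribers[topic]:
            cb(msg)

bus = TopicBus()
announcements = []

# Voice reminding module: subscribes to the no_shared_mask topic.
bus.subscribe("no_shared_mask",
              lambda msg: announcements.append(f"Please wear a mask (person {msg['person_id']})"))

# Vision module: after YOLOv5 inference flags an unmasked face, publish the topic.
def on_detection(person_id, wearing_mask):
    if not wearing_mask:
        bus.publish("no_shared_mask", {"person_id": person_id})

on_detection(7, wearing_mask=False)   # triggers one voice reminder
on_detection(8, wearing_mask=True)    # masked person: no reminder
print(announcements)
```

The same decoupling lets any number of robots react to the topic without the vision module knowing who listens.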
The cooperative patrol method of the multi-robot system, adopting 4 robots as shown in fig. 5, comprises the following steps:
Step one: start the robot communication module nodes.
Step two: start the robot safety protection module nodes, and start the task planning modules of the 4 robots, the multi-robot navigation module, the coordination control module, the vision module and the voice reminding module through a launch file at the notebook control terminal.
Step three: at the initial starting point, the piloting robot runs the patrol point mode of the multi-robot navigation module, so that it travels from the initial point along an optimal path, with obstacle avoidance, to the patrol area. The task planning module monitors the robot coordinates; when the piloting robot enters the patrol area it issues an area_pattern_area_flag topic, the task planning module starts the vision module and the voice module, and the piloting robot performs traversal patrol in scanning-line mode.
Step four: at the initial starting point, the task planning modules of the 3 following robots run the assembly formation mode of the coordination control module; in this mode, the following robots keep a triangular formation with the piloting robot and travel toward the patrol area according to a consistency control algorithm.
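A minimal numeric sketch of the piloting-following consistency control used in this step, assuming a standard second-order consensus law with formation offsets; the gains, graph weights and triangular offsets below are illustrative, not the patent's values:

```python
import numpy as np

np.random.seed(0)
alpha, gamma = 1.0, 1.5              # control parameters (illustrative)
N = 3                                # number of following robots
A = np.ones((N, N)) - np.eye(N)      # fully connected follower graph, a_ij = 1
a0 = np.ones(N)                      # every follower also hears the pilot

# Desired offsets from the pilot (triangular formation), delta_i0
delta0 = np.array([[-1.0, -1.0], [-1.0, 1.0], [-2.0, 0.0]])
# Desired inter-follower offsets delta_ij = delta_i0 - delta_j0
delta = delta0[:, None, :] - delta0[None, :, :]

x0, v0 = np.zeros(2), np.array([0.5, 0.0])   # pilot: constant-velocity motion
x = np.random.randn(N, 2) * 2.0              # followers start scattered
v = np.zeros((N, 2))

dt = 0.01
for _ in range(5000):
    u = np.zeros((N, 2))
    for i in range(N):
        for j in range(N):               # neighbour coupling terms
            u[i] -= alpha * A[i, j] * ((x[i] - x[j] - delta[i, j])
                                       + gamma * (v[i] - v[j]))
        # coupling to the piloting robot
        u[i] -= alpha * a0[i] * ((x[i] - x0 - delta0[i]) + gamma * (v[i] - v0))
    v += u * dt                          # double-integrator dynamics
    x += v * dt
    x0 = x0 + v0 * dt

err = np.linalg.norm(x - (x0 + delta0), axis=1)  # distance to formation slots
print(err.max())
```

With these gains the followers converge to their triangular slots while tracking the moving pilot, which is the behavior the assembly formation mode relies on.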
Step five: when the 3 following robots subscribe to the area_pattern_area_flag topic issued by the piloting robot on entering the patrol area, their task planning modules automatically switch from the current assembly formation mode to the patrol mode of the multi-robot navigation module and automatically start the vision module and the voice module; the following robots and the piloting robot then perform distributed traversal patrol of the patrol area in scanning-line mode.
Step six: after the 4 robots enter the patrol area and the vision module and the voice module are started automatically, mask-wearing recognition and abnormal behavior recognition, such as climbing, crowding and chasing, are performed on persons in the patrol area according to the deep learning algorithm and the visual attention algorithm. Persons not wearing masks are reminded by voice, and the 4 robots surround any target exhibiting abnormal behavior.
Step seven: when one robot identifies a target with abnormal behavior, it issues a target_recognized_flag topic together with the depth information of the target, and is set as the piloting robot. The task planning module stops the robot's scanning-line patrol and automatically switches to the target end point mode of the multi-robot navigation module; taking the subscribed target depth information as the goal point, the robot travels to the target along a planned optimal path with obstacle avoidance.
Step eight: the task planning module monitors the robot coordinates; when a robot reaches the surrounding range, that is, its distance to the target point is less than 5 meters, the task planning module switches the robot's current target end point mode to the surrounding formation mode of the coordination control module. The 4 robots surround the target in a square formation and report the acquired image information to the background over the network.
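The scanning-line traversal and threshold-based goal switching used in the steps above can be sketched as follows. The rectangle bounds, spacing and arrival threshold are illustrative assumptions, and a real deployment would send each waypoint to the Navigation stack via actionlib rather than moving a point robot:

```python
import math

def scanline_waypoints(x_min, x_max, y_min, y_max, spacing):
    """Boustrophedon (scanning-line) waypoints covering a rectangular sub-area."""
    points, left_to_right = [], True
    y = y_min
    while y <= y_max + 1e-9:
        xs = [x_min, x_max] if left_to_right else [x_max, x_min]
        points += [(x, y) for x in xs]
        left_to_right = not left_to_right   # reverse direction on each row
        y += spacing
    return points

def patrol(waypoints, threshold=0.2, step=0.1):
    """Drive a point robot through the list, releasing the next goal once the
    current one is within the arrival threshold (mimicking the feedback loop)."""
    pos, visited = list(waypoints[0]), 0
    for goal in waypoints:
        while math.dist(pos, goal) >= threshold:
            dx, dy = goal[0] - pos[0], goal[1] - pos[1]
            d = math.hypot(dx, dy)
            pos[0] += step * dx / d
            pos[1] += step * dy / d
        visited += 1
    return visited

wps = scanline_waypoints(0, 4, 0, 2, spacing=1.0)
print(len(wps), patrol(wps))
```

Splitting the patrol area into one such rectangle per robot yields the distributed traversal patrol of step five.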
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention.

Claims (10)

1. A multi-robot distributed cooperative patrol method based on ROS is characterized by comprising the following steps:
s1, forming a robot formation by a plurality of robots, and starting from an initial position to drive to a patrol area;
when the robot drives to a patrol area, the piloting robot drives to the patrol area from an initial position along a planned path and provides position navigation for the following robot; the following robot navigates to a patrol area through the position navigation, and keeps a set formation with the piloting robot in the driving process;
s2, after the robots form a patrol area, performing distributed traversing patrol on the robots in respective patrol subareas;
s3, in the process of traversing patrol, after the robot recognizes a task target, guiding other robots in the formation to drive to the target position together;
wherein the robot that first identifies the task target is set as the piloting robot of the formation, the other robots become following robots, and the piloting robot issues target depth information to the following robots; each following robot travels to the target position along its own planned path.
2. The multi-robot distributed cooperative patrolling method according to claim 1, wherein in the process of driving to the target position, after the formation robot drives to the set target surrounding range, the multi-robot surrounds the target in a surrounding formation under the position navigation of the pilot robot.
3. The multi-robot distributed cooperative patrolling method according to claim 2, wherein a piloting-following consistency cooperative control method is adopted for maintaining the formation during driving to a patrol area or during surrounding a target.
4. The multi-robot distributed cooperative patrol method according to claim 3, wherein in the piloting-following consistency cooperative control method,
the model of the piloting robot is:

\dot{x}_0(t) = v_0(t), \quad \dot{v}_0(t) = 0

wherein x_0(t) ∈ R^3 and v_0(t) ∈ R^3 are respectively the pose state and the velocity state of the piloting robot;
the model of the ith following robot is:
Figure FDA0003870242810000012
wherein x i (t)∈R 3 ,v i (t)∈R 3 And u i (t)∈R 3 Respectively setting the pose state, the speed state and the control input of the ith following robot; the number of the following robots is N;
control input u i (t):
Figure FDA0003870242810000021
Wherein α and γ are control parameters, a ij Is the (i, j) th entry, a, of the adjacency matrix ij > 0 is the weight between the robots. Delta i0 Is the relative position difference, delta, between the ith robot and the piloted robot ij The relative position difference between the ith robot and the jth robot.
5. The multi-robot distributed cooperative patrol method according to any one of claims 1 to 4, wherein the robots in the formation use a modified Navigation function package for autonomous Navigation;
the improvement on the Navigation function package comprises the following steps:
1) Adding a launch file comprising a namespace;
adding a <group> tag to the launch file of each robot's ROS system; the <group> tag has an ns attribute, and the nodes, topics, parameters and services enclosed by the <group> tag are prefixed with the robot name specified by ns; wherein,
a launch file <group> tag of the ith robot is of the form:

<group ns="robot_i">
    ... (nodes, topics, parameters and services to be prefixed)
</group>
2) Changing the configuration parameters of the navigation-related nodes in the Navigation function package;
adding the robot prefix to navigation-related nodes, including the base coordinate system and the radar topic.
6. A multi-robot distributed collaborative patrol method according to claim 5,
in distributed traversing patrol, each robot performs traversing patrol with an obstacle avoidance function in a patrol area by using a scanning line path of multiple target points.
7. A multi-robot distributed collaborative patrol method according to claim 5,
the traversing patrol with the obstacle avoidance function in the patrol area by the scanning line path of a plurality of target points comprises the following steps:
1) The poses of the issued target points are saved in a list in quaternion form;
2) During patrol, the Navigation function package continuously feeds back the current pose state of the robot based on the actionlib communication strategy, monitoring the robot coordinates in real time;
3) When the distance between the robot's current position and the current target point is smaller than a set threshold, the next target point is automatically sent;
4) On receiving the next target point, the robot abandons the current target point and drives toward the next one;
5) Steps 3)-4) are repeated until the last target point has been sent and the robot completes the traversal search task in scanning-line mode.
8. A multi-robot distributed cooperative patrol device applying the ROS-based multi-robot distributed cooperative patrol method according to any one of claims 1 to 7, comprising a task planning module, a multi-robot navigation module, a coordination control module and a video recognition module distributed in the formation robots;
the multi-robot navigation module is used for autonomous navigation of the robot and comprises two navigation modes, namely a patrol point navigation mode and a target end point navigation mode; in the patrol point mode, the robot is controlled to perform traversal patrol along a scanning-line path; in the target end point navigation mode, the robot is controlled to approach the target along an optimal path; path planning in both navigation modes includes an obstacle avoidance function;
the task planning module is used for controlling the automatic switching of the navigation modes in the multi-robot navigation module and realizing the multi-robot cooperative patrol from an initial place to a patrol area;
the coordination control module is used for controlling formation and aggregation of the multiple robots so that the piloting robot and other robots can keep formation during running;
and the video identification module is used for acquiring a video image of the target and identifying the target.
9. A multi-robot distributed cooperative patrolling device according to claim 8,
when a task is started, a task planning module, a multi-robot navigation module, a coordination control module and a video identification module of the formation robot are started through a launch file;
when driving to the patrol area, the piloting robot runs the multi-robot navigation module, travels from the initial position along the optimal path with obstacle avoidance, and after entering the patrol area performs traversal search in scanning-line mode;
the task planning module monitors the robot coordinates; when the piloting robot reaches the patrol area, it issues the area_pattern_area_flag topic and simultaneously starts the video recognition module;
the following robots run the coordination control module to assemble into formation, keeping the set formation with the piloting robot and the other robots while driving to the patrol area;
the following robots subscribe to the area_pattern_area_flag topic issued by the piloting robot and automatically switch from the assembly formation mode to the patrol mode of the multi-robot navigation module; in the patrol mode, each robot performs traversal patrol of the patrol area in scanning-line mode and starts the video recognition module for target recognition;
during patrol, after the video recognition module of one robot recognizes a target, it issues the target_recognized_flag topic and the target depth information; that robot automatically switches from the patrol point mode to the target end point mode of the multi-robot navigation module and, taking the target depth information as the end point, drives to the target point along an optimal path with obstacle avoidance;
when the robot approaches the target, the task planning module monitors the coordinate transformation; upon detecting that the robot has reached the set target-surrounding range, the robot automatically switches from the current target end point mode to the coordination control module, which controls the robots to assemble in a surrounding formation and surround the target.
10. A multi-robot system comprising a robot formation composed of a plurality of robots in which the multi-robot distributed cooperative patrol apparatuses according to any one of claims 8 to 9 are provided.
CN202211193074.2A 2022-09-28 2022-09-28 Multi-robot distributed cooperative patrol method, device and system based on ROS Pending CN115562263A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211193074.2A CN115562263A (en) 2022-09-28 2022-09-28 Multi-robot distributed cooperative patrol method, device and system based on ROS


Publications (1)

Publication Number Publication Date
CN115562263A 2023-01-03

Family

ID=84743202

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211193074.2A Pending CN115562263A (en) 2022-09-28 2022-09-28 Multi-robot distributed cooperative patrol method, device and system based on ROS

Country Status (1)

Country Link
CN (1) CN115562263A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180074520A1 (en) * 2016-09-13 2018-03-15 Arrowonics Technologies Ltd. Formation flight path coordination of unmanned aerial vehicles
CN110940985A (en) * 2019-12-13 2020-03-31 哈尔滨工程大学 Multi-UUV tracking and trapping system and method
CN111190420A (en) * 2020-01-07 2020-05-22 大连理工大学 Cooperative search and capture method for multiple mobile robots in security field
CN112906245A (en) * 2021-03-19 2021-06-04 上海高仙自动化科技发展有限公司 Multi-robot simulation method, system, simulation server and terminal
CN113341956A (en) * 2021-05-20 2021-09-03 西安交通大学 Multi-agent master-slave formation control method based on improved artificial potential field method
CN115016455A (en) * 2022-04-24 2022-09-06 福建(泉州)哈工大工程技术研究院 Robot cluster positioning movement method and system


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SHEN TONG et al.: "Event-triggered cooperative control of autonomous vehicle platoons", Application Research of Computers, vol. 38, no. 3, 31 March 2021 (2021-03-31), pages 792-795 *

Similar Documents

Publication Publication Date Title
Alsamhi et al. Survey on artificial intelligence based techniques for emerging robotic communication
US11473913B2 (en) System and method for service oriented cloud based management of internet of drones
Ning Unit and ubiquitous internet of things
CN107911793B (en) Unmanned aerial vehicle arbitrary figure no-fly zone identification navigation system
Kabir et al. Internet of robotic things for mobile robots: concepts, technologies, challenges, applications, and future directions
Liu et al. DRL-UTPS: DRL-based trajectory planning for unmanned aerial vehicles for data collection in dynamic IoT network
Wan et al. To smart city: Public safety network design for emergency
CN113495578A (en) Digital twin training-based cluster track planning reinforcement learning method
WO2022001120A1 (en) Multi-agent system and control method therefor
CN104038729A (en) Cascade-type multi-camera relay tracing method and system
Tang et al. A joint global and local path planning optimization for UAV task scheduling towards crowd air monitoring
Shahid et al. Path planning in unmanned aerial vehicles: An optimistic overview
Bicocchi et al. Collective awareness for human-ict collaboration in smart cities
CN113422803B (en) Seamless migration method for intelligent unmanned aerial vehicle inspection task based on end edge cloud cooperation
Ortiz et al. Task inference and distributed task management in the Centibots robotic system
Xiao et al. Divide‐and conquer‐based surveillance framework using robots, sensor nodes, and RFID tags
Rana et al. Internet of things and UAV: An interoperability perspective
CN115562263A (en) Multi-robot distributed cooperative patrol method, device and system based on ROS
Zeng et al. Convergence of communications, control, and machine learning for secure and autonomous vehicle navigation
CN118427519A (en) Unmanned aerial vehicle intelligence inspection system based on artificial intelligence
Rahouti et al. A decentralized cooperative navigation approach for visual homing networks
Miao et al. Drone enabled smart air-agent for 6G network
Ji et al. Secure olympics games with technology: Intelligent border surveillance for the 2022 Beijing winter olympics
Agrawal et al. A comparative study of mobility models for flying ad hoc networks
Prabha et al. AIoT Emerging Technologies, Use Cases, and Its Challenges in Implementing Smart Cities

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination