
CN113532440A - Rescue robot-based on-site investigation and material supply method, system and equipment

Info

Publication number
CN113532440A
Authority
CN
China
Prior art keywords
rescue
robot
prediction
data
real
Prior art date
Legal status
Granted
Application number
CN202110876153.2A
Other languages
Chinese (zh)
Other versions
CN113532440B (en)
Inventor
黎冠
王迪
刘永涛
卜祥丽
于腾
黎仁士
Current Assignee
North China Institute of Science and Technology
Original Assignee
North China Institute of Science and Technology
Priority date
Filing date
Publication date
Application filed by North China Institute of Science and Technology
Priority to CN202110876153.2A
Publication of CN113532440A
Application granted
Publication of CN113532440B
Status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Electromagnetism (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract


The invention discloses a rescue robot-based method, system and equipment for on-site investigation and material supply. The method includes: obtaining a planned path into the rescue site based on a prior map, and sending operation instructions to a chassis control system based on the planned path, so that the rescue robot enters the accident scene along the planned path; acquiring environmental data and real-time picture data of the accident scene, identifying the disaster-scene environment in real time with a multi-classifier fusion algorithm, and transmitting the environmental data and real-time picture data to the command center; and receiving the rescue strategy formulated by the command center from the real-time picture data, and completing on-site investigation and material supply based on that strategy. This solves the technical problem of the low efficiency of rescue-robot on-site investigation and material supply in the prior art.


Description

Rescue robot-based on-site investigation and material supply method, system and equipment
Technical Field
The invention relates to the technical field of emergency rescue equipment, in particular to a rescue robot-based on-site investigation and material supply method, system and equipment.
Background
Natural disasters and man-made accidents are generally hard to predict and highly destructive. They leave many uncertain factors at the accident site, secondary disasters such as explosion and collapse may occur, on-site rescue carries high risk, and rescuers themselves are sometimes caught in distress.
Serious accidents in recent years show that the complicated and changeable environment of the accident site has become one of the main causes of casualties; in accidents such as fires and hazardous-chemical explosions, the site environment must be understood quickly so that serious hidden dangers can be eliminated. In addition, rescue equipment, materials and medicines often run short during rescue, and their transportation and storage consume considerable labor, which lowers rescue efficiency.
Robots currently used for rescue-scene investigation and rescue-material supply are mainly operated by remote control: the command center remotely drives the rescue robot into the scene, and the robot itself has no path-planning or autonomous-navigation capability. For site investigation, the command center learns the conditions of the accident site from real-time pictures returned by the robot, supplemented by sensors such as a thermal imager and temperature sensors, which places high demands on the operators.
Therefore, for rescue-site investigation and rescue-material supply, the prior art has the following main disadvantages:
First, most existing rescue robots have no autonomous navigation capability: an operator remotely controls the robot and surveys the site through the returned real-time pictures. These robots also cannot sense or identify the environment, so judgment depends mainly on the operator, which introduces human error and places high demands on the operator; nor can the robot anticipate dangers on site during operation, so purely remote control easily causes irreparable losses. Second, material supply still relies mainly on manual transportation, with medicines and flammable or explosive articles stored in professional cabinets; when rescue materials run short, rescuers must be dispatched to the rescue site, which carries safety hazards in accidents such as fires and explosions.
Disclosure of Invention
The object of the present invention is to provide a rescue robot-based method that at least partially solves the above technical problems of the prior art. The object is achieved by the following technical scheme.
The invention provides a rescue robot-based on-site exploration, safety control and material supply method, which comprises the following steps:
acquiring a planned path into a rescue site based on a prior map, and sending operation instructions to a chassis control system based on the planned path, so that the rescue robot enters the accident scene along the planned path in accordance with the operation instructions;
acquiring environmental data and real-time picture data of the accident scene, and running a multi-modal prediction algorithm on a field edge-computing unit to complete the investigation of the scene environment; then, according to the recognition result of dangerous objects and the depth information of the visual sensor, evaluating the danger index of the dangerous objects and displaying it in the real-time picture; and transmitting the environmental data and the real-time picture data to a command center;
the command center formulating a rescue strategy according to the real-time picture data, the robot's movement targets and material supply being determined based on the rescue strategy;
the field robot navigating autonomously to the target position according to the rescue task and a path-planning algorithm to supply materials, adopting different traveling speeds according to different danger indexes, and re-planning the path when the danger is high.
Further, acquiring the planned path into the rescue site based on the prior map and sending operation instructions to the chassis control system based on the planned path, so that the rescue robot enters the accident scene along the planned path in accordance with the operation instructions, specifically includes:
establishing an optimal collision-free path from a starting point to a target point according to the prior map;
sending operation instructions to the chassis control system based on the optimal collision-free path, so that the rescue robot enters the accident scene along the optimal collision-free path in accordance with the operation instructions;
and meanwhile, while advancing toward the accident site, avoiding dynamic obstacles and the like on the optimal collision-free path in real time.
Further, establishing the optimal collision-free path from the starting point to the target point according to the prior map specifically includes:
in a pre-established three-dimensional prior map, marking a plurality of positions between the starting point and the target position of the accident scene according to actual rescue needs, and acquiring the three-dimensional coordinate information corresponding to each position;
numbering the obtained three-dimensional coordinates with sequence numbers, and determining for each sequence number the other sequence numbers connected to it;
and selecting, among all the listed qualifying paths, the path with the shortest distance as the optimal collision-free path.
Further, selecting the path with the shortest distance among all the listed qualifying paths as the optimal collision-free path specifically includes:
searching all paths between the starting point and the end point with a global search algorithm;
removing the paths with a large number of steps;
and calculating the length of each path segment, and after accumulation selecting the shortest path as the optimal collision-free path.
Further, receiving the rescue strategy formulated by the command center according to the real-time picture data, and completing site investigation and material supply based on the rescue strategy, specifically comprises:
adding control instructions containing the rescue strategy to the source-code files of the voice instructions;
and receiving the voice command, and completing site investigation, material inquiry, map-information inquiry and the taking and supply of rescue materials based on the rescue strategy in the voice command.
Further, acquiring the environmental data and real-time picture data of the accident scene further includes:
performing instance segmentation of persons, dangerous objects and the like on the acquired real-time picture data.
Further, performing instance segmentation of persons, dangerous objects and the like on the acquired real-time picture data specifically includes:
taking the acquired real-time picture data as an RGB image T, with a depth map S of the image T;
constructing a feature matrix R(T, S) for each RoI region:
R(T, S) = [a(t) * b(s)]
wherein a(t) and b(s) respectively represent the RGB and depth data;
setting a gate function G1, with a weight parameter, to filter the depth map (the formula and parameter symbol are given only as images in the original);
setting a gate function G2 to back up the regions erroneously discarded by G1;
constructing, from the original feature map, a compensated feature map with its own weight parameter and the same dimensions as the depth map b(s) (formula given as an image in the original);
filtering the data information to be retained (formula given as an image in the original);
and combining the outputs of the two gate branches to obtain the output result P(i).
The segmented image data are used to identify persons, dangerous objects and the like. A multi-classifier fusion prediction method is designed, and the multi-modal prediction system operates in two steps. First, all prediction models are trained on the same data set D_train to obtain the model parameters of each prediction model. Second, each prediction method uses the parameters found in the first step to make predictions on the target data set D_predict: for each trajectory in D and each prediction time range of interest ΔT, each predictor generates an estimate of the future position, denoted x̂. The prediction results of each method are then used to calculate weights for the predicted values, and these weights in turn define the rules by which the multi-prediction system selects which prediction methods should be used.
For a single classifier module, training is supervised learning of the model parameters of each prediction method of the multi-modal prediction system. Assume n prediction methods are available, and for each method i = 1, …, n let Z_i denote its vector of m_i adjustable parameters, where m_i is the number of adjustable parameters of method i. The possible values of the independent parameters are limited to a given discrete set, and the goal is to identify for each method a set of parameter assignments that minimizes the training error of that prediction method.
Because prediction accuracy may vary significantly with the prediction time horizon, consider a set H of discrete prediction time ranges of length k, and denote each time-range value as h_P, where P = 1, …, k. The aim is therefore to find the most suitable parameter-value assignment Z_i for each time range in H. If E_i(h_P, Z_i, D) denotes the total training error of method i for time range h_P, parameter assignment Z_i and input data D, then the assignment from the given discrete set that achieves the best performance for a given method and time horizon is defined as:
Z*_{i,P} = argmin over Z_i of E_i(h_P, Z_i, D)    (1)
The training process thus yields k × n assignments Z*_{i,P}. Defining d as a single trajectory contained in D, and T_d^i as the set of time steps of trajectory d for which predictor i can generate a prediction, the training error E_i is defined as the average of these prediction errors:
E_i(h_P, Z_i, D) = ( Σ_{d∈D} Σ_{t∈T_d^i} || x̂_d^i(t + h_P) − x_d(t + h_P) || ) / ( Σ_{d∈D} |T_d^i| )    (2)
where x̂_d^i is the prediction of predictor i, a function of the parameter values Z_i.
For the output results of the multiple classifiers, D-S evidence theory is used. Let Θ = {θ_1, θ_2, …, θ_n} denote the set of all possibilities in the rescue-scene environment, where each θ_i is a conclusion the system may draw. The basic probability assignment, the belief function and the plausibility function of D-S evidence theory are defined respectively as:
m: 2^Θ → [0, 1], m(∅) = 0, Σ_{A⊆Θ} m(A) = 1    (3)
Bel(A) = Σ_{B⊆A} m(B)    (4)
Pl(A) = Σ_{B∩A≠∅} m(B)    (5)
In equation (3), A is a hypothesis in the recognition framework, A ⊆ Θ, and m(A) is the basic probability assigned to it. In equation (4), Bel(A) is the sum of the basic probabilities of all subsets of A, and in equation (5) Pl(A) is the sum of the basic probabilities of all subsets intersecting A. Since the individual belief assignments over 2^Θ → [0, 1] are independent of one another, any conflict between them can be quantified and resolved using Dempster's combination rule: for all A ⊆ Θ, A ≠ ∅, and given n basic probability assignments m_1, m_2, …, m_n, Dempster's rule is calculated by equations (6) and (7):
(m_1 ⊕ m_2 ⊕ … ⊕ m_n)(A) = (1 / (1 − K)) Σ_{B_1∩…∩B_n = A} Π_{i=1}^{n} m_i(B_i)    (6)
K = Σ_{B_1∩…∩B_n = ∅} Π_{i=1}^{n} m_i(B_i)    (7)
where K represents the conflict measure between the belief functions.
The classification algorithms are fused by training multiple classifiers to obtain classifier models; then, using information-fusion technology based on D-S evidence theory, the results of the multiple classifiers for the same kind of object are fused with Dempster's combination rule, and the result with the highest accuracy is selected as the fused target information to output the recognition result.
The invention also provides a rescue robot-based site investigation and material supply system for implementing the method as described above, the system comprising:
the path planning unit is used to acquire a planned path into the rescue site based on a prior map and to send operation instructions to the chassis control system based on the planned path, so that the rescue robot enters the accident scene along the planned path in accordance with the operation instructions;
the rescue data acquisition unit is used for acquiring environmental data and real-time image data of an accident scene and transmitting the environmental data and the real-time image data to the command center;
and the rescue instruction receiving unit is used for receiving a rescue strategy formulated by the command center according to the real-time image data and completing field investigation and material supply based on the rescue strategy.
The invention also provides a terminal device, the device comprises: the system comprises a data acquisition unit, a processor and a memory;
the data acquisition unit is used for acquiring data; the memory is to store one or more program instructions; the processor is configured to execute one or more program instructions to perform the method as described above.
The present invention also provides a computer readable storage medium having embodied therein one or more program instructions for executing the method as described above.
According to the rescue robot-based site investigation and material supply method provided by the invention, a planned path into the rescue site is obtained from a prior map, and operation instructions are sent to the chassis control system based on that path, so that the rescue robot enters the accident scene along the planned path; environmental data and real-time picture data of the accident scene are acquired and transmitted to the command center; and the rescue strategy formulated by the command center from the real-time picture data is received, with site investigation and material supply completed on that basis. The rescue robot thus delivers rescue instruments, goods and medicines to fixed points whenever supplies run short during rescue; as special rescue equipment it can autonomously reach multiple designated positions, provide the corresponding support, and cooperate with rescuers to complete rescue tasks. As a professional rescue-supply robot, it provides constant-temperature, refrigerated and sealed storage for goods needing special preservation, ensuring that such goods are not lost or spoiled. Rescuers can operate the robot by voice: once an instruction for a designated article is issued, the corresponding storage compartment opens automatically so the article can be taken immediately. The technical problems of inefficient on-site investigation and material supply by rescue robots in the prior art are thereby solved.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like parts are designated by like reference numerals throughout the drawings. In the drawings:
fig. 1 is a flowchart of a rescue robot-based site survey and material supply method according to an embodiment of the present invention;
FIG. 2 is a flow chart of the method of FIG. 1 in a use scenario;
FIG. 3 is a block diagram of an embodiment of the rescue robot-based site survey and material supply system according to the present invention;
FIG. 4 is an architectural diagram of the system shown in FIG. 3;
FIG. 5 is a flow chart of a method for fusing multiple classifiers based on D-S evidence theory.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The rescue robot-based on-site investigation and material supply method provided by the invention addresses chaotic scenes, difficult investigation and difficult material supply. Combining knowledge from artificial intelligence, simultaneous localization and mapping, voice recognition and mechanical engineering, it solves the difficulty and delay of on-site investigation and material supply in accident rescue.
The method is based on a rescue robot whose hardware mainly comprises a 3D laser sensor, an RGB-D image sensor, a nine-axis attitude sensor, an ultrasonic sensor, a thermal imager, a field controller, a touch screen, an intelligent storage cabinet, a chassis and an aluminium-alloy shell. Structurally, the 3D laser sensor is mounted on top of the robot to collect point-cloud information of the surrounding environment in real time, while an RGB-D image sensor and a thermal imager are mounted at the front and rear respectively; the RGB-D image sensor acquires depth visual information and real-time picture information of the environment, and the thermal imager detects trapped persons and potential overheating hazards of equipment at the accident scene. Integrating this information, the command center can formulate an effective rescue scheme from the collected data.
In one embodiment, the present invention provides a rescue robot-based site exploration and material supply method, as shown in fig. 1, the method comprising the steps of:
s1: and acquiring a planned path entering a rescue place based on a prior map, and sending an operation instruction to a chassis control system based on the planned path so that the rescue robot enters an accident scene according to the planned path according to the operation instruction.
Specifically, in step S1, when planning a path through a prior map, an optimal collision-free path from a starting point to a target point is established according to the prior map; and then, sending an operation instruction to a chassis control system based on the optimal collision-free path so that the rescue robot can enter an accident scene according to the operation instruction and the optimal collision-free path. Meanwhile, in the process of advancing to the accident site, dynamic obstacles and the like in the optimal collision-free path are avoided in real time.
Establishing the optimal collision-free path from the starting point to the target point according to the prior map specifically includes:
in the pre-established three-dimensional prior map, marking a plurality of positions between the starting point and the target position of the accident scene, and acquiring the three-dimensional coordinate information corresponding to each position;
numbering the obtained three-dimensional coordinates with sequence numbers, and determining for each sequence number the other sequence numbers connected to it;
and selecting, among all the listed qualifying paths, the one with the shortest distance as the optimal collision-free path. Specifically: all paths between the starting point and the end point are searched with a global search algorithm; paths with a large number of steps are removed; the length of each path segment is calculated; and after accumulation the shortest path is selected as the optimal collision-free path.
That is, in the path planning and navigation system, global path planning first establishes an optimal collision-free path from the starting point to the target point according to the prior map, and local path planning then avoids dynamic obstacles on the path in real time. Commonly used global path-planning algorithms include A* and Dijkstra; common local planners include TEB and DWA. Building on the Dijkstra and DWA algorithms, and addressing the limitation that existing autonomous mobile robots can only navigate between two target points, the invention implements multi-target autonomous navigation through an improved path-planning algorithm. The algorithm accepts the robot's end point together with several other target points to be passed before reaching it; after optimization, the robot traverses all target points and reaches the end point in the shortest time.
The specific operating procedure can be divided into the following steps. In the established three-dimensional map, a number of positions are marked according to the specific situation; these can be rescue positions, intersections, corners and the like, distributed as evenly as possible. Since each position has corresponding three-dimensional coordinate information, the target positions can be ordered by adding sequence numbers to the three-dimensional coordinates, after which the sequence numbers connected to each sequence number are determined. Before the navigation function is started, the end point and the rescue positions the robot must pass on the way are determined; the algorithm then automatically lists all qualifying paths and selects the one with the shortest distance.
The algorithm is implemented in two stages. In the path-planning stage, a global search algorithm finds all paths between the starting point and the end point, paths with a large number of steps are removed, the length of each segment is calculated, and after accumulation the shortest path is selected as the final result. In the navigation stage, the move_base package in ROS is extended: a Python node is created, and the subscriber callback mechanism is used so that the callback function receives a pointer to an action message. The callback is invoked on messages published on the topics cmd_vel, move_base/global_plan, move_base/goal and odom_rf2o; the robot then passes through each numbered area in turn along the planned path. After receiving the Twist messages published on cmd_vel, the Base Controller node drives the lower computer to move the robot. When several positions must be surveyed, or rescue materials delivered to several target positions, this effectively saves task time and improves rescue efficiency. During a reconnaissance task, to increase execution efficiency, the robot plans the path from its current position to the target point of the next numbered area once it is within 10 m of that area, so that it keeps a uniform speed between numbered areas and rescue efficiency is guaranteed.
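As an illustration of the "list all qualifying paths, accumulate segment lengths, keep the shortest" step described above, the following is a minimal Python sketch. It is a sketch only: the node names and segment lengths are hypothetical, and in the real system the pairwise segment lengths would come from the Dijkstra global planner on the prior map rather than from a hard-coded table.

```python
from itertools import permutations

# Hypothetical pairwise segment lengths (metres) between numbered
# positions, as a Dijkstra global planner might report them on the
# prior map. 'S' is the start, 'E' the end point.
SEGMENT = {
    ('S', 1): 12.0, ('S', 2): 20.0, ('S', 3): 15.0,
    (1, 2): 9.0, (1, 3): 11.0, (2, 3): 7.0,
    (1, 'E'): 18.0, (2, 'E'): 10.0, (3, 'E'): 14.0,
}

def seg(a, b):
    """Symmetric lookup of a segment length."""
    return SEGMENT.get((a, b)) or SEGMENT[(b, a)]

def shortest_tour(start, end, waypoints):
    """Enumerate every visiting order of the waypoints, accumulate the
    per-segment lengths, and return the shortest complete path."""
    best_stops, best_len = None, float('inf')
    for order in permutations(waypoints):
        stops = [start, *order, end]
        length = sum(seg(a, b) for a, b in zip(stops, stops[1:]))
        if length < best_len:
            best_stops, best_len = stops, length
    return best_stops, best_len

if __name__ == '__main__':
    path, metres = shortest_tour('S', 'E', [1, 2, 3])
    print(f'optimal order: {path}, total length: {metres:.1f} m')
```

Brute-force enumeration is adequate for the handful of waypoints typical of a rescue run; with many waypoints the problem becomes a travelling-salesman instance, and a heuristic would be substituted.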
S2: acquiring environmental data and real-time picture data of the accident scene, and transmitting the environmental data and real-time picture data to the command center. Specifically, control instructions containing the rescue strategy are added to the source-code files of the voice instructions; the voice command is received, and site investigation, material inquiry, map-information inquiry and the taking and supply of rescue materials are completed based on the rescue strategy in the voice command.
In actual use, robot actions are controlled through voice recognition. For example, the iFLYTEK (Keda Xunfei) voice SDK can be modified accordingly, adding the corresponding control instructions to its source-code files according to the actual usage conditions. Through the voice function, rescuers can directly control the robot, query materials and map information, take rescue materials from the storage compartments, and so on.
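To illustrate how control instructions added to the voice SDK's source files might be dispatched, here is a hedged Python sketch; the phrase set, handler names and RobotStub interface are hypothetical stand-ins, not the SDK's actual API.

```python
class RobotStub:
    """Minimal stand-in for the field-controller interface (hypothetical)."""
    def open_compartment(self): print('storage compartment opened')
    def report_inventory(self): print('materials: bandages x20, oxygen x4')
    def display_map(self): print('showing current position on the 3D map')
    def navigate_next(self): print('navigating to the next rescue point')

# Hypothetical mapping from recognized key phrases to robot actions,
# standing in for the control instructions added to the SDK sources.
COMMANDS = {
    'open storage': RobotStub.open_compartment,
    'query materials': RobotStub.report_inventory,
    'show map': RobotStub.display_map,
    'next rescue point': RobotStub.navigate_next,
}

def handle_transcript(robot: RobotStub, transcript: str) -> bool:
    """Dispatch the first command whose key phrase appears in the
    recognized text; return True if an action was executed."""
    text = transcript.lower()
    for phrase, action in COMMANDS.items():
        if phrase in text:
            action(robot)
            return True
    return False

handle_transcript(RobotStub(), 'please open storage for bandages')
```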
S3: receiving the rescue strategy formulated by the command center according to the real-time picture data, and completing site investigation and material supply based on the rescue strategy.
In order to improve picture quality, acquiring the environmental data and real-time picture data of the accident scene further includes performing instance segmentation of persons, dangerous objects and the like on the acquired real-time picture data, which specifically comprises:
taking the acquired real-time picture data as an RGB image T, with a depth map S of the image T;
constructing a feature matrix R(T, S) for each RoI region:
R(T, S) = [a(t) * b(s)]
wherein a(t) and b(s) respectively represent the RGB and depth data;
setting a gate function G1, with a weight parameter, to filter the depth map (formula given as an image in the original);
setting a gate function G2 to back up the regions erroneously discarded by G1;
constructing, from the original feature map, a compensated feature map with its own weight parameter and the same dimensions as the depth map b(s) (formula given as an image in the original);
filtering the data information to be retained (formula given as an image in the original);
and combining the outputs of the two gate branches to obtain the output result P(i).
That is, for object recognition and segmentation, the invention performs instance segmentation of persons, dangerous objects and the like using the RGB-D image data acquired by the depth vision sensor. Such methods work well indoors, but in special scenes such as accident sites the depth image becomes blurred, the difference between two adjacent instance objects remains small, and even after depth normalization the instance-segmentation result is poor. To address these problems, a residual compensation mechanism is used and merged, in an end-to-end manner, into the widely used instance-segmentation framework Mask R-CNN.
Specifically, the RGB image T and the corresponding depth map S are input to the network, and the feature matrix R(T, S) of each RoI region is first constructed:
R(T, S) = [a(t) * b(s)]
where a(t) and b(s) respectively represent the RGB and depth data, the depth data being used to enhance the instance-segmentation effect. Because the depth vision sensor has a limited working range, the original depth information may contain noise that would degrade the final prediction, so the data in the depth map must be optimized by filtering. A gate function G1 (with a weight parameter; formula given as an image in the original) is therefore set to extract cues such as instance boundaries from the depth map and to suppress noise regions.
During this filtering, however, some important regions may be erroneously discarded by G1. A compensation mechanism therefore gives erroneously discarded key regions the chance to be backed up in the next unit: a gate function G2 backs up the regions discarded in error, with 1 − G1 used to screen the important regions among them. G2 is constructed like G1 from the original feature map but with its own weight parameters, producing a compensated feature map of the same dimensions as the depth map b(s), so that information completeness is achieved (formulas given as images in the original). The data information to be retained is then filtered, and finally the outputs of the two branches are combined to obtain the output result P(i).
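Because the gate formulas survive only as images in the original, the following PyTorch sketch shows the mechanism as the prose describes it, assuming sigmoid gates realized as 1×1 convolutions over the RoI features; the module name, layer shapes and gating forms are assumptions, not the patented network.

```python
import torch
import torch.nn as nn

class GatedDepthCompensation(nn.Module):
    """Sketch of the two-gate residual compensation described above:
    G1 keeps depth features judged informative, G2 recovers a share of
    what G1 discarded, and the two branches are summed."""

    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convolutions stand in for the unspecified weight parameters.
        self.gate1 = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        self.gate2 = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())

    def forward(self, rgb_feat: torch.Tensor, depth_feat: torch.Tensor) -> torch.Tensor:
        fused = rgb_feat * depth_feat   # R(T, S) = [a(t) * b(s)]
        g1 = self.gate1(depth_feat)     # G1: pass informative regions
        kept = fused * g1
        # Residual branch: re-screen what G1 suppressed via G2.
        g2 = self.gate2(depth_feat)
        backup = fused * (1.0 - g1) * g2
        return kept + backup            # combine to produce P(i)

# Usage on dummy RoI feature maps:
roi_rgb = torch.randn(4, 256, 14, 14)
roi_depth = torch.randn(4, 256, 14, 14)
out = GatedDepthCompensation(256)(roi_rgb, roi_depth)
print(out.shape)  # torch.Size([4, 256, 14, 14])
```

The residual branch weighted by (1 − G1) · G2 is what gives erroneously discarded regions a second chance, mirroring the compensation mechanism described above.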
The segmented image data are used to identify persons, dangerous objects and the like. A multi-classifier fusion prediction method is designed, and the multi-modal prediction system operates in two steps. First, all prediction models are trained on the same data set D_train to obtain the model parameters of each prediction model. Second, each prediction method uses the parameters found in the first step to make predictions on the target data set D_predict: for each trajectory in D and each prediction time range of interest ΔT, each predictor generates an estimate of the future position, denoted x̂. The prediction results of each method are then used to calculate weights for the predicted values, and these weights in turn define the rules by which the multi-prediction system selects which prediction methods should be used.
For a single classifier module, training is supervised learning of the model parameters of each prediction method of the multi-modal prediction system. Assume n prediction methods are available, and for each method i = 1, …, n let Z_i denote its vector of m_i adjustable parameters, where m_i is the number of adjustable parameters of method i. The possible values of the independent parameters are limited to a given discrete set, and the goal is to identify for each method a set of parameter assignments that minimizes the training error of that prediction method.
Because prediction accuracy may vary significantly with the prediction time horizon, consider a set H of discrete prediction time ranges of length k, and denote each time-range value as h_P, where P = 1, …, k. The aim is therefore to find the most suitable parameter-value assignment Z_i for each time range in H. If E_i(h_P, Z_i, D) denotes the total training error of method i for time range h_P, parameter assignment Z_i and input data D, then the assignment from the given discrete set that achieves the best performance for a given method and time horizon is defined as:
Z*_{i,P} = argmin over Z_i of E_i(h_P, Z_i, D)    (1)
The training process thus yields k × n assignments Z*_{i,P}. Defining d as a single trajectory contained in D, and T_d^i as the set of time steps of trajectory d for which predictor i can generate a prediction, the training error E_i is defined as the average of these prediction errors:
E_i(h_P, Z_i, D) = ( Σ_{d∈D} Σ_{t∈T_d^i} || x̂_d^i(t + h_P) − x_d(t + h_P) || ) / ( Σ_{d∈D} |T_d^i| )    (2)
where x̂_d^i is the prediction of predictor i, a function of the parameter values Z_i.
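As a concrete illustration of the per-horizon parameter search Z*_{i,P} = argmin E_i, here is a small self-contained Python sketch; the toy predictor, its discrete parameter grid, and the trajectories are hypothetical stand-ins, not the patent's prediction methods.

```python
import math

# Hypothetical trajectories: lists of (x, y) positions at unit time steps.
trajectories = [
    [(t * 1.0, t * 0.5) for t in range(20)],
    [(t * 0.8, math.sin(t / 3.0)) for t in range(20)],
]

def predict(track, t, horizon, z):
    """Toy predictor: constant-velocity extrapolation scaled by a single
    tunable parameter z (stand-in for the method's adjustable Z_i)."""
    (x0, y0), (x1, y1) = track[t - 1], track[t]
    return (x1 + z * horizon * (x1 - x0), y1 + z * horizon * (y1 - y0))

def training_error(horizon, z, data):
    """E_i(h_P, Z_i, D): average Euclidean error over all trajectories and
    all time steps at which a prediction can be generated."""
    errs = []
    for track in data:
        for t in range(1, len(track) - horizon):
            px, py = predict(track, t, horizon, z)
            gx, gy = track[t + horizon]
            errs.append(math.hypot(px - gx, py - gy))
    return sum(errs) / len(errs)

H = [1, 3, 5]                # discrete prediction horizons h_P
grid = [0.5, 0.8, 1.0, 1.2]  # discrete values allowed for Z_i
best = {h: min(grid, key=lambda z: training_error(h, z, trajectories)) for h in H}
print(best)                  # Z*_{i,P}: the best parameter value per horizon
```

Repeating this grid search for each of the n prediction methods yields the k × n assignments the text describes.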
For the output results of the multiple classifiers, D-S evidence theory is used. Let Θ = {θ_1, θ_2, …, θ_n} denote the set of all possibilities in the rescue-scene environment, where each θ_i is a conclusion the system may draw. The basic probability assignment, the belief function and the plausibility function of D-S evidence theory are defined respectively as:
m: 2^Θ → [0, 1], m(∅) = 0, Σ_{A⊆Θ} m(A) = 1    (3)
Bel(A) = Σ_{B⊆A} m(B)    (4)
Pl(A) = Σ_{B∩A≠∅} m(B)    (5)
In equation (3), A is a hypothesis in the recognition framework, A ⊆ Θ, and m(A) is the basic probability assigned to it. In equation (4), Bel(A) is the sum of the basic probabilities of all subsets of A, and in equation (5) Pl(A) is the sum of the basic probabilities of all subsets intersecting A. Since the individual belief assignments over 2^Θ → [0, 1] are independent of one another, any conflict between them can be quantified and resolved using Dempster's combination rule: for all A ⊆ Θ, A ≠ ∅, and given n basic probability assignments m_1, m_2, …, m_n, Dempster's rule is calculated by equations (6) and (7):
(m_1 ⊕ m_2 ⊕ … ⊕ m_n)(A) = (1 / (1 − K)) Σ_{B_1∩…∩B_n = A} Π_{i=1}^{n} m_i(B_i)    (6)
K = Σ_{B_1∩…∩B_n = ∅} Π_{i=1}^{n} m_i(B_i)    (7)
where K represents the conflict measure between the belief functions.
The classification algorithms are fused by training multiple classifiers to obtain classifier models; then, using information-fusion technology based on D-S evidence theory, the results of the multiple classifiers for the same kind of object are fused with Dempster's combination rule, and the result with the highest accuracy is selected as the fused target information to output the recognition result.
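As a concrete illustration of the Dempster combination step, here is a minimal Python sketch fusing two classifiers over singleton hypotheses; the class names and mass values are hypothetical.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic probability assignments over frozenset focal
    elements using Dempster's rule; K is the conflict mass."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb   # K: mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError('total conflict: evidence cannot be combined')
    return {a: m / (1.0 - conflict) for a, m in combined.items()}

# Hypothetical masses from two classifiers over {person, fire, equipment}.
P, F, E = frozenset({'person'}), frozenset({'fire'}), frozenset({'equipment'})
m_vision = {P: 0.6, F: 0.3, E: 0.1}
m_thermal = {P: 0.5, F: 0.4, E: 0.1}
fused = dempster_combine(m_vision, m_thermal)
print(max(fused, key=fused.get), fused)  # highest-mass hypothesis wins
```

In the system described above, one mass assignment per classifier would be combined in the same way, and the hypothesis with the highest fused mass is output as the recognition result.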
The overall scheme is now described in combination with an actual rescue scene, as shown in fig. 2. When an accident such as a fire or explosion occurs, the first task before rescue is to survey the internal environment of the accident site, detect hazards, and search for trapped people, so that rescuers can be deployed sensibly and trapped people evacuated safely. Before rescuers enter, the robot first takes their place to gain a preliminary picture of the accident: the command center remotely drives the robot into the scene, and a three-dimensional map of the accident site is established. The map is uploaded to the command center, which can then grasp the overall situation of the accident from it.
Meanwhile, the RGB-D image sensors at the front and rear of the robot collect real-time picture information on site. A deep-learning algorithm performs instance segmentation of the real-time scene, identifying buildings, vehicles, personnel, flames and various devices; flammable, explosive and other dangerous objects are marked and their danger level judged. The thermal imager displays the thermal image of the site in real time. Integrating this with the three-dimensional map information, the command center can grasp the overall situation of the accident site in detail.
In this process the robot can effectively find trapped persons: when a trapped person is detected in the real-time picture, the command center can immediately send rescuers. Occluded trapped people can be found in time through the thermal image, and equipment with potential overheating hazards can likewise be found and dealt with promptly.
Using the established three-dimensional map, the robot can navigate autonomously according to the map. For danger-index evaluation of dangerous objects, a danger-index evaluation method based on dangerous-object type and distance is designed: the danger index is evaluated from the recognition result of the dangerous object and the depth information of the visual sensor, and different dangerous objects may exhibit different danger indexes at the same distance from the robot. Suppose the central point (TCP) of the robot has coordinate position R_t in the global coordinate system, velocity v_tcp and braking time T_b, and that the dangerous object has coordinate value H_o and corresponding velocity v_danger. The danger threshold D_t-d is then defined by equation (8) (given only as an image in the original); it is also the minimum distance to keep between the robot and the hazard.
To ensure safety, the robot should satisfy equation (9) in the next time step:
|H_o − R_t| > D_t-d    (9)
Equation (9) defines the dangerous zone within the rescue area: when no dangerous object lies inside the dangerous zone, the robot can run at maximum power. Clearly D_t-d varies with the robot's speed. Defining the robot's position in the global map as R_m, with velocity v_m, and combining this with the definition of the danger threshold, the threshold is calculated by equation (10) (image in the original). The danger index s_danger of the dangerous goods in the rescue area is then calculated and, by definition, can be expressed as equation (11) (image in the original). According to the danger index, the control of the robot's running speed v_c can be expressed as equation (12) (image in the original), where f(s_danger − s_slow) represents the relationship between the danger index and the robot speed established in equations (8), (9) and (10).
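Equations (8) and (10)-(12) survive only as images, so the following Python sketch illustrates one plausible reading of the scheme, assuming a braking-distance threshold of the form (v_tcp + v_danger) · T_b plus a margin and a linear speed reduction; the functional forms and constants are assumptions, not the patent's formulas.

```python
import math

def danger_threshold(v_robot, v_hazard, braking_time, margin=0.5):
    """Assumed form of eq. (8): distance both parties can close before
    the robot has fully braked, plus a safety margin (metres)."""
    return (v_robot + v_hazard) * braking_time + margin

def danger_index(robot_pos, hazard_pos, threshold, weight=1.0):
    """Assumed form of eq. (11): index grows as the hazard approaches
    the threshold distance; 'weight' encodes the hazard type."""
    dist = math.dist(robot_pos, hazard_pos)
    if dist > threshold:
        return 0.0                     # outside the dangerous zone
    return weight * (threshold - dist) / threshold

def commanded_speed(v_max, index, s_slow=0.2):
    """Assumed form of eq. (12): full speed below s_slow, linearly
    slower above it, stopping (and replanning) as the index nears 1."""
    if index <= s_slow:
        return v_max
    return max(0.0, v_max * (1.0 - (index - s_slow) / (1.0 - s_slow)))

robot, hazard = (0.0, 0.0), (1.5, 2.0)   # positions in the global map
thr = danger_threshold(v_robot=1.0, v_hazard=0.2, braking_time=2.0)
idx = danger_index(robot, hazard, thr, weight=2.0)  # e.g. a gas cylinder
print(f'threshold={thr:.2f} m, index={idx:.2f}, '
      f'speed={commanded_speed(1.0, idx):.2f} m/s')
```

A speed of zero at a high index corresponds to the re-planning behaviour described above: rather than creeping past a severe hazard, the robot searches for a new path.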
If an emergency arises while rescuers are working on site, rescue materials must be arranged immediately; the command center only has to mark the rescuers' position in the three-dimensional map, and the robot travels to the target area autonomously. Obstacles encountered on the way are avoided autonomously, and if the road is blocked, the robot re-plans its path, ensuring that the rescue task is completed smoothly. In special cases the command center can send the robot to several target areas at once; the multi-point positioning and navigation algorithm guarantees the shortest path and minimum travel time through all of them.
After the robot arrives at the target area, rescuers can talk to it by voice; once a material demand is stated, the storage cabinet opens automatically for quick access, and the specific material information is shown on the touch screen. Rescuers can also check their own position on the touch screen, and when they want to move to the next rescue point or leave the scene, they can operate the robot to guide them there quickly. If a rescuer does not know the specific mode of operation, the command center can operate the robot remotely to help complete the relevant actions.
After the task is finished, the robot returns to the command center automatically or executes a patrol task, depending on the situation. For a patrol task, the command center marks in the three-dimensional map the target areas the robot must pass through, and the robot autonomously plans an optimal path through all of them. During the patrol, the command center observes the accident site in real time through the robot's object-recognition function; undiscovered trapped persons, dangerous articles and the like are reported back promptly, and the command center can act immediately on the feedback.
In the above embodiment, the rescue robot-based site investigation and material supply method provided by the invention obtains a planned path into the rescue site from a prior map and sends operation instructions to the chassis control system based on that path, so that the rescue robot enters the accident scene along the planned path; environmental data and real-time picture data of the accident scene are acquired and transmitted to the command center; and the rescue strategy formulated by the command center from the real-time picture data is received, with site investigation and material supply completed on that basis. The rescue robot thus delivers rescue instruments, goods and medicines to fixed points when supplies run short during rescue; as special rescue equipment it can autonomously reach multiple designated positions, provide corresponding support, and cooperate with rescuers to complete rescue tasks. As a professional rescue-supply robot, it offers constant-temperature, refrigerated and sealed storage for goods needing special preservation, ensuring they are not lost or spoiled. Rescuers can operate the robot by voice: once an instruction for a designated article is issued, the corresponding storage compartment opens automatically so the article can be taken immediately. The technical problems of inefficient on-site investigation and material supply by rescue robots in the prior art are thereby solved.
In addition to the above method, the present invention also provides a rescue robot-based site exploration and material supply system for implementing the above method, which, in one embodiment, as shown in fig. 3, comprises:
the path planning unit 100 is configured to acquire a planned path into the rescue site based on a prior map, and to send operation instructions to the chassis control system based on the planned path, so that the rescue robot enters the accident scene along the planned path in accordance with the operation instructions;
the rescue data acquisition unit 200 is used for acquiring environmental data and real-time image data of an accident scene, sensing environmental data of a disaster scene in real time through a multi-classifier fusion algorithm, and transmitting the environmental data and the real-time image data to a command center;
and the rescue instruction receiving unit 300 is used for receiving a rescue strategy formulated by the command center according to the real-time image data and completing field investigation and material supply based on the rescue strategy.
In an actual use scenario, as shown in fig. 4, the system hardware comprises a field controller (upper computer) together with an RGB-D image sensor, a 3D laser sensor, a thermal imager, an ultrasonic sensor and a nine-axis attitude sensor, all communicatively connected to the field controller for data transmission; each sensor sends its detected data to the field controller. The field controller integrates the data and transmits them to the command center, receives the control instructions fed back by the command center, and on that basis controls a single-chip-microcomputer control board (lower computer), which then drives motors I, II, III and IV through their respective driver boards.
In the above embodiment, the rescue robot-based site investigation and material supply system provided by the invention obtains a planned path into the rescue site from a prior map and sends operation instructions to the chassis control system based on that path, so that the rescue robot enters the accident scene along the planned path; acquires environmental data and real-time picture data of the accident scene and transmits them to the command center; and receives the rescue strategy formulated by the command center from the real-time picture data, completing site investigation and material supply on that basis. The rescue robot thus delivers rescue instruments, goods and medicines to fixed points when supplies run short during rescue; as special rescue equipment it can autonomously reach multiple designated positions, provide corresponding support, and cooperate with rescuers to complete rescue tasks. As a professional rescue-supply robot, it offers constant-temperature, refrigerated and sealed storage for goods needing special preservation, ensuring they are not lost or spoiled. Rescuers can operate the robot by voice: once an instruction for a designated article is issued, the corresponding storage compartment opens automatically so the article can be taken immediately. The technical problems of inefficient on-site investigation and material supply by rescue robots in the prior art are thereby solved.
The invention also provides a terminal device, the device comprises: the system comprises a data acquisition unit, a processor and a memory;
the data acquisition unit is used for acquiring data; the memory is to store one or more program instructions; the processor is configured to execute one or more program instructions to perform the method as described above.
Corresponding to the above embodiments, embodiments of the present invention also provide a computer storage medium containing one or more program instructions, wherein the one or more program instructions are used to execute the method described above.
It is to be understood that the terminology used herein is for the purpose of describing particular example embodiments only, and is not intended to be limiting. As used herein, the singular forms "a", "an" and "the" may be intended to include the plural forms as well, unless the context clearly indicates otherwise. The terms "comprises," "comprising," "including," and "having" are inclusive and therefore specify the presence of stated features, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, and/or groups thereof. The method steps, processes, and operations described herein are not to be construed as necessarily requiring their performance in the particular order described or illustrated, unless specifically identified as an order of performance. It should also be understood that additional or alternative steps may be used.
Although the terms first, second, third, etc. may be used herein to describe various elements, components, regions, layers and/or sections, these elements, components, regions, layers and/or sections should not be limited by these terms. These terms may be only used to distinguish one element, component, region, layer or section from another region, layer or section. Terms such as "first," "second," and other numerical terms when used herein do not imply a sequence or order unless clearly indicated by the context. Thus, a first element, component, region, layer or section discussed below could be termed a second element, component, region, layer or section without departing from the teachings of the example embodiments.
In an embodiment of the invention, the processor may be an integrated circuit chip having signal processing capability. The Processor may be a general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete Gate or transistor logic device, discrete hardware component.
The various methods, steps and logic blocks disclosed in the embodiments of the present invention may be implemented or performed. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the methods disclosed in connection with the embodiments of the present invention may be embodied directly in a hardware decoding processor, or in a combination of hardware and software modules within a decoding processor. The software module may be located in RAM, flash memory, ROM, PROM, EPROM, registers or another storage medium known in the art. The processor reads the information in the storage medium and completes the steps of the method in combination with its hardware.
The storage medium may be a memory, for example, which may be volatile memory or nonvolatile memory, or which may include both volatile and nonvolatile memory.
The nonvolatile Memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash Memory.
The volatile memory may be a Random Access Memory (RAM), which serves as an external cache. By way of example and not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM) and Direct Rambus RAM (DRRAM).
The storage media described in connection with the embodiments of the invention are intended to comprise, without being limited to, these and any other suitable types of memory.
Those skilled in the art will appreciate that the functionality described in the present invention may be implemented in a combination of hardware and software in one or more of the examples described above. When implemented in software, the corresponding functionality may be stored on, or transmitted over, a computer-readable medium as one or more instructions or code. Computer-readable media include both computer storage media and communication media, the latter including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer.
The above embodiments are only for illustrating the embodiments of the present invention and are not to be construed as limiting the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made on the basis of the embodiments of the present invention shall be included in the scope of the present invention.

Claims (12)

1. A rescue robot-based on-site investigation and material supply method, characterized by comprising the following steps:
acquiring a planned path into the rescue site based on a prior map, and sending an operation instruction to a chassis control system based on the planned path, so that the rescue robot enters the accident scene along the planned path according to the operation instruction;
acquiring environmental data and real-time picture data of the accident scene, and running a multi-modal prediction algorithm on an on-site edge computing unit to complete the investigation of the scene environment; then evaluating the danger index of each dangerous object according to the recognition result of the dangerous objects and the depth information of the vision sensor, displaying the evaluation in the real-time picture, and transmitting the environmental data and the real-time picture data to a command center;
the command center formulates a rescue strategy according to the real-time picture data, and determines the robot's movement target and the materials to be supplied based on the rescue strategy;
the on-site robot autonomously navigates to the target position according to the rescue task and a path planning algorithm to deliver the materials, adopts different traveling speeds according to the danger indexes, and re-plans its path when the danger is high.
2. The method according to claim 1, wherein the acquiring of a planned path into the rescue site based on the prior map and the sending of an operation instruction to the chassis control system based on the planned path, so that the rescue robot enters the accident scene along the planned path according to the operation instruction, specifically comprise:
establishing an optimal collision-free path from a starting point to a target point according to a prior map;
sending an operation instruction to the chassis control system based on the optimal collision-free path, so that the rescue robot enters the accident scene along the optimal collision-free path according to the operation instruction;
meanwhile, in the process of advancing to the accident scene, avoiding dynamic obstacles along the optimal collision-free path in real time, using an obstacle avoidance method that minimizes the danger index.
3. The method according to claim 2, wherein establishing the optimal collision-free path from the starting point to the target point according to the prior map specifically comprises:
in a pre-established three-dimensional prior map, randomly marking a plurality of positions between the starting point and the target position of the accident scene according to actual rescue needs, and acquiring the three-dimensional coordinate information of each position;
numbering the obtained three-dimensional coordinates in sequence, and determining, for each sequence number, the other sequence numbers connected to it;
and selecting the path with the shortest distance from all the listed paths meeting the conditions as the optimal collision-free path.
4. The method according to claim 3, wherein selecting the path with the shortest distance among all the listed eligible paths as the optimal collision-free path specifically comprises:
searching all paths between the starting point and the end point by using a global search algorithm;
removing paths with an excessive number of steps;
and calculating the length of each path segment, accumulating the segments, and selecting the shortest path as the optimal collision-free path.
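Claims 3 and 4 together describe a shortest-path computation over numbered waypoints. The following is a minimal sketch of one way to realize it; the data structures (a dict mapping sequence numbers to 3-D coordinates, plus an adjacency dict) are assumptions, and Dijkstra's algorithm is used in place of the claim's exhaustive enumeration, since it yields the same shortest path without listing every candidate.

```python
import heapq
import math

def optimal_collision_free_path(coords, adjacency, start, goal):
    """Shortest path over numbered waypoints (hypothetical structures:
    coords[n] = (x, y, z); adjacency[n] = iterable of connected numbers)."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > dist.get(u, math.inf):
            continue  # stale queue entry
        for v in adjacency[u]:
            nd = d + math.dist(coords[u], coords[v])  # 3-D segment length
            if nd < dist.get(v, math.inf):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if goal not in dist:
        raise ValueError("no collision-free path between the given waypoints")
    # Reconstruct the waypoint sequence of the optimal collision-free path.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]
```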
5. The method according to claim 1, wherein the command center formulates a rescue strategy according to the real-time picture data, and on-site investigation and material supply are completed based on the rescue strategy, specifically comprising:
adding control instructions containing the rescue strategy to the source file of the voice instruction module;
and receiving a voice instruction, and completing on-site investigation, material inquiry, map information inquiry, and rescue material retrieval and supply based on the rescue strategy carried in the voice instruction.
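As an illustration of the voice-instruction dispatch described in claim 5, here is a minimal sketch; the command phrases and robot methods are hypothetical placeholders, not names from the patent.

```python
# Hypothetical mapping from recognized voice phrases to rescue actions.
COMMAND_HANDLERS = {
    "survey scene": lambda robot: robot.run_scene_survey(),
    "query materials": lambda robot: robot.list_onboard_materials(),
    "query map": lambda robot: robot.report_map_info(),
    "deliver materials": lambda robot: robot.navigate_and_supply(),
}

def dispatch_voice_instruction(phrase, robot):
    """Route a recognized voice instruction to its handler."""
    handler = COMMAND_HANDLERS.get(phrase.strip().lower())
    if handler is None:
        raise ValueError(f"unrecognized rescue instruction: {phrase!r}")
    return handler(robot)
```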
6. The method of claim 1, wherein the acquiring of environmental data and real-time picture data of the accident scene further comprises:
performing instance segmentation on personnel, dangerous objects and other targets in the acquired real-time picture data.
7. The method according to claim 6, wherein the instance segmentation of personnel, dangerous objects and other targets in the acquired real-time picture data specifically comprises:
taking the acquired real-time picture data as an RGB image T, and obtaining the depth map S corresponding to the image T;
constructing a feature matrix R(T, S) for each RoI region:
R(T, S) = [a(t) * b(s)]
where a(t) and b(s) denote the RGB data and the depth data, respectively;
setting a gate function G1 [its defining formula and weight parameter are rendered only as images in the original publication and are not reproduced here];
setting a gate function G2 to back up the regions erroneously discarded;
constructing a new feature map from the original feature map [formula rendered only as an image in the original; among its weight parameters, one term has the same form as the depth data b(s)];
filtering the data information to be retained [formula rendered only as an image in the original];
and combining the gated feature maps to obtain the output result P(i).
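Because the gate functions G1 and G2 appear only as images in the published claim, their exact form cannot be recovered; the sketch below shows one conventional way to implement learned sigmoid gates over paired RGB and depth RoI features, purely as an illustration of the keep-and-backup gating idea.

```python
import torch
import torch.nn as nn

class GatedRGBDFusion(nn.Module):
    """Illustrative gated fusion of per-RoI RGB and depth feature maps.
    The sigmoid gates stand in for the patent's G1/G2, whose exact
    formulas are published only as images."""

    def __init__(self, channels):
        super().__init__()
        self.gate_keep = nn.Conv2d(2 * channels, channels, kernel_size=1)    # plays the role of G1
        self.gate_backup = nn.Conv2d(2 * channels, channels, kernel_size=1)  # plays the role of G2

    def forward(self, rgb_feat, depth_feat):
        fused = torch.cat([rgb_feat, depth_feat], dim=1)   # analogue of R(T, S)
        g1 = torch.sigmoid(self.gate_keep(fused))          # which responses to keep
        g2 = torch.sigmoid(self.gate_backup(fused))        # backup gate for discarded regions
        kept = g1 * rgb_feat + (1.0 - g1) * depth_feat
        backup = g2 * (1.0 - g1) * rgb_feat                # recover erroneously discarded responses
        return kept + backup                               # analogue of the output P(i)
```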
8. The method of claim 7, wherein the segmented image data are used to identify personnel, dangerous objects and other targets, and the multi-modal prediction system operates in two steps: first, the same training data set Dtrain is used to train the model parameters of each prediction model; next, each prediction method uses the parameters found in the first step to make predictions on the data set to be predicted, DPredict-set; in this step, for each trajectory in D and each prediction time range of interest ΔT, each predictor generates an estimate of the future position [the symbol for this estimate is rendered only as an image in the original]; then the prediction result of each method is used to calculate a weight for each predicted value, and these weights define the rules by which the multi-modal prediction system selects which prediction methods to use;
for a single classifier module, supervised learning is first used to train the model parameters of each prediction method of the multi-modal prediction system; assuming n prediction methods are available, for each method i = 1...n, let Zi denote its vector of adjustable parameters, where mi is the number of adjustable parameters of method i; the possible values of the independent parameters are limited to a given discrete set, and the goal is to identify, for each method, the parameter assignment that minimizes the training error of that prediction method;
because the prediction accuracy may vary significantly with the prediction time horizon, a set H of discrete prediction time ranges of length k is considered; each time-range value is denoted hP, where P = 1...k; the aim is therefore to find, for each time range in H, the most suitable parameter assignment Zi; if the total training error of method i given time range hP, parameter assignment Zi and input data D is denoted Ei, the parameter assignment from the given discrete set that achieves the best performance for the method and time range is defined as follows [reconstructed from the surrounding definitions; the original renders the formula as an image]:
Zi*(hP) = argmin over Zi of Ei(hP, Zi, Dtrain)
thus, the training process yields k × n parameter assignments;
defining d as a single trajectory contained in D, and Td(i) as the set of time steps of trajectory d for which a prediction can be generated by predictor i, the training error Ei is defined as the average of the prediction errors over all trajectories and all such time steps [the original renders the formula as an image];
where the prediction of predictor i is a function of the parameter values Zi;
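The per-horizon parameter selection described above reduces to a grid search: for each predictor i and each horizon hP, evaluate the training error over the discrete parameter set and keep the minimizer, producing the k × n assignments. A minimal sketch, assuming 1-D position trajectories and a predictor interface predict(history, horizon, params); both are illustrative assumptions.

```python
import itertools

def select_parameters(predictors, param_grids, horizons, trajectories):
    """Grid search over the discrete parameter sets: for each predictor
    and each prediction horizon, keep the assignment minimizing the mean
    absolute prediction error on the training trajectories (k x n results).
    Assumes every trajectory is long enough to score each horizon."""
    best = {}
    for i, (predict, grid) in enumerate(zip(predictors, param_grids)):
        keys = sorted(grid)
        for h in horizons:
            best_err, best_params = float("inf"), None
            for values in itertools.product(*(grid[k] for k in keys)):
                params = dict(zip(keys, values))
                errs = [abs(predict(traj[:t], h, params) - traj[t + h])
                        for traj in trajectories
                        for t in range(1, len(traj) - h)]
                err = sum(errs) / len(errs)
                if err < best_err:
                    best_err, best_params = err, params
            best[(i, h)] = best_params
    return best
```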
fusing the classification algorithms: a plurality of classifiers are trained, and the predictions of each classifier model and predictor i are obtained; then, for objects of the same class, the results of the plurality of predictors are fused using an information fusion technique based on D-S evidence theory and the Dempster combination rule, and from the calculated results the object with the highest accuracy is selected as the fused target information to output the recognition result;
for the output results of the multiple classifiers, in D-S evidence theory let Θ = {θ1, θ2, ..., θn} denote the set of all possibilities in the rescue scene environment, where θi is a conclusion drawn by the system; the basic probability assignment function, the belief function and the plausibility function in D-S evidence theory are defined respectively as follows [formulas reconstructed from the standard definitions; the original renders them as images]:
m: 2Θ → [0, 1], with m(∅) = 0 and the masses of all subsets of Θ summing to 1 (3)
Bel(A) = Σ{B ⊆ A} m(B) (4)
Pl(A) = Σ{B ∩ A ≠ ∅} m(B) (5)
in equation (3), A is a hypothesis in the recognition framework and m(A) is the basic probability assignment; in equation (4), Bel(A) is the sum of the basic probability assignments of all subsets of A, and Pl(A) is the sum of the basic probability assignments of all subsets intersecting A; since the evidence sources are defined over the same recognition framework 2Θ → [0, 1], any conflict between them can be quantified using the Dempster combination rule; for all A ⊆ Θ with A ≠ ∅, and given n basic probability assignment functions m1, m2, ..., mn, the Dempster rule is calculated using equations (6) and (7):
m(A) = (1 / (1 − K)) Σ{A1 ∩ ... ∩ An = A} m1(A1) m2(A2) ... mn(An) (6)
K = Σ{A1 ∩ ... ∩ An = ∅} m1(A1) m2(A2) ... mn(An) (7)
where K represents the conflict measure of the belief functions;
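Equations (6) and (7) describe the standard n-ary Dempster rule; because the rule is associative and commutative, it can be applied by folding pairwise over the classifiers' basic probability assignments. A minimal sketch, with hypotheses represented as frozensets over Θ:

```python
from itertools import product

def dempster_combine(bpas):
    """Combine basic probability assignments m1..mn per equations (6)-(7).
    Each bpa maps frozenset hypotheses to masses summing to 1."""
    combined = bpas[0]
    for m in bpas[1:]:
        fused, conflict = {}, 0.0
        for (a, wa), (b, wb) in product(combined.items(), m.items()):
            inter = a & b
            if inter:
                fused[inter] = fused.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb  # K: mass falling on empty intersections
        if conflict >= 1.0:
            raise ValueError("total conflict; evidence cannot be combined")
        # Normalize by 1 - K, as in equation (6).
        combined = {h: w / (1.0 - conflict) for h, w in fused.items()}
    return combined
```

For example, two classifiers over {hazard, person} could contribute m1 = {frozenset({"hazard"}): 0.7, frozenset({"hazard", "person"}): 0.3} and m2 = {frozenset({"hazard"}): 0.6, frozenset({"person"}): 0.4}; folding pairwise gives the same result as the n-ary form in equation (6).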
while the objects in the on-site environment are being identified, the dangerous-goods grade Dlevel (0-1) of each object is given by an expert database.
9. The method as claimed in claim 8, wherein the path planning unit obtains the planned path into the rescue site based on the prior map and sends operation instructions to the chassis control system based on the planned path, so that the rescue robot enters the accident scene along the planned path according to the operation instructions, and a danger-index-based robot speed control method is adopted for the motion control of the robot;
the robot controller evaluates the danger index according to the recognition result of the dangerous objects and the depth information of the vision sensor; different dangerous objects present different danger indexes even when kept at the same distance from the robot; suppose the coordinate position of the central point (TCP) of the robot in the global coordinate system is given [symbol rendered only as an image in the original], its velocity is vtcp, and its braking time is Tb; if the edge coordinates of the ith dangerous object in the environment are given [symbol rendered only as an image in the original] with corresponding velocity vdanger, then the danger threshold is defined by equation (8), which is also the minimum distance between the robot and the ith dangerous object;
[equation (8) is rendered only as an image in the original and is not reproduced here]
where D0 is a tuning constant;
to ensure safety, the mobile robot should satisfy equation (9) at the next time step;
[equation (9) is rendered only as an image in the original]
equation (9) defines a dangerous area within the rescue area; when no dangerous object is present in the dangerous area, the robot may operate at maximum power; clearly, the danger threshold varies with the speed of the robot; defining the position of the robot in the global map as Rm and its velocity as vm, and combining the definition of the danger threshold, the danger threshold is calculated by equation (10):
[equation (10) is rendered only as an image in the original]
for an area containing i dangerous objects, the minimum separation between the robot and the dangerous objects should satisfy equation (11);
[equation (11) is rendered only as an image in the original]
the danger indexes of all dangerous objects in the area are calculated at the current robot position; by the definition above, the danger index of any dangerous object i in the dangerous area is given by equation (12);
[equation (12) is rendered only as an image in the original]
according to the danger index, the control of the running speed vc of the robot can be expressed as:
[the control law is rendered only as an image in the original]
where f(sdanger-slow) represents the relationship between the danger index and the robot velocity established in equations (8), (9), (10) and (11).
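Since equations (8)-(12) and the speed-control law are published only as images, the sketch below substitutes simple illustrative forms: a threshold that grows with closing speed, braking time and the expert-database hazard grade, a danger index that rises toward 1 as the robot nears the threshold, and a commanded speed scaled down by the worst index in the area. It demonstrates the control structure of the claim, not its exact equations.

```python
import math

def danger_threshold(v_robot, v_object, t_brake, d0, hazard_level):
    """Illustrative danger threshold: stopping distance plus a margin
    scaled by the expert-database hazard grade (0-1). Assumed form,
    not the patent's image-only equation (8)."""
    return d0 + (v_robot + v_object) * t_brake * (1.0 + hazard_level)

def danger_index(distance, threshold):
    """Illustrative danger index: near 0 far outside the threshold,
    saturating at 1 as the robot reaches the minimum distance."""
    return max(0.0, min(1.0, threshold / max(distance, 1e-6)))

def commanded_speed(v_max, robot_pos, v_robot, objects, t_brake, d0):
    """Scale the robot's speed by the worst danger index in the area.
    Each object is a dict with 'pos', 'vel' and 'hazard_level' keys."""
    worst = 0.0
    for obj in objects:
        dist = math.dist(robot_pos, obj["pos"])
        thr = danger_threshold(v_robot, obj["vel"], t_brake, d0,
                               obj["hazard_level"])
        worst = max(worst, danger_index(dist, thr))
    return v_max * (1.0 - worst)  # full stop when the index reaches 1
```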
10. A rescue robot-based on-site investigation and material supply system for implementing the method of any one of claims 1-9, the system comprising:
a rescue data acquisition unit, configured to acquire environmental data and real-time picture data of the accident scene and transmit them to the command center;
and a rescue instruction receiving unit, configured to receive the rescue strategy formulated by the command center according to the real-time picture data and complete on-site investigation and material supply based on the rescue strategy.
11. A terminal device, characterized in that the device comprises: a data acquisition unit, a processor and a memory;
the data acquisition unit is configured to acquire data; the memory is configured to store one or more program instructions; and the processor is configured to execute the one or more program instructions to perform the method of any one of claims 1-9.
12. A computer-readable storage medium having one or more program instructions embodied therein for performing the method of any of claims 1-9.
CN202110876153.2A 2021-07-30 2021-07-30 On-site investigation and material supply method, system and equipment based on rescue robot Active CN113532440B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110876153.2A CN113532440B (en) 2021-07-30 2021-07-30 On-site investigation and material supply method, system and equipment based on rescue robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110876153.2A CN113532440B (en) 2021-07-30 2021-07-30 On-site investigation and material supply method, system and equipment based on rescue robot

Publications (2)

Publication Number Publication Date
CN113532440A true CN113532440A (en) 2021-10-22
CN113532440B CN113532440B (en) 2025-01-17

Family

ID=78090006

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110876153.2A Active CN113532440B (en) 2021-07-30 2021-07-30 On-site investigation and material supply method, system and equipment based on rescue robot

Country Status (1)

Country Link
CN (1) CN113532440B (en)

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070156286A1 (en) * 2005-12-30 2007-07-05 Irobot Corporation Autonomous Mobile Robot
CN203449309U (en) * 2013-08-26 2014-02-26 吉林大学 After-calamity detecting rescue robot
US9844877B1 (en) * 2015-07-14 2017-12-19 X Development Llc Generating a parameter for a movement characteristic for a waypoint trained path of a robot
CN105944266A (en) * 2016-06-25 2016-09-21 公安部上海消防研究所 Small-sized reconnaissance robot for fire fighting
CN206664744U (en) * 2017-04-14 2017-11-24 武汉科技大学 A kind of post-disaster search and rescue intelligent vehicle
US20200241573A1 (en) * 2017-10-03 2020-07-30 Micware Co., Ltd. Route generation device, moving body, and program
CN108596382A (en) * 2018-04-18 2018-09-28 中国地质大学(武汉) Rescue path planing method based on a lot of points, point more to be rescued, multiple terminals
CN108759853A (en) * 2018-06-15 2018-11-06 浙江国自机器人技术有限公司 A kind of robot localization method, system, equipment and computer readable storage medium
CN109144062A (en) * 2018-08-22 2019-01-04 佛山科学技术学院 A kind of danger rescue robot paths planning method
WO2020077535A1 (en) * 2018-10-16 2020-04-23 深圳大学 Image semantic segmentation method, computer device, and storage medium
CN110673603A (en) * 2019-10-31 2020-01-10 郑州轻工业学院 A fire field autonomous navigation reconnaissance robot
CN111482972A (en) * 2020-03-19 2020-08-04 季华实验室 A fire extinguishing and disaster relief robot and system
CN112556709A (en) * 2020-09-29 2021-03-26 哈尔滨工程大学 Fire rescue robot, rescue assisting system and communication method thereof
CN112589811A (en) * 2020-12-07 2021-04-02 苏州阿甘机器人有限公司 Fire rescue robot and working method thereof
CN112929031A (en) * 2021-01-27 2021-06-08 江苏电子信息职业学院 Method for compressing and transmitting path information of strip-shaped autonomous rescue vehicle in dangerous environment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
WU Haibin et al.: "Safety Motion Planning of Robots Based on Minimizing Danger Index", Journal of Mechanical Engineering, vol. 51, no. 09, 30 May 2015 (2015-05-30), pages 18-27 *
WANG Hexu; XIE Fei: "Two-Dimensional Planning Model for Ship Rescue Material Transportation Paths Based on Ant Colony Algorithm", Ship Science and Technology, vol. 42, no. 10, 23 May 2020 (2020-05-23), pages 199-201 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114243701A (en) * 2022-01-27 2022-03-25 西安交通大学 A post-disaster maintenance dispatch method and system for the main network
CN114779795A (en) * 2022-06-21 2022-07-22 山东金宇信息科技集团有限公司 Accident dredging method, equipment and medium based on rail robot
CN114779795B (en) * 2022-06-21 2022-09-20 山东金宇信息科技集团有限公司 Accident dredging method, equipment and medium based on rail robot
CN115331116A (en) * 2022-10-13 2022-11-11 潍坊绘圆地理信息有限公司 On-board fuzzy reasoning method for accurate identification of ground targets based on multimodal data
CN116167729A (en) * 2023-04-26 2023-05-26 内江市感官密码科技有限公司 Campus patrol method, device, equipment and medium based on artificial intelligence
CN116167729B (en) * 2023-04-26 2023-06-27 内江市感官密码科技有限公司 Campus patrol method, device, equipment and medium based on artificial intelligence
CN117140534A (en) * 2023-10-27 2023-12-01 锐驰激光(深圳)有限公司 Control method of mining robot, mining robot and storage medium
CN117140534B (en) * 2023-10-27 2024-03-15 锐驰激光(深圳)有限公司 Control method of mining robot, mining robot and storage medium
CN118886580A (en) * 2024-07-22 2024-11-01 江苏苏亿盟智能科技有限公司 Cooperative control method and system for robots

Also Published As

Publication number Publication date
CN113532440B (en) 2025-01-17

Similar Documents

Publication Publication Date Title
CN113532440B (en) On-site investigation and material supply method, system and equipment based on rescue robot
US11682129B2 (en) Electronic device, system and method for determining a semantic grid of an environment of a vehicle
Cherubini et al. Autonomous visual navigation and laser-based moving obstacle avoidance
US8355818B2 (en) Robots, systems, and methods for hazard evaluation and visualization
Lei et al. Deep learning-based complete coverage path planning with re-joint and obstacle fusion paradigm
Sathyamoorthy et al. Convoi: Context-aware navigation using vision language models in outdoor and indoor environments
Alves et al. Localization and navigation of a mobile robot in an office-like environment
Ivanov et al. Software advances using n-agents wireless communication integration for optimization of surrounding recognition and robotic group dead reckoning
Harun et al. Sensor fusion technology for unmanned autonomous vehicles (UAV): A review of methods and applications
Barrera Mobile Robots Navigation
Shanmugavel et al. Collision avoidance and path planning of multiple UAVs using flyable paths in 3D
KR20200063879A (en) Method and system for coverage of multiple mobile robots of environment adaptation type time synchronization based on artificial intelligence
Fragoso et al. Dynamically feasible motion planning for micro air vehicles using an egocylinder
JP7659645B2 (en) Prediction and Planning for Mobile Robots
Valente et al. Evidential SLAM fusing 2D laser scanner and stereo camera
Baudoin et al. View-finder: robotics assistance to fire-fighting services and crisis management
Bui et al. A UAV exploration method by detecting multiple directions with deep learning
Sanyal et al. Asma: An adaptive safety margin algorithm for vision-language drone navigation via scene-aware control barrier functions
Coelho et al. Autonomous uav exploration and mapping in uncharted terrain through boundary-driven strategy
US20230410469A1 (en) Systems and methods for image classification using a neural network combined with a correlation structure
Mattar et al. Mobile robot intelligence based slam features learning and navigation
EP4024155B1 (en) Method, system and computer program product of control of unmanned aerial vehicles
Sriram et al. A hierarchical network for diverse trajectory proposals
Seel et al. Dueling Double Deep Q-Network for indoor exploration in factory environments with an unmanned aircraft system
Francis et al. Real-time multi-obstacle detection and tracking using a vision sensor for autonomous vehicle

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information
CB03 Change of inventor or designer information

Inventor after: Bu Xiangli

Inventor after: Li Guan

Inventor after: Wang Di

Inventor after: Liu Yongtao

Inventor after: Yu Teng

Inventor after: Li Renshi

Inventor before: Li Guan

Inventor before: Wang Di

Inventor before: Liu Yongtao

Inventor before: Bu Xiangli

Inventor before: Yu Teng

Inventor before: Li Renshi

GR01 Patent grant
GR01 Patent grant