Disclosure of Invention
The object of the present invention is to provide a rescue robot to at least partially solve the technical problems of the prior art. The purpose is realized by the following technical scheme:
the invention provides a rescue-robot-based method for on-site exploration, safety control and material supply, which comprises the following steps:
acquiring a planned path entering a rescue place based on a prior map, and sending an operation instruction to a chassis control system based on the planned path, so that the rescue robot enters the accident scene along the planned path according to the operation instruction;
and acquiring environmental data and real-time picture data of the accident site, and running a multi-modal prediction algorithm on an on-site edge computing unit to complete the site environment survey; then evaluating a danger index for each dangerous object according to the dangerous-object recognition result and the depth information of the visual sensor, and displaying the danger indexes in the real-time picture; and transmitting the environmental data and the real-time picture data to a command center;
the command center formulates a rescue strategy according to the real-time picture data, and determines the robot's movement targets and material supply based on the rescue strategy;
the field robot autonomously navigates to the target position according to the rescue task and a path-planning algorithm to deliver materials, adopts different traveling speeds according to the different danger indexes, and re-plans its path when a high danger is present.
further, the acquiring of a planned path entering a rescue place based on a prior map, and the sending of an operation instruction to a chassis control system based on the planned path so that the rescue robot enters the accident scene along the planned path according to the operation instruction, specifically includes:
establishing an optimal collision-free path from a starting point to a target point according to a prior map;
sending an operation instruction to a chassis control system based on the optimal collision-free path so that the rescue robot can enter an accident scene according to the operation instruction and the optimal collision-free path;
meanwhile, in the process of advancing to the accident site, dynamic obstacles and the like in the optimal collision-free path are avoided in real time.
Further, the establishing an optimal collision-free path from a starting point to a target point according to the prior map specifically includes:
in a pre-established three-dimensional prior map, marking a plurality of positions between a starting point and a target position where an accident scene is located at random according to actual rescue needs, and acquiring three-dimensional coordinate information corresponding to each position;
assigning sequence numbers to the obtained three-dimensional coordinates to order them, and determining, for each sequence number, the other sequence numbers connected to it;
and selecting the path with the shortest distance from all the listed paths meeting the conditions as the optimal collision-free path.
Further, the selecting a path with the shortest distance from all listed paths meeting the conditions as the optimal collision-free path specifically includes:
searching all paths between the starting point and the end point by using a global search algorithm;
removing the paths with a large number of steps;
and calculating the length of each path segment, accumulating the segment lengths, and selecting the shortest path as the optimal collision-free path.
Further, the receiving of the rescue strategy formulated by the command center according to the real-time picture data, and the completing of on-site exploration and material supply based on the rescue strategy, specifically comprises:
adding a control instruction containing the rescue strategy in a source code file of the voice instruction;
and receiving the voice command, and finishing on-site exploration, material inquiry, map information inquiry and rescue material taking and supplying based on a rescue strategy in the voice command.
Further, the acquiring environmental data and real-time image data of the accident scene further includes:
and carrying out instance segmentation of personnel, dangerous objects and the like on the acquired real-time picture data.
Further, the instance segmentation of personnel, dangerous objects and the like on the acquired real-time picture data specifically includes:
letting the acquired real-time picture data be an RGB image T, with a corresponding depth map S of the image T;
constructing a feature matrix R (T, S) of each RoI region:
R(T,S)=[a(t)*b(s)]
wherein a (t), b(s) respectively represent RGB and depth data;
setting a gate function G1 to extract judgment information such as instance boundaries from the depth map and to suppress noise regions, with an associated weight parameter;
setting a gate function G2 to back up the regions erroneously discarded by G1;
constructing a compensation feature map from the original feature map, whose weight parameter differs from that of G1 and whose dimensions are the same as those of the depth map b(s);
filtering out the data information to be retained;
and combining the outputs of G1 and G2 to obtain the output result P(i).
The segmented image data is used to identify personnel, dangerous objects and the like. A multi-classifier fusion prediction method is designed: the multi-modal prediction system operates in two steps. First, each prediction model is trained on the same data set D_train to obtain its model parameters. Next, each prediction method uses the parameters found in the first step to make predictions on the data set to be predicted, D_predict-set; in this step, for each trajectory in D and each prediction time horizon of interest ΔT, each predictor generates an estimate of the future position. The prediction results of each method are then used to calculate a weight for each predicted value, and these weights in turn define the rules by which the multi-prediction system selects which prediction methods should be used.
For a single classifier module, training is first performed by supervised learning to obtain the model parameters of each prediction method of the multi-modal prediction system. Assuming n prediction methods are available, for each method i = 1..n let m_i be the number of adjustable parameters of method i. The possible values of the independent parameters are limited to a given discrete set, and the goal is to identify, for each prediction method, a set of parameter assignments that minimises its training error.
Since prediction accuracy may vary significantly with the prediction time horizon, a set of discrete prediction time horizons of length k, denoted H, is considered. Each time-horizon value is denoted h_P, where P = 1..k. The aim is therefore to find, for each time horizon in H, the most suitable parameter-value assignment Z_i. The total training error of method i for time horizon h_P, parameter assignment Z_i and input data D is denoted E_i; the parameter assignment from the given discrete set achieving the best performance for that method and time horizon is selected, defined as follows. The training process thus yields k × n parameter assignments. Defining d as a single trajectory contained in D, and considering the set of time steps of trajectory d for which predictor i can generate a prediction, the training error E_i is defined as the average of the prediction errors over those time steps, where each prediction of predictor i is a function of the parameter values Z_i.
For the output results of the multiple classifiers, D-S evidence theory is applied. Let Θ = {θ_1, θ_2, …, θ_n} denote the set of all possible conclusions in the rescue-scene environment, where θ_i is a conclusion the system may draw. The basic probability assignment function, the belief function and the plausibility function of D-S evidence theory are defined as follows: in equation (3), A is a hypothesis in the frame of discernment and m(A) is the basic probability assignment function. In equation (4), Bel(A) is the sum of the basic probability assignments of all subsets of A, and Pl(A) is the sum of the basic probability assignments of all subsets intersecting A. Since the basic probability assignments are independent mappings 2^Θ → [0,1] on the common frame of discernment, any conflict can be quantified using Dempster's combination rule: for all A and given n probability assignment functions m_1, m_2, …, m_n, Dempster's rule is calculated by equations (6) and (7), where K represents the conflict measure between the belief functions.
The classification algorithms are then fused: a plurality of classifiers are trained to obtain classifier models, and an information fusion technique based on D-S evidence theory applies Dempster's combination rule to the classifiers' results for the same object; the result with the highest accuracy among the computed values is selected as the fused target information, and the identification result is output.
The invention also provides a rescue robot-based site investigation and material supply system for implementing the method as described above, the system comprising:
the path planning unit is used for acquiring a planned path entering a rescue place based on a prior map and sending an operation instruction to the chassis control system based on the planned path, so that the rescue robot enters the accident scene along the planned path according to the operation instruction;
the rescue data acquisition unit is used for acquiring environmental data and real-time image data of an accident scene and transmitting the environmental data and the real-time image data to the command center;
and the rescue instruction receiving unit is used for receiving a rescue strategy formulated by the command center according to the real-time image data and completing field investigation and material supply based on the rescue strategy.
The invention also provides a terminal device, the device comprises: the system comprises a data acquisition unit, a processor and a memory;
the data acquisition unit is used for acquiring data; the memory is to store one or more program instructions; the processor is configured to execute one or more program instructions to perform the method as described above.
The present invention also provides a computer readable storage medium having embodied therein one or more program instructions for executing the method as described above.
According to the rescue-robot-based site investigation and material supply method provided by the invention, a planned path entering a rescue place is obtained based on a prior map, and an operation instruction is sent to a chassis control system based on the planned path, so that the rescue robot enters the accident site along the planned path according to the operation instruction; environmental data and real-time picture data of the accident scene are acquired and transmitted to a command center; and a rescue strategy formulated by the command center according to the real-time picture data is received, and field investigation and material supply are completed based on the rescue strategy. The rescue robot realizes fixed-point transportation when the supply of rescue instruments, goods and medicines runs short during the rescue process; meanwhile, as a piece of special rescue equipment, the rescue robot can autonomously reach a plurality of designated positions to provide corresponding support and cooperate with rescuers to complete rescue tasks. As a professional rescue-supply robot, for goods needing special preservation the robot provides constant-temperature, refrigerated, sealed and similar conditions, ensuring that the related goods are not damaged or lost. Rescuers can also operate the robot by voice: if a specific item is needed, the corresponding storage compartment opens automatically once the instruction is given, ensuring the item can be taken immediately. The method therefore solves the technical problems of difficult on-site exploration and low material-supply efficiency of rescue robots in the prior art.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Aiming at problems such as a chaotic site, difficult surveying and difficult material supply, the rescue-robot-based on-site investigation and material supply method provided by the invention combines knowledge from artificial intelligence, simultaneous localization and mapping, voice recognition, mechanical engineering and other fields to solve the difficulty and untimeliness of on-site investigation and material supply in accident rescue.
The method is based on a rescue robot, and the robot mainly comprises a 3D laser sensor, an RGB-D image sensor, a nine-axis attitude sensor, an ultrasonic sensor, a thermal imager, a field controller, a touch screen, an intelligent storage cabinet, a chassis, an aluminum alloy shell and the like on hardware. In the aspect of structural design, a 3D laser sensor is carried at the top of the robot and used for collecting point cloud information in the surrounding environment in real time. And an RGB-D image sensor and a thermal imager are respectively carried at the front position and the rear position, and the RGB-D image sensor can be used for acquiring depth visual information and real-time picture information in the environment. The thermal imager is used for finding potential overheating hazards of trapped personnel, equipment and the like in an accident scene, and the command center can make an effective rescue scheme according to the collected related data by integrating the information.
In one embodiment, the present invention provides a rescue robot-based site exploration and material supply method, as shown in fig. 1, the method comprising the steps of:
s1: and acquiring a planned path entering a rescue place based on a prior map, and sending an operation instruction to a chassis control system based on the planned path so that the rescue robot enters an accident scene according to the planned path according to the operation instruction.
Specifically, in step S1, when planning a path through a prior map, an optimal collision-free path from a starting point to a target point is established according to the prior map; and then, sending an operation instruction to a chassis control system based on the optimal collision-free path so that the rescue robot can enter an accident scene according to the operation instruction and the optimal collision-free path. Meanwhile, in the process of advancing to the accident site, dynamic obstacles and the like in the optimal collision-free path are avoided in real time.
The establishing of an optimal collision-free path from a starting point to a target point according to the prior map specifically includes:
in a pre-established three-dimensional prior map, randomly marking a plurality of positions between a starting point and a target position where an accident site is located, and acquiring three-dimensional coordinate information corresponding to each position;
sequencing the obtained three-dimensional coordinates through the sequence numbers, and determining other sequence numbers connected with the sequence numbers in each sequence number;
The path with the shortest distance is selected from all the listed paths meeting the conditions as the optimal collision-free path. Specifically, all paths between the starting point and the end point are searched by a global search algorithm; the paths with a large number of steps are removed; the length of each path segment is calculated, and after accumulation the shortest path is selected as the optimal collision-free path.
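The enumerate-prune-accumulate procedure above can be sketched in Python. The dictionary-based waypoint graph, the `max_steps` cutoff and the function name are illustrative assumptions, not structures taken from the patent:

```python
from math import dist


def shortest_waypoint_path(coords, adjacency, start, goal, max_steps=10):
    """Enumerate all simple paths from start to goal over the waypoint
    graph (depth-first), discard those exceeding max_steps, and return
    the one with the smallest accumulated 3-D length."""
    best_path, best_len = None, float("inf")

    def walk(node, path, length):
        nonlocal best_path, best_len
        if length >= best_len or len(path) > max_steps:
            return  # prune: already longer than the best, or too many steps
        if node == goal:
            best_path, best_len = list(path), length
            return
        for nxt in adjacency[node]:
            if nxt not in path:  # keep the path simple
                path.append(nxt)
                walk(nxt, path, length + dist(coords[node], coords[nxt]))
                path.pop()

    walk(start, [start], 0.0)
    return best_path, best_len
```

For a small graph of four sequence-numbered positions, the function returns the waypoint sequence with the shortest accumulated Euclidean length, matching the "list all conditioned paths, pick the shortest" description.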
That is to say, in the route planning and navigation system, global route planning first establishes an optimal collision-free route from a starting point to a target point according to a prior map, and local route planning then avoids dynamic obstacles and the like in the route in real time. The most commonly used global path planning algorithms at present include the A* and Dijkstra algorithms, and local path planning algorithms include the TEB and DWA algorithms. On the basis of the Dijkstra and DWA algorithms, and aiming at the problem that existing autonomous mobile robots can only navigate autonomously between two target points, the invention realizes autonomous multi-target-point navigation through an improved path planning algorithm. The algorithm can set the robot's end point together with a number of other target points to be passed before reaching the end point. After optimization, the robot passes through all target points and reaches the end point in the shortest time.
The specific operation process can be divided into the following steps: in the established three-dimensional map, a plurality of positions are randomly marked according to specific conditions, and the positions can be rescue positions, intersections, corners and the like in the map and are uniformly distributed as much as possible. Because each position has corresponding three-dimensional coordinate information, the target positions can be sorted by adding sequence numbers according to the three-dimensional coordinates. Then, the serial number connected to each serial number is determined. Before the navigation function is started, the terminal and the rescue position to which the robot needs to go on the way are determined, and then the algorithm can automatically list all paths meeting the conditions and select a path with the shortest distance.
The specific implementation of the algorithm mainly comprises the following steps. In the path planning stage, a global search algorithm searches all paths between the starting point and the end point, the paths with a large number of steps are removed, the length of each path segment is calculated, and after accumulation the shortest path is selected as the final result. In the navigation stage, the algorithm of the move_base package in ROS is improved: a Python node is established, and through the callback mechanism of a subscriber, the callback function receives a pointer to the action message. The callback function is invoked after receiving the messages published on the topics cmd_vel, move_base/global_plan, move_base/goal and odom_rf2o. The robot then passes through each sequence-number area in turn along the planned path. After receiving the Twist messages published on the cmd_vel topic, the base controller node further controls the lower computer to drive the robot. If several positions need to be surveyed, or rescue materials must be delivered to several target positions, this effectively saves task time and improves rescue efficiency. When executing a reconnaissance task, to increase execution efficiency the robot plans the path from its current position to the target point of the next sequence-number area as soon as it comes within 10 m of that area, so that it keeps a uniform speed between sequence-number areas and rescue efficiency is ensured.
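The 10 m look-ahead replanning rule can be sketched outside ROS as a plain predicate; the function name and the tuple-based waypoint list are illustrative assumptions:

```python
from math import dist

REPLAN_RADIUS = 10.0  # metres; plan the next leg once within this range


def next_leg_trigger(position, waypoints, current_idx):
    """Return True when the robot is close enough to waypoint
    `current_idx` that the path to waypoint current_idx + 1 should be
    planned, keeping motion between sequence-number regions continuous."""
    if current_idx + 1 >= len(waypoints):
        return False  # final waypoint: no further leg to plan
    return dist(position, waypoints[current_idx]) <= REPLAN_RADIUS
```

In a ROS node this predicate would be evaluated inside the odometry callback, with the next goal forwarded to move_base when it fires.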
S2: acquiring environmental data and real-time picture data of an accident scene, and transmitting the environmental data and the real-time picture data to a command center; specifically, a control instruction containing the rescue strategy is added to a source code file of a voice instruction; and receiving the voice command, and finishing on-site exploration, material inquiry, map information inquiry and rescue material taking and supplying based on a rescue strategy in the voice command.
In actual use, robot actions are controlled through voice recognition: for example, the iFLYTEK voice SDK can be modified accordingly, with the corresponding control instructions added to the source code files according to the actual use case. Through the voice function, rescuers can directly control the robot, query materials and map information, take rescue materials from the storage compartments, and so on.
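A minimal sketch of how recognised transcripts might be mapped to robot actions after the SDK modification; the phrase table and action identifiers are hypothetical and are not part of the iFLYTEK API:

```python
# Hypothetical command table: phrase fragments -> robot action identifiers.
COMMANDS = {
    "query materials": "list_storage_contents",
    "open storage": "open_storage_grid",
    "show map": "display_map_info",
    "go to": "navigate_to_target",
}


def dispatch(transcript):
    """Match a recognised transcript against the command table and
    return the matching action identifier, or None if nothing matches."""
    text = transcript.lower().strip()
    for phrase, action in COMMANDS.items():
        if phrase in text:
            return action
    return None
```

The returned identifier would then be routed to the corresponding hardware routine (storage-grid relay, navigation goal, touchscreen display).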
S3: and receiving a rescue strategy formulated by the command center according to the real-time image data, and completing field investigation and material supply based on the rescue strategy.
In order to improve the picture quality, the acquiring environmental data and real-time picture data of the accident scene further includes:
and carrying out instance segmentation of personnel, dangerous objects and the like on the acquired real-time picture data.
The instance segmentation of personnel, dangerous objects and the like on the acquired real-time picture data specifically comprises the following steps:
setting the acquired real-time picture data as an RGB image T, and performing depth mapping S on the image T;
constructing a feature matrix R(T, S) for each RoI region:
R(T,S)=[a(t)*b(s)]
wherein a (t), b(s) respectively represent RGB and depth data;
setting a gate function G1 to extract judgment information such as instance boundaries from the depth map and to suppress noise regions, with an associated weight parameter;
setting a gate function G2 to back up the regions erroneously discarded by G1;
constructing a compensation feature map from the original feature map, whose weight parameter differs from that of G1 and whose dimensions are the same as those of the depth map b(s);
filtering out the data information to be retained;
and combining the outputs of G1 and G2 to obtain the output result P(i).
That is, in object recognition and segmentation, the invention performs instance segmentation of personnel, dangerous objects and the like using the RGB-D image data acquired by the depth vision sensor. This works well indoors, but in special scenes such as accident sites the depth image becomes blurred, the difference between two adjacent instance objects remains small, and even after depth normalization the instance segmentation effect is poor. To address these problems, we use a residual compensation mechanism and integrate it in an end-to-end manner into the widely used instance segmentation framework Mask R-CNN.
Specifically, the RGB image T and the corresponding depth map S are input in the structure, and the feature matrix R (T, S) of each RoI region is first constructed.
R(T,S)=[a(t)*b(s)]
In the formula, a(t) and b(s) respectively represent the RGB and depth data; the depth data is used to enhance the instance-segmentation effect. Since the depth vision sensor has a limited effective range when processing depth data, the original depth information may contain noise that would affect the final prediction, and a filtering process is required to optimise the information in the depth map. A gate function G1 is therefore set to extract judgment information such as instance boundaries from the depth map and to suppress noise regions.
In the formula, the associated weight parameter is applied. During this filtering process, however, some important regions may be erroneously discarded by G1; a compensation mechanism therefore gives the regions erroneously discarded by G1 the opportunity to be backed up in the next unit. A gate function G2 is set to back up the erroneously discarded regions, with 1 − G1 used to screen the important regions among them. G2 is constructed like G1 from the original feature map, differing only in its weight parameters, so as to achieve information integrity; its dimensions are the same as those of the depth map b(s). The data information to be retained is then filtered, and finally the outputs of G1 and G2 are combined to obtain the output result P(i).
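Because the gate formulas themselves are not reproduced in the text above, the following is an illustrative reconstruction of the two-gate residual compensation using sigmoid gates; the weight parameters `w1`, `w2` and the elementwise combination are assumptions, not the patent's exact formulation:

```python
import numpy as np


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def gated_depth_fusion(rgb_feat, depth_feat, w1, w2):
    """Sketch of the two-gate residual compensation: G1 passes reliable
    depth regions, and G2 re-admits (backs up) regions that 1 - G1 would
    have discarded, so instance-boundary cues are not lost."""
    g1 = sigmoid(w1 * depth_feat)           # gate 1: suppress depth noise
    kept = g1 * depth_feat                  # information G1 retains
    g2 = sigmoid(w2 * depth_feat)           # gate 2: same form, own weights
    backup = (1.0 - g1) * g2 * depth_feat   # recover wrongly discarded areas
    return rgb_feat * (kept + backup)       # fused output P(i)
```

In the actual network these gates would be learned convolutional layers inside the Mask R-CNN RoI head rather than scalar-weighted elementwise maps.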
The segmented image data is used to identify personnel, dangerous objects and the like. A multi-classifier fusion prediction method is designed: the multi-modal prediction system operates in two steps. First, each prediction model is trained on the same data set D_train to obtain its model parameters. Next, each prediction method uses the parameters found in the first step to make predictions on the data set to be predicted, D_predict-set; in this step, for each trajectory in D and each prediction time horizon of interest ΔT, each predictor generates an estimate of the future position. The prediction results of each method are then used to calculate a weight for each predicted value, and these weights in turn define the rules by which the multi-prediction system selects which prediction methods should be used.
For a single classifier module, training is first performed by supervised learning to obtain the model parameters of each prediction method of the multi-modal prediction system. Assuming n prediction methods are available, for each method i = 1..n let m_i be the number of adjustable parameters of method i. The possible values of the independent parameters are limited to a given discrete set, and the goal is to identify, for each prediction method, a set of parameter assignments that minimises its training error.
Since prediction accuracy may vary significantly with the prediction time horizon, a set of discrete prediction time horizons of length k, denoted H, is considered. Each time-horizon value is denoted h_P, where P = 1..k. The aim is therefore to find, for each time horizon in H, the most suitable parameter-value assignment Z_i. The total training error of method i for time horizon h_P, parameter assignment Z_i and input data D is denoted E_i; the parameter assignment from the given discrete set achieving the best performance for that method and time horizon is selected, defined as follows. The training process thus yields k × n parameter assignments. Defining d as a single trajectory contained in D, and considering the set of time steps of trajectory d for which predictor i can generate a prediction, the training error E_i is defined as the average of the prediction errors over those time steps, where each prediction of predictor i is a function of the parameter values Z_i.
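Under the stated assumptions (a discrete parameter grid per method and an evaluable training-error function E_i), the per-method, per-horizon assignment search can be sketched as an exhaustive grid search; the function name and callable error function are illustrative:

```python
def fit_assignments(methods, horizons, param_grid, error_fn):
    """For every (predictor i, horizon h_P) pair, try each parameter
    assignment from the method's discrete grid and keep the one with the
    lowest training error E_i, yielding the k x n assignment table Z."""
    Z = {}
    for i in methods:
        for h in horizons:
            Z[(i, h)] = min(param_grid[i], key=lambda z: error_fn(i, h, z))
    return Z
```

Because the grids are discrete and small by construction, the search is exact rather than approximate, matching the "limit the possible values to a given discrete set" formulation.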
For the output results of the multiple classifiers, D-S evidence theory is applied. Let Θ = {θ_1, θ_2, …, θ_n} denote the set of all possible conclusions in the rescue-scene environment, where θ_i is a conclusion the system may draw. The basic probability assignment function, the belief function and the plausibility function of D-S evidence theory are defined as follows: in equation (3), A is a hypothesis in the frame of discernment and m(A) is the basic probability assignment function. In equation (4), Bel(A) is the sum of the basic probability assignments of all subsets of A, and Pl(A) is the sum of the basic probability assignments of all subsets intersecting A. Since the basic probability assignments are independent mappings 2^Θ → [0,1] on the common frame of discernment, any conflict can be quantified using Dempster's combination rule: for all A and given n probability assignment functions m_1, m_2, …, m_n, Dempster's rule is calculated by equations (6) and (7), where K represents the conflict measure between the belief functions.
The classification algorithms are then fused: a plurality of classifiers are trained to obtain classifier models, and an information fusion technique based on D-S evidence theory applies Dempster's combination rule to the classifiers' results for the same object; the result with the highest accuracy among the computed values is selected as the fused target information, and the identification result is output.
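Dempster's combination rule referenced by equations (6) and (7) can be implemented directly. The sketch below combines two basic probability assignments whose focal elements are frozensets of hypotheses; applying it pairwise extends the combination to n classifiers:

```python
def dempster_combine(m1, m2):
    """Combine two basic probability assignments (dict: frozenset -> mass)
    with Dempster's rule: intersect focal elements, accumulate the mass of
    empty intersections as the conflict K, and renormalise by 1 - K."""
    combined, K = {}, 0.0
    for A, pa in m1.items():
        for B, pb in m2.items():
            C = A & B
            if C:
                combined[C] = combined.get(C, 0.0) + pa * pb
            else:
                K += pa * pb  # mass assigned to the empty set = conflict
    if K >= 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    return {C: v / (1.0 - K) for C, v in combined.items()}
```

For two classifiers that partly agree on hypothesis "a", the conflicting mass is discarded and the remaining masses are renormalised, so the fused assignment still sums to 1.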
Next, the overall scheme is described in combination with an actual rescue scene, as shown in fig. 2. When an accident such as a fire or an explosion occurs, the first things to do before rescue are to survey the internal environment of the accident site, detect hazards, and search for trapped people, so that rescue workers can be deployed reasonably and the trapped people evacuated safely. Before rescue workers enter, the robot first takes their place to gain a preliminary understanding of the accident: the command center remotely controls the robot into the scene, and a three-dimensional map of the accident site is established. The three-dimensional map is uploaded to the command center, which can then roughly grasp the overall situation of the accident from it.
Meanwhile, the RGB-D image sensors at the front end and the rear end of the robot collect real-time picture information on site. And (3) carrying out example segmentation on the real-time scene by using a deep learning algorithm, and identifying buildings, vehicles, personnel, flames, various devices and the like on the scene. And dangerous marks are carried out on inflammable, explosive and other dangerous objects, and the danger level is judged. The thermal imaging camera can display the thermal imaging picture on site in real time. By integrating the three-dimensional map information, the command center can specifically master the overall situation of the accident site.
In the process, the robot can effectively find the trapped person. When the trapped person is detected in the real-time picture, the command center can immediately arrange the rescue personnel for rescue. The shielded trapped people can be found in time through the thermal imaging graph, and in addition, some equipment which possibly has overheating hidden dangers can also be found and removed in time.
By utilizing the established three-dimensional map, the robot can realize autonomous navigation according to the map.
In the aspect of hazard danger-index evaluation, a danger-index evaluation method based on hazard type and distance is designed: the danger index is evaluated from the hazard recognition result and the depth information of the visual sensor. Different hazardous materials can exhibit different danger indices even while kept at the same distance from the robot. Suppose the coordinate of the robot's central point (TCP) in the global coordinate system is R_t, its velocity is v_tcp, and its braking time is T_b. If the coordinate of a hazard is H_o, with corresponding velocity v_danger, the danger threshold D_{t-d} may be defined as equation (8), which is also the minimum distance between the robot and the hazard:

D_{t-d} = (v_tcp + v_danger) · T_b (8)

To ensure safety, the robot should satisfy equation (9) in the next time step:

|H_o − R_t| > D_{t-d} (9)

Equation (9) defines the dangerous zone within the rescue area. When no hazard lies inside the dangerous zone, the robot can operate at maximum power. Clearly, D_{t-d} varies with the speed of the robot. Defining the position of the robot in the global map as R_m, with velocity v_m, and combining this with the definition of the danger threshold, the threshold can be computed by equation (10):

D_{t-d} = (v_m + v_danger) · T_b (10)

From this definition, the danger index s_danger of a hazard in the rescue area can be expressed as formula (11).

According to the danger index, the control of the robot's running speed v_c can be expressed as:

v_c = f(s_danger − s_slow)

where f(s_danger − s_slow) represents the relationship between the danger index and the robot speed given by equations (8), (9) and (10).
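The safety check of equation (9) and the speed back-off can be sketched as follows. The exact forms of formula (11) and of f(·) are not given in the text, so the per-type weights and the linear slow-down law below are assumptions for illustration only:

```python
import math

def danger_threshold(v_robot, v_hazard, t_brake):
    """Minimum allowed robot-hazard distance per equations (8)/(10):
    both robot and hazard may close the gap during the braking time."""
    return (abs(v_robot) + abs(v_hazard)) * t_brake

def is_safe(robot_pos, hazard_pos, v_robot, v_hazard, t_brake):
    """Safety condition of equation (9): |H_o - R_t| > D_{t-d}."""
    dist = math.dist(robot_pos, hazard_pos)
    return dist > danger_threshold(v_robot, v_hazard, t_brake)

# Hypothetical per-type weights: the patent states that different hazard
# types yield different indices at the same distance, but formula (11)
# is not given, so this linear form is an assumed illustration.
TYPE_WEIGHT = {"flammable": 0.8, "explosive": 1.0, "toxic": 0.6}

def danger_index(hazard_type, dist, v_robot, v_hazard, t_brake):
    """Index grows as the hazard approaches the threshold distance."""
    d_td = danger_threshold(v_robot, v_hazard, t_brake)
    return TYPE_WEIGHT[hazard_type] * d_td / max(dist, 1e-6)

def commanded_speed(s_danger, s_slow=0.5, v_max=1.5):
    """Assumed form of v_c = f(s_danger - s_slow): full speed below the
    slow-down index, linear back-off above it, never negative."""
    if s_danger <= s_slow:
        return v_max
    return max(0.0, v_max * (1.0 - (s_danger - s_slow)))
```

With this sketch, a hazard sitting exactly at the threshold distance gets index 1.0 for the highest-weight type, and the robot slows down (eventually to a stop) as the index rises past s_slow, matching the behavior described above.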
While rescuers work in real time, if an emergency occurs and rescue supplies must be arranged immediately, the command center only needs to mark the rescuers' positions in the three-dimensional map and the robot will travel to the target area autonomously. When it meets an obstacle on the way, the robot avoids it autonomously; if the road is blocked, it re-plans its path, ensuring the rescue task is completed smoothly. In special cases, the command center can dispatch the robot to several target areas at once; a multipoint positioning navigation algorithm ensures the robot reaches the target areas over the shortest path in the minimum time.
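The patent does not specify the multipoint positioning navigation algorithm; for the handful of target areas a command center would mark, a brute-force ordering illustrates the "shortest path through all targets" idea:

```python
import math
from itertools import permutations

def visit_order(start, targets):
    """Exhaustively search the shortest route that visits every target
    area once, starting from `start`. A small-scale stand-in for the
    patent's multipoint navigation (exact algorithm not specified);
    feasible because the command center marks only a few targets."""
    best, best_len = None, float("inf")
    for perm in permutations(targets):
        length, prev = 0.0, start
        for point in perm:
            length += math.dist(prev, point)
            prev = point
        if length < best_len:
            best, best_len = list(perm), length
    return best, best_len
```

For larger target sets a heuristic (e.g. nearest neighbor plus 2-opt) would replace the exhaustive search, but the interface stays the same.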
After the robot arrives at the target area, rescuers can talk with it by voice. Once a material request is made, the storage case opens automatically for quick retrieval, and the specific material information is shown on the touch screen. Rescuers can also check their own position on the touch screen, and when they want to move to the next rescue site or leave the scene, they can operate the robot to guide them there quickly. If a rescuer is unfamiliar with the operating procedure, the command center can operate the robot remotely to help complete the relevant operations.
After the task is finished, the robot automatically returns to the command center or performs a related patrol task, depending on the situation. For a patrol task, the command center marks several target areas in the three-dimensional map, and the robot autonomously plans an optimal path through all of them. During the patrol, the command center can use the robot's object recognition function to observe the accident site in real time; previously undiscovered trapped persons, dangerous articles and the like are reported back promptly, and the command center can act immediately on the feedback.
In the above specific embodiment, the rescue robot-based site exploration and material supply method provided by the invention obtains a planned path to a rescue site based on a prior map and sends an operation instruction to the chassis control system based on the planned path, so that the rescue robot, following the operation instruction, enters the accident site along the planned path; acquires environmental data and real-time picture data of the accident scene and transmits them to the command center; and receives the rescue strategy formulated by the command center according to the real-time picture data, completing site investigation and material supply based on that strategy. The rescue robot provides fixed-point transportation when rescue instruments, goods and medicines run short during a rescue; as special rescue equipment, it can also reach several designated positions independently, provide corresponding support, and cooperate with rescuers to complete rescue tasks. As a professional rescue supply robot, it offers constant-temperature, refrigerated, sealed and similar storage conditions for goods needing special preservation, ensuring the relevant goods are not spoiled or lost. Rescuers can also operate the robot by voice: once an instruction specifying an item is issued, the corresponding storage compartment opens automatically, so the item can be taken immediately. The technical problems of inefficient site exploration and material supply by rescue robots in the prior art are thereby solved.
In addition to the above method, the present invention also provides a rescue robot-based site exploration and material supply system for implementing the above method, which, in one embodiment, as shown in fig. 3, comprises:
the path planning unit 100 is configured to acquire a planned path entering a rescue site based on a prior map, and to send an operation instruction to the chassis control system based on the planned path, so that the rescue robot, following the operation instruction, enters the accident scene along the planned path;
the rescue data acquisition unit 200 is configured to acquire environmental data and real-time picture data of the accident scene, to sense the environmental data of the disaster scene in real time through a multi-classifier fusion algorithm, and to transmit the environmental data and real-time picture data to the command center;
and the rescue instruction receiving unit 300 is used for receiving a rescue strategy formulated by the command center according to the real-time image data and completing field investigation and material supply based on the rescue strategy.
In an actual use scenario, as shown in fig. 4, the hardware of the system comprises a field controller (upper computer) together with an RGB-D image sensor, a 3D laser sensor, a thermal imager, an ultrasonic sensor and a nine-axis attitude sensor, all communicatively connected to the field controller so that each sensor transmits its detected data to it. The field controller integrates the data and transmits them to the command center, receives the control instruction fed back by the command center, and on that basis controls a single-chip microcomputer board (lower computer), which drives motor I, motor II, motor III and motor IV through their respective driver boards.
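The upper-computer/lower-computer split can be sketched as follows; the frame layout, motor IDs and skid-steer mapping below are invented for illustration and are not part of the patent:

```python
def build_motor_frame(motor_id, speed):
    """Pack one motor command as a simple framed byte sequence:
    header, motor id, clamped speed byte, checksum. The format is a
    hypothetical example of what the MCU board might forward to a
    driver board."""
    if not 1 <= motor_id <= 4:
        raise ValueError("four drive motors: id must be 1..4")
    spd = max(-127, min(127, speed)) & 0xFF
    payload = bytes([0xAA, motor_id, spd])
    checksum = sum(payload) & 0xFF
    return payload + bytes([checksum])

def dispatch(command):
    """Translate one high-level motion command from the field controller
    into per-motor frames. Assumed skid-steer mapping: motors 1 and 3
    on the left side, motors 2 and 4 on the right."""
    v = command.get("speed", 0)
    turn = command.get("turn", 0)
    return [build_motor_frame(1, v + turn), build_motor_frame(2, v - turn),
            build_motor_frame(3, v + turn), build_motor_frame(4, v - turn)]
```

The point of the sketch is the fan-out: one command-center instruction becomes one frame per driver board, with the lower computer owning the electrical details.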
In the above specific embodiment, the rescue robot-based site survey and material supply system provided by the invention obtains a planned path to a rescue site based on a prior map and sends an operation instruction to the chassis control system based on the planned path, so that the rescue robot, following the operation instruction, enters the accident site along the planned path; acquires environmental data and real-time picture data of the accident scene and transmits them to the command center; and receives the rescue strategy formulated by the command center according to the real-time picture data, completing site investigation and material supply based on that strategy. The rescue robot provides fixed-point transportation when rescue instruments, goods and medicines run short during a rescue; as special rescue equipment, it can also reach several designated positions independently, provide corresponding support, and cooperate with rescuers to complete rescue tasks. As a professional rescue supply robot, it offers constant-temperature, refrigerated, sealed and similar storage conditions for goods needing special preservation, ensuring the relevant goods are not spoiled or lost. Rescuers can also operate the robot by voice: once an instruction specifying an item is issued, the corresponding storage compartment opens automatically, so the item can be taken immediately. The technical problems of inefficient site exploration and material supply by rescue robots in the prior art are thereby solved.
The invention also provides a terminal device, the device comprises: the system comprises a data acquisition unit, a processor and a memory;
the data acquisition unit is used for acquiring data; the memory is to store one or more program instructions; the processor is configured to execute one or more program instructions to perform the method as described above.
In correspondence with the above embodiments, embodiments of the present invention also provide a computer storage medium containing one or more program instructions therein, wherein the one or more program instructions are used by the above system to execute the method as described above.
It is to be understood that the terminology used herein is for the purpose of describing particular example embodiments only, and is not intended to be limiting. As used herein, the singular forms "a", "an" and "the" may be intended to include the plural forms as well, unless the context clearly indicates otherwise. The terms "comprises," "comprising," "including," and "having" are inclusive and therefore specify the presence of stated features, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, and/or groups thereof. The method steps, processes, and operations described herein are not to be construed as necessarily requiring their performance in the particular order described or illustrated, unless specifically identified as an order of performance. It should also be understood that additional or alternative steps may be used.
Although the terms first, second, third, etc. may be used herein to describe various elements, components, regions, layers and/or sections, these elements, components, regions, layers and/or sections should not be limited by these terms. These terms may be only used to distinguish one element, component, region, layer or section from another region, layer or section. Terms such as "first," "second," and other numerical terms when used herein do not imply a sequence or order unless clearly indicated by the context. Thus, a first element, component, region, layer or section discussed below could be termed a second element, component, region, layer or section without departing from the teachings of the example embodiments.
In an embodiment of the invention, the processor may be an integrated circuit chip having signal processing capability. The Processor may be a general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete Gate or transistor logic device, discrete hardware component.
The various methods, steps and logic blocks disclosed in the embodiments of the present invention may be implemented or performed. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in RAM, flash memory, ROM, PROM, EPROM, registers, or other storage media well known in the art. The processor reads the information in the storage medium and completes the steps of the method in combination with the hardware.
The storage medium may be a memory, for example, which may be volatile memory or nonvolatile memory, or which may include both volatile and nonvolatile memory.
The nonvolatile Memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash Memory.
The volatile Memory may be a Random Access Memory (RAM) which serves as an external cache. By way of example and not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and Direct Rambus RAM (DRRAM).
The storage media described in connection with the embodiments of the invention are intended to comprise, without being limited to, these and any other suitable types of memory.
Those skilled in the art will appreciate that the functionality described in the present invention may be implemented in a combination of hardware and software in one or more of the examples described above. When implemented in software, the corresponding functions may be stored on, or transmitted over, a computer-readable medium as one or more instructions or code. Computer-readable media include both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a general purpose or special purpose computer.
The above embodiments are only for illustrating the embodiments of the present invention and are not to be construed as limiting the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made on the basis of the embodiments of the present invention shall be included in the scope of the present invention.