CN113959446B - Autonomous logistics transportation navigation method for robot based on neural network - Google Patents
- Publication number
- CN113959446B (granted publication; application number CN202111222526.0A)
- Authority
- CN
- China
- Prior art keywords
- neural network
- robot
- layer
- data
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/3407—Route searching; Route guidance specially adapted for specific applications
- G01C21/343—Calculating itineraries, i.e. routes leading from a starting point to a series of categorical destinations using a global route restraint, round trips, touristic trips
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S13/00—Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
- G01S13/02—Systems using reflection of radio waves, e.g. primary radar systems; Analogous systems
- G01S13/06—Systems determining position data of a target
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The application discloses a neural-network-based autonomous logistics transportation navigation method for a robot, belonging to the field of robot navigation control. Its main design points are as follows: a plurality of laser sensors, a radar positioning device, and a gyroscope are mounted on the robot. The laser sensors detect the distribution of obstacles ahead. The radar positioning device and the gyroscope determine the end-point direction and the travel direction of the robot, respectively, and the difference between the two is taken as the robot's relative advancing direction. A plurality of black lines and squares representing obstacles are randomly combined into a maze-like indoor scene that simulates a real warehouse environment. Compared with traditional navigation algorithms, the method requires no environment model: the trained neural network, much like a human brain, makes decisions from perceived information and can cope fully with emergencies.
Description
Technical Field
The application relates to the field of robot transportation navigation, in particular to a robot autonomous logistics transportation navigation method based on a neural network.
Background
Autonomous navigation of a robot in the logistics industry requires that the robot be able to reach an end point from a start point independently, without colliding with any obstacle along the way.
Related studies are as follows:
CN113240331A discloses a whole-course intelligent logistics robot system comprising a robot automatic driving system, a user pickup-and-dispatch system, a server, and a logistics cabinet system. The robot automatic driving system controls the robot to drive while automatically avoiding obstacles and to transfer express items to the pickup position designated by the user pickup-and-dispatch system; the server handles information interaction between the user pickup-and-dispatch system and the robot automatic driving system.
CN107436610A discloses a navigation method and system for vehicles and carrying robots in an intelligent outdoor environment. Its key technical steps are: Step 1: divide the outdoor environment into building-to-building transportation areas according to the transportation tasks, with a dedicated unmanned aerial vehicle assigned to each area. Step 2: the outline of the unmanned aerial vehicle is recognized through Kinect. Step 3: the unmanned aerial vehicle guides the carrying robot according to the transportation task; the carrying robot tracks landmarks carried on the unmanned aerial vehicle in real time through its positioning sensor and follows it until reaching the task end point. Step 4: when an obstacle appears, the unmanned aerial vehicle and the carrying robot communicate; the carrying robot recognizes the obstacle outline through Kinect, calculates the maximum obstruction angle, and the unmanned aerial vehicle position is adjusted accordingly.
From the above prior art it can be seen that networked, autonomously navigating robots have great development potential: they can disrupt the traditional transportation industry and facilitate the operation of on-demand services and applications. Autonomous navigation capability is essential for an internet-connected mobile robot, since it is the basis for executing all kinds of instructions, and it has therefore attracted wide attention. Navigation strategies based on path planning fall into two main types according to the known environmental information. The first is global navigation, also called classical navigation, in which the surrounding environment is completely mapped before path planning is carried out; common strategies include the cell decomposition method, the roadmap method, and the artificial potential field method, which avoid obstacle positions while selecting as short a path as possible. However, the accuracy of this type of navigation depends on the accuracy of the environment model: if the model deviates substantially from the real scene, serious errors occur during navigation, and modeling the environment consumes considerable computing resources. In addition, the real environment usually changes dynamically, and a pre-planned path can hardly cope with emergencies arising in the environment.
Facing increasingly complex navigation tasks, researchers have proposed local navigation algorithms that better handle environmental uncertainty. Common examples include genetic algorithms, fuzzy logic, the firefly algorithm, and particle swarm optimization. These algorithms react in real time to dynamic changes in the environment, can search in unknown environments, and can realize collaborative path planning among networked multi-robot systems. Although local navigation algorithms are more intelligent, efficient, and easier to implement than classical methods, planning the movement path of a networked robot may demand a high computational load, which conflicts with the capability of existing devices: the microprocessor installed on a networked robot may not be able to find the correct path on the fly, making these methods unsuitable for low-cost vehicles.
Disclosure of Invention
The purpose of the application is to solve the autonomous navigation problem of mobile robots in logistics transportation scenarios from the angle of artificial intelligence, that is, to provide a neural-network-based autonomous logistics transportation navigation method for robots. Specifically, two kinds of information, the task scene and the corresponding human behavior, are collected through a self-built data acquisition platform; a suitable neural network is designed and its parameters are adjusted with the collected data so that it can replicate human behavior. The neural network model trained in this way makes correct navigation decisions according to the scene, with a relatively small real-time computational load.
The technical scheme of the application is as follows:
a robot autonomous logistics transportation navigation method based on a neural network comprises the following steps:
s1, building a data acquisition platform:
a plurality of laser sensors, radar positioning devices and gyroscopes are mounted on the robot; the laser sensor is used for detecting the distribution of the front obstacle; the radar positioning device and the gyroscope are respectively used for judging the end point direction and the running direction of the robot, and the difference value of the end point direction and the running direction is expressed as the relative advancing direction of the robot;
a plurality of black lines and squares representing obstacles are randomly combined to form a maze-like indoor scene, simulating a real warehouse environment;
s2, collecting human data:
controlling the movement of the robot through a computer keyboard; meanwhile, recording the distance information of each laser beam, the angle information of the robot relative to the advancing direction, and the human behavior information entered through the keyboard;
s3, building a neural network
Building a suitable neural network according to the input and output information dimensions; the related parameters include the number of layers, the connection mode, the activation function, the loss function, the training data set capacity, the mini-batch capacity, the learning rate, and the total number of training epochs;
s4, training a neural network
Training the neural network by taking the data collected in the step S2 as a training data set; the distance information of each beam of laser detection and the angle information of the robot relative to the advancing direction are used as input layers, and the output layers are human behavior information.
And S5, loading the trained neural network to the robot.
Further, in step S2, the distance information of each laser beam, the angle information of the robot relative to the advancing direction, and the keyboard-controlled human behavior information are recorded, specifically as follows:
the number of laser sensors is n, and the distance information data matrix D recorded at times t0 ... tm is D = [d_{h,td}] (h = 1 ... n; td = t0 ... tm),
wherein d_{h,td} represents the distance measured by the h-th laser sensor at time td;
the matrix V of angle information of the robot relative to the forward direction is V = [V_{t0} ... V_{tm}],
wherein V_{td} denotes the angle of the robot's relative advancing direction at time td;
the matrix AT of keyboard-controlled human behavior information is AT = [at_{t0} ... at_{tm}],
wherein at_{td} can be a single numerical value or a vector, corresponding to the human behavior information at time td.
Further, the robot has p wheels, and the matrix of human behavior information is AT = [at_{f,td}] (f = 1 ... p; td = t0 ... tm),
wherein at_{f,td} represents the motion command state of the f-th wheel at time td;
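As an illustration of the data layout defined above, the sketch below assembles logged samples into the matrices D, V, and AT with NumPy. The sample values and the number of time steps are invented placeholders; only the shapes follow the definitions in the text.

```python
import numpy as np

n, T, p = 17, 5, 3  # sensors, recorded time steps t0..t4, wheels (n and p per the description)

rng = np.random.default_rng(0)
# One logged sample per time step: n laser distances, one relative angle, p wheel commands.
samples = [(rng.uniform(0.1, 5.0, n), rng.uniform(-180.0, 180.0), rng.integers(0, 2, p))
           for _ in range(T)]

# D[h, td]: distance measured by the h-th laser sensor at time td.
D = np.stack([s[0] for s in samples], axis=1)
# V[td]: angle of the robot's relative advancing direction at time td.
V = np.array([s[1] for s in samples])
# AT[f, td]: motion command state of the f-th wheel at time td.
AT = np.stack([s[2] for s in samples], axis=1)
```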
further, in step S4, the neural network is trained:
D. v is used as input layer data, AT is used as output layer data, and the neural network is trained:
namely: d, d 1,td ......d h,td ......d n,td ,V td (n+1) dataAt as input layer data 1, td ...at f,td ...at p,td (p data) as output layer data.
Further, n = 17, p = 3; the neural network built in S3 is a 7-layer neural network with node counts r1 = 18, r2 = 64, r3 = 128, r4 = 64, r5 = 32, r6 = 8, and r7 = 3 (corresponding to the 3 wheels), where the first layer is the input layer and the seventh layer is the output layer; the 5 hidden layers are fully connected with the RReLU() activation function, and the output layer uses the Softmax() function.
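A minimal NumPy sketch of a forward pass through the 18-64-128-64-32-8-3 network described above. The random weights are placeholders, and a fixed-slope leaky ReLU stands in for RReLU (RReLU reduces to this form at inference time); the sketch illustrates the layer structure, not the patented implementation.

```python
import numpy as np

def leaky_relu(x, slope=0.125):
    """Fixed-slope stand-in for RReLU (RReLU behaves like this at inference)."""
    return np.where(x > 0.0, x, slope * x)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

layer_sizes = [18, 64, 128, 64, 32, 8, 3]  # r1..r7 from the text

rng = np.random.default_rng(42)
weights = [rng.normal(0.0, 0.1, (m, k)) for m, k in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(k) for k in layer_sizes[1:]]

def forward(x):
    """Fully connected pass: 5 hidden layers with the stand-in activation, softmax output."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = leaky_relu(x @ W + b)
    return softmax(x @ weights[-1] + biases[-1])

probs = forward(rng.uniform(0.0, 1.0, 18))  # 17 distances plus 1 relative angle
```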
Further, training data set capacity C = 640, mini-batch capacity n_batch = 10, learning rate lr = 0.0008, total training epochs n_epoch = 1000.
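With these values, one epoch draws the 640 training samples in mini-batches of 10, i.e. 64 weight updates per epoch. A small sketch of the shuffled mini-batch iteration; the helper name is illustrative, not from the original.

```python
import numpy as np

C, n_batch, lr, n_epoch = 640, 10, 0.0008, 1000  # hyperparameters from the text

def iterate_minibatches(num_samples, batch_size, rng):
    """Yield shuffled index batches until every sample is drawn once (one epoch)."""
    order = rng.permutation(num_samples)
    for start in range(0, num_samples, batch_size):
        yield order[start:start + batch_size]

rng = np.random.default_rng(0)
batches = list(iterate_minibatches(C, n_batch, rng))  # 640 / 10 = 64 batches per epoch
```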
Further, the number of laser sensors is n; the distances detected by the n laser beams together with the relative advancing angle form an (n+1)-dimensional vector used as the input of the neural network. Human behaviors are represented by the integers 0, 1, and 2 (0 for left turn, 1 for forward, 2 for right turn). The errors between the output values and the label values are calculated batch by batch, and the weights and neuron thresholds of the neural network are adjusted through error backpropagation until all the data have been drawn. If the neural network cannot converge, return to step S3 and change the neural network parameters until it can converge.
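The batch error calculation above can be illustrated with the cross-entropy loss that the embodiment later names. The probabilities below are invented network outputs for a batch of four samples, not measured data.

```python
import numpy as np

# Behavior labels as in the text: 0 = left turn, 1 = forward, 2 = right turn.
labels = np.array([0, 1, 2, 1])

# Hypothetical softmax outputs for the batch (one row per sample).
probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1],
                  [0.2, 0.2, 0.6],
                  [0.3, 0.4, 0.3]])

def cross_entropy(probs, labels):
    """Mean negative log-probability of the true label, the error backpropagated in S4."""
    return -np.mean(np.log(probs[np.arange(len(labels)), labels]))

loss = cross_entropy(probs, labels)
```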
Further, between step S4 and step S5, the method further comprises testing the trained neural network: data different from that used in step S4 serve as a test data set. After each training round, the training data set and the test data set are fed into the neural network, and the outputs are compared with the human behavior data to obtain the accuracy with which the neural network replicates human behavior. If the accuracy exceeds the ideal value, the trained neural network is saved; if the accuracy is low, return to step S3 to adjust the neural network parameters.
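A sketch, under assumed outputs, of the replication-accuracy test described above. The 0.85 threshold follows the 85% figure given in the beneficial-effects section, and the five predictions are invented for illustration; with these values the check fails, which would trigger the return to step S3.

```python
import numpy as np

def replication_accuracy(pred_probs, human_labels):
    """Fraction of samples where the network's argmax matches the recorded human action."""
    return float(np.mean(np.argmax(pred_probs, axis=1) == human_labels))

# Hypothetical network outputs for five test samples, and the recorded human actions.
pred = np.array([[0.8, 0.1, 0.1],
                 [0.2, 0.7, 0.1],
                 [0.1, 0.1, 0.8],
                 [0.5, 0.4, 0.1],
                 [0.3, 0.3, 0.4]])
human = np.array([0, 1, 2, 1, 2])

acc = replication_accuracy(pred, human)
IDEAL = 0.85  # assumed threshold, taken from the 85% accuracy cited in the description
save_network = acc >= IDEAL  # False here: parameters would be adjusted (back to S3)
```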
The beneficial effects of this application lie in:
First, the application addresses the above navigation problem from an artificial intelligence perspective, using collected historical information to train a neural network so that it can replicate human behavior and autonomously make correct decisions. The method can be applied in an intelligent warehouse so that an internet-connected mobile robot can safely transport goods to a designated position.
Second, the neural-network-based autonomous logistics transportation navigation method provided by the application can replicate human behavior with an accuracy as high as 85%, verified by repeated tests. Since the same robot state may correspond to several admissible human behaviors, the neural network output deviates slightly, but this does not prevent the robot from successfully completing the navigation task in practice. The effective accuracy of the method is therefore above 85%, which is higher than that of some other navigation methods in the same scene.
Third, outputting action information in real time from scene information normally requires a large computational load, but the application separates training from execution, effectively reducing the demand on computing capability. Information is first collected through human operation; each group of information is independent, with low requirements on real-time performance and action continuity. The neural network is then trained on a computer with the collected information, so the robot does not need its own computing module: the trained neural network can be used directly after being loaded onto the robot.
Drawings
The present application is described in further detail below in conjunction with the embodiments in the drawings, but is not to be construed as limiting the present application in any way.
Fig. 1 is a design drawing of a mobile robot model of the present application.
Fig. 2 is a design drawing of an indoor navigation scene model of the present application.
Fig. 3 is a structural diagram of a neural network of the present application.
Fig. 4 is a schematic diagram of a robotic collision warning of the present application.
Fig. 5 is a schematic diagram of a robot arrival endpoint prompt of the present application.
Fig. 6 is a schematic diagram of a mobile robot movement path case of the present application.
Detailed Description
Embodiment 1: a neural-network-based autonomous logistics transportation navigation method for a robot includes the following steps:
the first step: the simulation platform is used for simulating a real scene and comprises a mobile robot and an obstacle.
The mobile robot is equipped with laser sensors, a radar positioning device, and a gyroscope. As shown in fig. 1, 17 rays represent the 17 laser beams, and the arrow points in the end-point direction.
For the obstacles, as shown in fig. 2, a plurality of black lines and squares (other representations may also be used) are randomly combined to form a maze-like indoor scene, simulating the narrow, winding L-shaped passages that frequently occur in warehouse environments.
Second step: collect human data; a human subject selects the robot's movement posture in different environments by pressing the corresponding direction keys on a mechanical keyboard.
The movement data of the robot at each time step of the current round is saved, including the detection distances of the laser beams, the distance to the end point, the robot's relative advancing angle, and the action taken in the current state (e.g., forward, in-place left turn, in-place right turn).
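The second-step logging loop can be sketched as follows. This is a minimal Python illustration; the three read functions are hypothetical stand-ins, since the text does not specify the platform's sensor or keyboard interfaces.

```python
import random

# Hypothetical stand-ins for the real interfaces; the platform's actual
# sensor and keyboard APIs are not specified in the text.
def read_laser_distances(n=17):
    """Distances returned by the n laser beams (dummy values here)."""
    return [random.uniform(0.1, 5.0) for _ in range(n)]

def read_relative_heading():
    """End-point direction minus travel direction, in degrees."""
    return random.uniform(-180.0, 180.0)

def read_key_action():
    """Human key press: 0 = left turn, 1 = forward, 2 = right turn."""
    return random.choice([0, 1, 2])

def collect_round(steps):
    """Save one (distances, relative angle, human action) tuple per time step."""
    log = []
    for _ in range(steps):
        log.append((read_laser_distances(), read_relative_heading(), read_key_action()))
    return log

round_log = collect_round(100)
```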
Third step: build the neural network and select its structure, as shown in fig. 3 (neural network structure design belongs to the prior art and is not described further).
Fourth step: randomly extract C samples from the data collected in the second step as a training data set and train the neural network. The distances from the n laser beams and the relative advancing angle form an (n+1)-dimensional vector used as the network input; the human behavior serves as the label, represented by an integer such as 0, 1, or 2. Each time, n_batch samples are drawn, the errors between the output values and the label values are calculated and expressed by the loss function, and the weights and neuron thresholds of the neural network are adjusted through error backpropagation until all the data have been drawn. If the neural network cannot converge, return to the third step and modify the neural network parameters until convergence is possible. A 7-layer neural network can be selected (r1 = 18, r2 = 64, r3 = 128, r4 = 64, r5 = 32, r6 = 8, r7 = 3), fully connected, with the RReLU() activation function on the 5 hidden layers and the Softmax() function on the output layer. Specific parameter settings: cross-entropy loss function, training data set capacity C = 640, mini-batch capacity n_batch = 10, learning rate lr = 0.0008, total training epochs n_epoch = 1000. Experiments show that these parameter settings satisfy the requirements on the neural network training results.
Fifth step: after each training round, the training data set and the test data set are fed into the neural network, and the outputs are compared with the human behavior data to obtain the accuracy with which the network replicates human behavior. If the accuracy exceeds the ideal value, the trained neural network is saved; if the accuracy is low, return to the third step to adjust the neural network parameters.
Sixth step: the robot motion simulation platform also supports navigation algorithm testing, replacing manual operation with the algorithm output (i.e., the neural network), and the platform clearly shows the running state of the robot. If a collision occurs during movement, the interface displays the words "Collision Warning", as shown in fig. 4. When the robot successfully reaches the end point, the system prompts "Task Complete", and the distance to the end point is always displayed in the navigation interface, as shown in fig. 5. The map interface in the platform presents the complete path the robot has traveled. Repeated tests of the trained neural network on the data acquisition platform show that the distance information decreases continuously and no warning information appears; after the task is completed, the robot's movement path can be seen to be smooth and relatively short. Various movement paths of the robot are shown in fig. 6.
The above examples are preferred embodiments of the present application, given for convenience of explanation and not as limitations. Any person of ordinary skill in the art may make local changes or modifications using the technical content disclosed above without departing from the technical features of the present application, and all such embodiments still fall within the scope of the technical features of the present application.
Claims (1)
1. The autonomous logistics transportation navigation method of the robot based on the neural network is characterized by comprising the following steps of:
s1, building a data acquisition platform:
n laser sensors, a radar positioning device and a gyroscope are mounted on the robot; the laser sensor is used for detecting the distribution of the front obstacle; the radar positioning device and the gyroscope are respectively used for determining the advancing direction of the robot;
constructing an indoor scene;
s2, collecting human data:
controlling the movement of the robot through a computer;
in the process, the distance information of each beam of laser detection, the angle information of the robot relative to the advancing direction and the human behavior information controlled by a keyboard are recorded simultaneously;
in step S2, the distance information of each laser beam, the angle information of the robot relative to the advancing direction, and the keyboard-controlled human behavior information are recorded, specifically as follows:
the number of laser sensors is n, and the distance information data matrix D recorded at times t0 ... tm is D = [d_{h,td}] (h = 1 ... n; td = t0 ... tm),
wherein d_{h,td} represents the distance measured by the h-th laser sensor at time td;
the matrix V of angle information of the robot relative to the forward direction is V = [V_{t0} ... V_{tm}],
wherein V_{td} denotes the angle of the robot's relative advancing direction at time td;
the matrix AT of keyboard-controlled human behavior information is AT = [at_{t0} ... at_{tm}],
wherein at_{td} is a single numerical value or a vector, corresponding to the human behavior information at time td;
the robot has p wheels, and the matrix of human behavior information is AT = [at_{f,td}] (f = 1 ... p; td = t0 ... tm),
wherein at_{f,td} represents the motion command state of the f-th wheel at time td;
s3, building a neural network;
s4, training a neural network: training the neural network by taking the data collected in the step S2 as a training data set; the distance information of each beam of laser detection and the angle information of the robot relative to the advancing direction are used as input layers, and the output layers are human behavior information;
in step S4, the neural network is trained: D and V are used as input layer data and AT as output layer data, namely:
d_{1,td}, ..., d_{h,td}, ..., d_{n,td}, V_{td} as input layer data, and at_{1,td}, ..., at_{f,td}, ..., at_{p,td} as output layer data;
n = 11, p = 3; the neural network built in S3 is a 7-layer neural network with node counts r1 = 18, r2 = 64, r3 = 128, r4 = 64, r5 = 32, r6 = 8, and r7 = 3, where the first layer is the input layer and the seventh layer is the output layer; the 5 hidden layers are fully connected with the RReLU() activation function, and the output layer uses the Softmax() function;
s5, loading the trained neural network onto a robot, wherein the robot carries out autonomous logistics transportation navigation based on the loaded neural network;
training data set capacity C = 640, mini-batch capacity n_batch = 10, learning rate lr = 0.0008, total training epochs n_epoch = 1000;
the number of laser sensors is n, and the distances detected by the n laser beams together with the relative advancing angle form an n-dimensional vector used as the input of the neural network;
between step S4 and step S5, the method further comprises testing the trained neural network: data different from that used in step S4 serve as a test data set; after each training round, the training data set and the test data set are fed into the neural network, and the outputs are compared with the human behavior data to obtain the accuracy with which the neural network replicates human behavior; if the accuracy exceeds an ideal value, the trained neural network is saved; if the accuracy is low, return to step S3 to adjust the neural network parameters.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111222526.0A CN113959446B (en) | 2021-10-20 | 2021-10-20 | Autonomous logistics transportation navigation method for robot based on neural network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111222526.0A CN113959446B (en) | 2021-10-20 | 2021-10-20 | Autonomous logistics transportation navigation method for robot based on neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113959446A CN113959446A (en) | 2022-01-21 |
CN113959446B true CN113959446B (en) | 2024-01-23 |
Family
ID=79464911
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111222526.0A Active CN113959446B (en) | 2021-10-20 | 2021-10-20 | Autonomous logistics transportation navigation method for robot based on neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113959446B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116048018B (en) * | 2022-12-23 | 2024-06-21 | 深圳优地科技有限公司 | Cross-building scheduling method and device for robot, terminal equipment and storage medium |
CN118408551B (en) * | 2024-06-28 | 2024-10-29 | 张家港江苏科技大学产业技术研究院 | Unmanned aerial vehicle navigation method and system based on laser signal navigator |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7211980B1 (en) * | 2006-07-05 | 2007-05-01 | Battelle Energy Alliance, Llc | Robotic follow system and method |
CN104777839A (en) * | 2015-04-16 | 2015-07-15 | 北京工业大学 | BP neural network and distance information-based robot autonomous obstacle avoiding method |
WO2018117872A1 (en) * | 2016-12-25 | 2018-06-28 | Baomar Haitham | The intelligent autopilot system |
US10032111B1 (en) * | 2017-02-16 | 2018-07-24 | Rockwell Collins, Inc. | Systems and methods for machine learning of pilot behavior |
CN110632931A (en) * | 2019-10-09 | 2019-12-31 | 哈尔滨工程大学 | Mobile robot collision avoidance planning method based on deep reinforcement learning in dynamic environment |
WO2021101561A1 (en) * | 2019-11-22 | 2021-05-27 | Siemens Aktiengesellschaft | Sensor-based construction of complex scenes for autonomous machines |
CN112857370A (en) * | 2021-01-07 | 2021-05-28 | 北京大学 | Robot map-free navigation method based on time sequence information modeling |
CN112873211A (en) * | 2021-02-24 | 2021-06-01 | 清华大学 | Robot man-machine interaction method |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10621448B2 (en) * | 2017-08-02 | 2020-04-14 | Wing Aviation Llc | Systems and methods for determining path confidence for unmanned vehicles |
US10739775B2 (en) * | 2017-10-28 | 2020-08-11 | Tusimple, Inc. | System and method for real world autonomous vehicle trajectory simulation |
US11740630B2 (en) * | 2018-06-12 | 2023-08-29 | Skydio, Inc. | Fitness and sports applications for an autonomous unmanned aerial vehicle |
US20210165413A1 (en) * | 2018-07-26 | 2021-06-03 | Postmates Inc. | Safe traversable area estimation in unstructured free-space using deep convolutional neural network |
US12077190B2 (en) * | 2020-05-18 | 2024-09-03 | Nvidia Corporation | Efficient safety aware path selection and planning for autonomous machine applications |
2021-10-20 | CN application CN202111222526.0A filed; granted as patent CN113959446B (Active)
Non-Patent Citations (3)
Title |
---|
Liu Shuang; Zhu Guodong. "Robot teleoperation method based on operator performance." Robot (机器人), No. 04, 2018. * |
Li Shoutao; Li Yuanchun. "Mobile robot control algorithm based on hierarchical fuzzy behaviors in unknown environments." Journal of Jilin University (Engineering and Technology Edition) (吉林大学学报(工学版)), No. 04. * |
Hu Jingbo; Chen Dingfang; Wu Junfeng; Mei Jie; Li Bo. "Research on autonomous obstacle avoidance of mobile robots based on an improved fuzzy algorithm." Automation & Instrumentation (自动化与仪表), No. 06, 2018. * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111780777B (en) | Unmanned vehicle route planning method based on improved A-star algorithm and deep reinforcement learning | |
CN110989576B (en) | Target following and dynamic obstacle avoidance control method for differential slip steering vehicle | |
Zhang et al. | 2D Lidar‐Based SLAM and Path Planning for Indoor Rescue Using Mobile Robots | |
Faust et al. | Prm-rl: Long-range robotic navigation tasks by combining reinforcement learning and sampling-based planning | |
Tai et al. | Towards cognitive exploration through deep reinforcement learning for mobile robots | |
Ross et al. | Learning monocular reactive uav control in cluttered natural environments | |
Grigorescu et al. | Neurotrajectory: A neuroevolutionary approach to local state trajectory learning for autonomous vehicles | |
Botteghi et al. | On reward shaping for mobile robot navigation: A reinforcement learning and SLAM based approach | |
CN113848974B (en) | Aircraft trajectory planning method and system based on deep reinforcement learning | |
Chen et al. | Robot navigation with map-based deep reinforcement learning | |
El Ferik et al. | A Behavioral Adaptive Fuzzy controller of multi robots in a cluster space | |
CN113959446B (en) | Autonomous logistics transportation navigation method for robot based on neural network | |
Guo et al. | A fusion method of local path planning for mobile robots based on LSTM neural network and reinforcement learning | |
CN116679719A (en) | Unmanned vehicle self-adaptive path planning method based on dynamic window method and near-end strategy | |
Al Dabooni et al. | Heuristic dynamic programming for mobile robot path planning based on Dyna approach | |
CN113485323B (en) | Flexible formation method for cascading multiple mobile robots | |
Chen et al. | Deep reinforcement learning of map-based obstacle avoidance for mobile robot navigation | |
Liang et al. | Multi-UAV autonomous collision avoidance based on PPO-GIC algorithm with CNN–LSTM fusion network | |
Rasib et al. | Are Self‐Driving Vehicles Ready to Launch? An Insight into Steering Control in Autonomous Self‐Driving Vehicles | |
Sun et al. | Event-triggered reconfigurable reinforcement learning motion-planning approach for mobile robot in unknown dynamic environments | |
Wu et al. | UAV Path Planning Based on Multicritic‐Delayed Deep Deterministic Policy Gradient | |
Cheng et al. | A cross-platform deep reinforcement learning model for autonomous navigation without global information in different scenes | |
Nayak et al. | A heuristic-guided dynamical multi-rover motion planning framework for planetary surface missions | |
Huang et al. | An autonomous UAV navigation system for unknown flight environment | |
Xu et al. | Automated labeling for robotic autonomous navigation through multi-sensory semi-supervised learning on big data |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||