
CN114952839A - Cloud edge cooperation-based two-stage robot motion decision technology framework (Google Patents)

Info

Publication number: CN114952839A (application CN202210584920.7A)
Authority: CN (China)
Prior art keywords: robot, cloud, edge, motion, stage
Legal status: Granted; currently active
Other languages: Chinese (zh)
Other versions: CN114952839B
Inventors: 郭鹏, 王梓鹏, 汪世杰, 史海超, 张笑菀, 汪健强
Current Assignee: Southwest Jiaotong University
Original Assignee: Southwest Jiaotong University
Application filed by Southwest Jiaotong University
Granted as CN114952839B

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B25J9/1656: Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664: Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • B25J9/1602: Programme controls characterised by the control system, structure, architecture
    • B25J9/1605: Simulation of manipulator lay-out, design, modelling of manipulator
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P80/00: Climate change mitigation technologies for sector-wide applications
    • Y02P80/10: Efficient use of energy, e.g. using compressed air or pressurized fluid as energy carrier
    • Y02P90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02: Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Feedback Control In General (AREA)
  • Numerical Control (AREA)

Abstract

The invention relates to a two-stage robot motion decision technical framework based on cloud-edge cooperation. It establishes a working process and decision mechanism for a robot control system and defines a basic technical framework for two-stage hybrid decision-making under cloud-edge cooperation. By reducing the storage and computation demands on each individual robot, it lowers overall construction cost and improves equipment efficiency, providing a reference for enterprises that wish to jointly optimize robot control performance and computing-power allocation on a cloud-edge basis.

Description

Cloud edge cooperation-based two-stage robot motion decision technology framework
Technical Field
The invention belongs to the field of robot-cluster computing-performance optimization under cloud-edge coordination. It relates to a method that simplifies the robot motion control process and improves the utilization efficiency of system computing resources in a cloud-edge coordination system, and in particular to a two-stage robot motion decision-making technical framework based on cloud-edge cooperation.
Background
With the extensive research on smart warehousing and smart factory scenarios, how to optimize existing task processing and reduce the waste caused by repetitive computation has become an important issue. Since the rise of edge computing in 2019, research has turned to how the concept of cloud-edge cooperation can improve the practical performance of existing smart-factory methods and reduce their energy costs; the repetitive robot motion control link in particular leaves much room for improvement. At present, the field of robot motion control has the following defects:
1. Motion trajectories are generated on the robot itself: after the central computer issues a task instruction, each robot searches for an optimal path using its onboard map and onboard radar, which causes repeated calculation. N robots must each repeat the work of map loading, map scanning, map modeling, map storage, and map updating, seriously wasting energy and computing power.
2. Robots depend heavily on the processing capacity of a central computer: some enterprises offload every link of the actual equipment to that computer and operate in a pure cloud-computing mode, so large amounts of data flow between cloud and edge, creating a huge data load, high robot latency, and poor operating performance.
3. Robots usually adopt IMU (inertial measurement unit) odometry, but the signal processing is extremely rough and lacks data processing based on an optimized numerical-integration method; acquisition time and precision are seriously insufficient during operation, so several auxiliary calibration radars are needed, causing extra cost.
Disclosure of Invention
The invention involves several key technologies: path planning, robot dynamics modeling, motion control along a given path, and the reading and processing of sensor information for motion control. To optimize the computing performance of a smart-factory robot cluster, the invention proposes a two-stage motion decision-making technical framework.
The method comprises the following steps:
Step 1: a PC with strong computing performance serves as the cloud, which runs the decision task of the motion planning stage. A path-search program based on the RRT algorithm is written in Python, and the destination and start coordinates are input to it according to the task.
Step 2: using the RRT script, the cloud calculates the coordinates of the optimal path from start to end point according to a map that is input in advance and stored only in the cloud.
Step 3: the cloud sends the coordinates of all points on the task path, as a list, to a robot in the workshop; the robot acts as the edge end and executes the decision task of the motion control stage.
Step 4: establish a robot kinematic model from the robot's physical model.
Step 5: establish an electric-control characteristic test curve from the robot's electric components, to determine the actual effect of each control command.
Step 6: determine, from the robot's main control board, how to read real-time data from the IMU employed by the robot.
Step 7: establish a displacement expression for the robot from its real-time IMU data and an improved numerical-integration method.
Step 8: establish a PID control system and control flow from the robot's electric-control, dynamic, and perception characteristics.
Step 9: split the list sent to the robot in step 3 into a series of coordinates to visit, and determine the robot's motion instruction from the relationship between the starting point and the adjacent coordinates in the list.
Step 10: the robot applies the PID algorithm to achieve the desired motion from the instruction, then returns to step 9 to visit the next coordinate until the end point is reached. The robot thereby completes the motion-control decision stage, and the whole cloud-edge system completes one full two-stage motion decision.
Compared with the prior art, the invention has the beneficial effects that:
and establishing a realization framework combining robot motion decision and task planning decision under cloud edge cooperation. According to the traditional concept, the cloud end is only responsible for issuing tasks, the local computing or cloud computing mode that the specific tasks are completely executed and delivered to the robot is adjusted into the cloud edge computing mode that the robot and the host respectively bear part of the movement tasks, the repeated links or links with public characteristics are placed on public equipment, and each private robot is only responsible for the part which needs to be placed on the computing, so that the requirements of various complex tasks on the performance of the robot are reduced. The method can provide reference for enterprises to establish a high-efficiency intelligent production system under cloud edge coordination.
Drawings
FIG. 1 is a path-planning demonstration of the present invention;
FIG. 2 shows the motor measurement curve and results of the present invention;
FIG. 3 is a diagram of the sensor mounting architecture under the IIC protocol of the present invention;
FIG. 4 is a sample of IMU real-time data according to the present invention;
FIG. 5 is the PID flow chart of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it is to be understood that the terms "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counterclockwise", and the like, indicate orientations and positional relationships based on those shown in the drawings, and are used only for convenience of description and simplicity of description, and do not indicate or imply that the equipment or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be considered as limiting the present invention.
The invention is described in detail below with reference to specific examples and the attached drawing figures:
step 1: and writing a script based on the improved RRT for solving the path plan at the cloud end.
The RRT algorithm is a path planning method based on random state point sampling, and the core of the RRT algorithm is that a plurality of random coordinate points Qrandom are generated in a map containing a starting point Qint, an end point Qend and an impenetrable obstacle area b (x, y) by random strategy probability p, and finally a feasible path l is formed. The random strategy works in such a way that the probability p determines whether the next random coordinate point is in the random direction or the Qend direction. To ensure that the probability of generating a point is higher the closer the path is to the target point, the following probability strategy may be employed:
    p = p_far,   if dist(Q, Qend) > dst_set
    p = p_near,  if dist(Q, Qend) ≤ dst_set,  with p_near > p_far
where Q is the newest tree node and dst_set is a critical distance, set manually when solving the algorithm, that judges whether Qend is being approached; once it is, the random strategy is changed (the goal bias is raised) to accelerate the approach to the target point. The direct RRT algorithm cannot guarantee an optimal path: after the decision tree connects to Qend and a complete path is generated, re-selecting parent nodes in reverse and rearranging the path greatly improves its quality. It is further improved by generating a line-segment interpolation between two adjacent random points Qrandom[i] and Qrandom[i+1], performing collision detection, and optimizing the curve into a straight line, finally forming the linear interpolation:
    (y − Qrandom[i][1]) / (Qrandom[i+1][1] − Qrandom[i][1]) = (x − Qrandom[i][0]) / (Qrandom[i+1][0] − Qrandom[i][0])
The simplification is as follows:
    y·(Qrandom[i+1][0] − Qrandom[i][0]) − Qrandom[i][1]·Qrandom[i+1][0]
    = x·(Qrandom[i+1][1] − Qrandom[i][1]) − Qrandom[i][0]·Qrandom[i+1][1]
And finally, the line segment is obtained:
    x·(Qrandom[i][1] − Qrandom[i+1][1]) − y·(Qrandom[i][0] − Qrandom[i+1][0])
    = Qrandom[i][1]·Qrandom[i+1][0] − Qrandom[i][0]·Qrandom[i+1][1]
each optimization is carried out, a curve is changed into a straight line, a new shortest straight line is generated by continuously bringing forward a point, but a line segment cannot be formed between two adjacent nodes due to the existence of an obstacle near a part of special anchor points Qm, and at the moment, the RRT-star-smart algorithm changes the value of the strategy probability p to adjust the sampling mode, namely, smart sampling.
    p = p1,  during the forward search toward Qend
    p = p2,  during the backward parent re-selection from Qend near an anchor point Qm
Wherein p2 is the strategy probability when the algorithm re-selects parent nodes backward from Qend toward the vicinity of the anchor point, and p1 is the strategy probability used when searching toward Qend; that is, the algorithm searches backward, at random, for a better path to the starting point.
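As a concrete illustration of the distance-dependent sampling strategy above, the goal bias can be sketched in Python. The function and parameter names are illustrative, not from the patent; p_far and p_near stand in for the two strategy probabilities:

```python
import math
import random

def sample_point(q_near, q_end, bounds, dst_set, p_far=0.1, p_near=0.5,
                 rng=random):
    """Goal-biased RRT sampling: propose Qend with probability p,
    otherwise a uniform random point; the bias p is raised once the
    tree is within dst_set of the goal, accelerating convergence."""
    dist = math.hypot(q_near[0] - q_end[0], q_near[1] - q_end[1])
    p = p_near if dist <= dst_set else p_far
    if rng.random() < p:
        return q_end
    (xmin, xmax), (ymin, ymax) = bounds
    return (rng.uniform(xmin, xmax), rng.uniform(ymin, ymax))
```

Tuning the two probabilities trades exploration of the map against speed of convergence to the target point.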
Step 2: inputting the coordinates into the script finally generates a path as in fig. 1; the resulting path is a list of five path points as shown below.
    Float List = [(x1, y1), (x2, y2), (x3, y3), (x4, y4), (x5, y5)]
Step 3: the List is sent to the robot, which stores it, after reception, in list form.
    String Receive = [“(x1, y1), (x2, y2), (x3, y3), (x4, y4), (x5, y5)”]
The transmitted list holds floating-point numbers; the received list is a string.
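Since the robot receives the waypoints as a string, it must parse them back into floating-point pairs before use. A minimal sketch; the function name and regular expression are ours, not the patent's:

```python
import re

def parse_waypoints(received: str):
    """Turn a received string like "(0.0,0.0),(1.5,2.0)" back into
    a list of (float, float) waypoint tuples."""
    pairs = re.findall(r"\(\s*([^,()]+)\s*,\s*([^,()]+)\s*\)", received)
    return [(float(x), float(y)) for x, y in pairs]
```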
Step 4: a robot kinematic model is established from the robot's basic and physical models; in this example a Mecanum-wheel chassis is selected for kinematic analysis, finally giving the following expressions:
    Vx = (Rω / 4) · (ω1 + ω2 + ω3 + ω4)
    Vy = (Rω / 4) · (ω1 − ω2 − ω3 + ω4)
    Vω = (Rω / (4·(l + b))) · (ω1 − ω2 + ω3 − ω4)
the general chassis motion expressions expressed by the above formulas, including linear motion and rotational expressions, are obtained, and the relationship between the wheel rotation speed and the vehicle body motion speed is described. Wherein ω is 1 、ω 2 、ω 3 、ω 4 Angular velocities, R, of the four wheels, front right, front left, rear right, and rear left, respectively ω Is the dynamic radius of the wheels of the robot, and is the radius under the condition of considering the deformation of the rubber roller:
R ω =R×0.98
and b is the distance from the axle center of the wheel to the horizontal symmetry line of the vehicle body, if the distance is from the center of the wheel hub to the vertical symmetry line of the vehicle body. The rigid body model of the robot chassis can be displaced at any angular velocity and linear velocity in a free plane by the rotation speed matching of the four wheels.
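The chassis equations can be checked numerically with a small helper. The sketch below assumes a typical X-arrangement sign pattern, which is a common convention and may differ from the patent's exact wheel numbering:

```python
def mecanum_body_velocity(w1, w2, w3, w4, R_w, l, b):
    """Body velocity (vx, vy, wz) of a Mecanum chassis from the four
    wheel angular velocities, dynamic wheel radius R_w, and the
    half-spacings l (to the vertical symmetry line) and b (to the
    horizontal symmetry line)."""
    vx = R_w * (w1 + w2 + w3 + w4) / 4.0
    vy = R_w * (w1 - w2 - w3 + w4) / 4.0
    wz = R_w * (w1 - w2 + w3 - w4) / (4.0 * (l + b))
    return vx, vy, wz
```

Equal wheel speeds produce pure forward motion, as expected from the first equation.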
Step 5: the relationship between the signal output by the main control board and the actual motor speed is determined from the motor-driver chip, the drive-signal mode, and the actual motor model, as shown in fig. 2 and table 1. The output signal changes the driving voltage of the motor chip, which finally outputs driving currents of different magnitudes to the motor; because of the built-in amplifier circuit, drive voltage and drive current do not correspond linearly. The measured relationship between PWM signal and motor no-load speed is used for the subsequent regulation in the motor control program.
Table 1: motor measurement curves and results
Step 6: a Python program reads the IMU's real-time data over the I2C (IIC) bus, with the device mounted as shown in fig. 3. A sample of the data read is shown in fig. 4.
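Reading an IMU over I2C typically means fetching two bytes per axis and recombining them into a signed 16-bit value. The sketch below assumes an MPU-6050-style register map (device address 0x68, accelerometer data starting at register 0x3B, ±2 g full scale) and the smbus2 library; none of these specifics are stated in the patent:

```python
def s16(hi: int, lo: int) -> int:
    """Combine a high and a low I2C byte into a signed 16-bit integer."""
    v = (hi << 8) | lo
    return v - 65536 if v >= 32768 else v

def read_accel(bus_no=1, addr=0x68, reg=0x3B):
    """Read x, y, z acceleration in g from an MPU-6050-style IMU.
    Register map and scale factor are assumptions, not from the patent."""
    from smbus2 import SMBus  # third-party library; hardware required
    with SMBus(bus_no) as bus:
        raw = bus.read_i2c_block_data(addr, reg, 6)
    # ±2 g full scale -> 16384 LSB per g
    return tuple(s16(raw[i], raw[i + 1]) / 16384.0 for i in (0, 2, 4))
```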
Step 7: Python reads the real-time axial-acceleration and angle measurements; integrating the acceleration signal yields the displacement of the robot chassis, which is used for motion-quantity control.
Given the acceleration time signal a(t), integrating the acceleration gives the velocity:
    v(t) = ∫₀ᵗ a(τ) dτ
Further integration of the velocity yields the displacement:
    s(t) = ∫₀ᵗ v(τ) dτ
thus, real-time displacement data is obtained from the real-time accelerometer data, and some detail correction is needed for s (t) in practical use.
Correction 1: considering zero-drift and similar phenomena in the accelerometer readings during motion, this example averages two sensors mounted in parallel to form the actual IMU reading a′:
    a′(t) = (a₁(t) + a₂(t)) / 2
according to the integration additivity:
Figure BDA0003665545060000072
then there are:
Figure BDA0003665545060000073
and (3) correction 2: considering that the accelerometer does not necessarily have an initial value of acceleration of 0 due to various factors, the actual acceleration a "should be an increment relative to the initial acceleration:
a″(t)=a′(t)-a′(0)
so that:
    s(t) = ∫₀ᵗ ∫₀^τ a″(σ) dσ dτ
and (3) correction: since the acceleration signal is acquired by periodically reading the program, the acceleration function a (t) is not a continuous function but a discrete point, and therefore, the integral calculation needs to be performed by a numerical integration method. The numerical calculation is carried out based on the Longeberg product-solving formula and the Rickett extrapolation acceleration method in the embodiment:
    S_n = h · ( a(t_0)/2 + Σ_{k=1}^{n−1} a(t_k) + a(t_n)/2 )
    S_{2n} = S_n/2 + (h/2) · Σ_{k=0}^{n−1} a(t_{k+1/2})
    S_0(h) = S_{2n}
    S_m(h) = ( 4^m · S_{m−1}(h/2) − S_{m−1}(h) ) / ( 4^m − 1 )
where S is the integration result (the corresponding velocity or displacement), h is the step length (the time-sampling step), and m is the number of corrections (of the Richardson extrapolation); S_{2n} is the trapezoidal-rule result, and S_m(h) is the final result after refining the trapezoids with the sample points. Numerical integration thus yields the relationship between the acceleration time series a(t_n) and the displacement time series s(t_n).
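Correction 3's numerical integration (Romberg quadrature accelerated by Richardson extrapolation) can be sketched for uniformly sampled data with 2^m + 1 points. This is a generic implementation of the standard scheme, not the patent's exact code:

```python
def trapezoid(samples, step, stride):
    """Composite trapezoidal rule over every stride-th sample."""
    pts = samples[::stride]
    return step * (sum(pts) - 0.5 * (pts[0] + pts[-1]))

def romberg(samples, h):
    """Romberg integration of 2**m + 1 uniform samples with step h:
    trapezoidal estimates at successively halved steps are combined
    by Richardson extrapolation,
    S_m = (4**m * S_{m-1}(h/2) - S_{m-1}(h)) / (4**m - 1)."""
    m = (len(samples) - 1).bit_length() - 1
    assert len(samples) == 2 ** m + 1, "need 2**m + 1 samples"
    # T[j]: trapezoidal rule with step h * 2**(m - j)
    T = [trapezoid(samples, h * 2 ** (m - j), 2 ** (m - j))
         for j in range(m + 1)]
    for k in range(1, m + 1):  # Richardson extrapolation levels
        T = [(4 ** k * T[j + 1] - T[j]) / (4 ** k - 1)
             for j in range(len(T) - 1)]
    return T[0]
```

Applied twice (acceleration to velocity, then velocity to displacement), this replaces the naive trapezoidal double integration with a higher-order estimate at the same sampling cost.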
Step 8: a PID control system, with the algorithm flow shown in fig. 5, is established from the robot's electric-control, dynamic, and perception characteristics. Taking forward motion L(0) as an example, and writing e(t) for the error, the PID law used in the control steps below is:
    u(t) = K_p·e(t) + K_i·∫₀ᵗ e(τ) dτ + K_d·de(t)/dt
step 1, a distance L needing to be advanced is given by path coordinates, and assuming that the distance is in the x axial direction, the length between the ith point and the (i-1) th point solved by the RRT algorithm is as follows:
L(0)=RRT[i][x]-RRT[i-1][x]
step 2, calculating an error value of control at the time t:
e(t)=L(0)-L(t-1)
and 3, generating a PID regulation result:
L(t)=PID[e(t)]
and 4, generating a motor control command according to the PID result, wherein the motor control command comprises the current advance speed and the delay time:
(V(t),τ(t))←L(t)
note: and V (t), tau (t) refers to the moving speed and the predicted rotating time input to the chassis control program at the time t, and the program is controlled according to the two parameters.
And step 5, determining the rotating speed and the required pwm duty ratio according to the motor driving parameter characteristic curve determined by 3.4.2, and inputting the predicted rotating time length into the program:
ω(t)←V(t)
PWM(t)←ω(t)
delay(t)←τ(t)
and 6, the chassis moves according to the input PWM and delay:
(V x ,V y ,V ω )=Mecanum(ω(t),τ(t))
note that: mecanum refers to the previously derived kinetic equation
Step 7: the chassis moves and the accelerometer produces new samples, which are read over I2C and fed into a comparator:
    if e(t) < e_set: exit the loop
    else: return to step 1
Step 9: compute the displacements in the x and y directions by subtracting the ith and (i+1)th coordinates in the list Receive:
    X(0) = |x_{i+1} − x_i|
    Y(0) = |y_{i+1} − y_i|
The chassis is then controlled to move from the ith point to the (i+1)th point by substituting X(0) and Y(0) for L(0) in step 8.
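Steps 9-10 reduce to iterating over consecutive waypoint pairs; a minimal sketch of the per-segment displacement computation (the function name is ours):

```python
def segment_displacements(waypoints):
    """Per-segment |dx|, |dy| between consecutive waypoints; each
    pair is fed in turn into L(0) of the step-8 control loop."""
    return [(abs(x2 - x1), abs(y2 - y1))
            for (x1, y1), (x2, y2) in zip(waypoints, waypoints[1:])]
```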
Step 10: repeat step 9 until the robot has visited all the points. At this point the two motion-decision tasks, motion path planning and local motion control, run respectively on the computer (cloud) and on the robot (edge). In the complete decision framework finally realized, the cloud plans a route from the unique map stored in its database and sends it to the robot; the robot only needs to run the route-following and control links locally. In a robot-cluster environment this avoids the computing-power and energy waste of every robot storing a map and planning routes, and it lowers the requirements on each robot so that the scarcer, more valuable edge computing power can be better utilized.
The foregoing shows and describes the general principles, essential features, and advantages of the invention. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above, and the preferred embodiments of the present invention are described in the above embodiments and the description, and are not intended to limit the present invention. The scope of the invention is defined by the appended claims and equivalents thereof.

Claims (4)

1. A two-stage robot motion decision technology framework based on cloud-edge collaboration, characterized by comprising the following two links:
Link 1: planning the robot's motion path, i.e., deciding, once its start and end points are determined, by which route the robot should move;
Link 2: controlling the robot's four motors so that the chassis moves along the planned path.
2. The cloud-edge-collaboration-based two-stage robot motion decision technology framework as claimed in claim 1, wherein: the path-planning work is carried out with an improved RRT (rapidly-exploring random tree) algorithm, whose core is to generate a number of random coordinate points Qrandom, with random strategy probability p, in a map containing a starting point Qint, an end point Qend, and an impenetrable obstacle region b(x, y), finally connecting them into a feasible path l; the probability decision function is:
    p = p_far,   if dist(Q, Qend) > dst_set
    p = p_near,  if dist(Q, Qend) ≤ dst_set,  with p_near > p_far
where dst_set is a critical distance, set manually when solving the algorithm, that judges whether Qend is being approached; once it is, the random strategy is changed to accelerate the approach to the target point.
3. The cloud-edge-collaboration-based two-stage robot motion decision technology framework as claimed in claim 1, wherein: the motor control combines an IMU sensor with PID (proportion-integration-differentiation) control to achieve the motion effect; the PID expression, with error e(t), is:
    u(t) = K_p·e(t) + K_i·∫₀ᵗ e(τ) dτ + K_d·de(t)/dt
4. The cloud-edge-collaboration-based two-stage robot motion decision technology framework as claimed in claim 1, wherein: an improved numerical-integration method converts the acceleration values acquired by the IMU into displacement and measures the PID error e(t); the numerical expressions from acceleration to displacement are:
    S_n = h · ( a(t_0)/2 + Σ_{k=1}^{n−1} a(t_k) + a(t_n)/2 )
    S_{2n} = S_n/2 + (h/2) · Σ_{k=0}^{n−1} a(t_{k+1/2})
    S_0(h) = S_{2n}
    S_m(h) = ( 4^m · S_{m−1}(h/2) − S_{m−1}(h) ) / ( 4^m − 1 )
where S and V are the integration results (the corresponding displacement and velocity), h is the step length (the time-sampling step), and m is the number of corrections (of the Richardson extrapolation); S_{2n} is the trapezoidal-rule result, and S_m(h) is the final result after refining the trapezoids with the sample points. Numerical integration yields the acceleration time series a(t) and the displacement time series S(t).
CN202210584920.7A (filed 2022-05-27, priority 2022-05-27): Cloud edge cooperation-based two-stage robot motion decision system. Active. Granted as CN114952839B.

Priority Applications (1)

Application Number: CN202210584920.7A; Priority Date: 2022-05-27; Filing Date: 2022-05-27; Title: Cloud edge cooperation-based two-stage robot motion decision system

Publications (2)

CN114952839A (published 2022-08-30)
CN114952839B (granted 2024-02-06)

Family

ID=82955926

Family Applications (1)

Application CN202210584920.7A (Active): Cloud edge cooperation-based two-stage robot motion decision system

Country Status (1)

Country Link
CN (1) CN114952839B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110035087A1 (en) * 2009-08-10 2011-02-10 Samsung Electronics Co., Ltd. Method and apparatus to plan motion path of robot
CN109262584A (en) * 2018-11-20 2019-01-25 钟祥博谦信息科技有限公司 A kind of intelligent miniature robot
CN112631173A (en) * 2020-12-11 2021-04-09 中国人民解放军国防科技大学 Brain-controlled unmanned platform cooperative control system
CN112987763A (en) * 2021-05-11 2021-06-18 南京理工大学紫金学院 ROS-based intelligent trolley of autonomous navigation robot control system
CN114296440A (en) * 2021-09-30 2022-04-08 中国航空工业集团公司北京长城航空测控技术研究所 AGV real-time scheduling method integrating online learning
CN114355885A (en) * 2021-12-03 2022-04-15 中国信息通信研究院 Cooperative robot carrying system and method based on AGV
CN114473998A (en) * 2022-01-14 2022-05-13 浙江工业大学 Intelligent service robot system capable of automatically opening door


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHIJIA CHEN: "Intelligent Cloud Training System based on Edge Computing and Cloud Computing", 《2020 IEEE 4TH INFORMATION TECHNOLOGY, NETWORKING, ELECTRONIC AND AUTOMATION CONTROL CONFERENCE (ITNEC 2020)》 *

Also Published As

Publication number Publication date
CN114952839B (en) 2024-02-06

Similar Documents

Publication Publication Date Title
WO2021175313A1 (en) Automatic driving control method and device, vehicle, and storage medium
Dai et al. Modeling vehicle interactions via modified LSTM models for trajectory prediction
CN109885049B (en) Automatic mapping and path matching method for laser-guided AGV (automatic guided vehicle) based on dead reckoning
Xiong et al. Application improvement of A* algorithm in intelligent vehicle trajectory planning
CN105760954A (en) Parking system path planning method based on improved ant colony algorithm
CN104914865A (en) Transformer station inspection tour robot positioning navigation system and method
CN111258218B (en) Intelligent vehicle path tracking method based on maximum correlation entropy criterion
CN113415288A (en) Sectional type longitudinal vehicle speed planning method, device, equipment and storage medium
CN114879687A (en) Intelligent control method for unmanned logistics vehicle
CN113515117A (en) Conflict resolution method for multi-AGV real-time scheduling based on time window
CN113515111B (en) Vehicle obstacle avoidance path planning method and device
Fu et al. Collision-free and kinematically feasible path planning along a reference path for autonomous vehicle
WO2022252390A1 (en) Error compensation method and apparatus, computer device, and storage medium
CN114952839A (en) Cloud edge cooperation-based two-stage robot motion decision technology framework
Chen et al. A robust trajectory planning method based on historical information for autonomous vehicles
CN113341999A (en) Forklift path planning method and device based on optimized D-x algorithm
CN115525054B (en) Method and system for controlling tracking of edge path of unmanned sweeper in large industrial park
CN113296515A (en) Explicit model prediction path tracking method for double-independent electrically-driven vehicle
CN113650622B (en) Vehicle speed track planning method, device, equipment and storage medium
CN116206447A (en) Intelligent network-connected vehicle intersection ecological driving control method
CN115981314A (en) Robot navigation automatic obstacle avoidance method and system based on two-dimensional laser radar positioning
CN115454053A (en) Automatic guided vehicle control method, system and device and computer equipment
CN113625597A (en) Simulated vehicle control method and device, electronic equipment and storage medium
Sviatov et al. Approach of Trajectory Generation Based on Waypoint Information for a Highly Automated Vehicle
CN219948102U (en) Two-wheel AGV positioning device combining two-dimensional code and encoder and logistics transportation system

Legal Events

Code: Description
PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant