
CN114489077A - Robot navigation method and device and robot - Google Patents

Robot navigation method and device and robot

Info

Publication number
CN114489077A
Authority
CN
China
Prior art keywords
robot, path, road sign, indication information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210095419.4A
Other languages
Chinese (zh)
Inventor
卢鹰
梁朋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Uditech Co Ltd
Original Assignee
Uditech Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Uditech Co Ltd filed Critical Uditech Co Ltd
Priority to CN202210095419.4A
Publication of CN114489077A
Legal status: Pending


Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0214 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
    • G05D1/0221 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
    • G05D1/0223 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0234 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using optical markers or beacons
    • G05D1/0236 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using optical markers or beacons in combination with a laser
    • G05D1/0238 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
    • G05D1/024 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
    • G05D1/0242 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using non-visible light signals, e.g. IR or UV signals
    • G05D1/0276 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
    • G05D1/0285 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle using signals transmitted via a public communication network, e.g. GSM network

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Optics & Photonics (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The application relates to the technical field of robots and discloses a robot navigation method, a robot navigation device, and a robot. The robot navigation method comprises the following steps: judging whether a robot road sign is detected while the robot is traveling; if a robot road sign is detected, acquiring first indication information indicated by the robot road sign; replanning the path according to the first indication information to obtain a replanned target path; and controlling the robot to continue traveling based on the target path. By detecting robot road signs and acquiring the first indication information they indicate, the robot can replan its path and continue traveling along the replanned target path.

Description

Robot navigation method and device and robot
Technical Field
The embodiment of the application relates to the technical field of robots, and in particular to a robot navigation method, a robot navigation device, and a robot.
Background
Robot is the common name for automatically controlled machines and covers all machines that simulate human behavior or thought, as well as machines that simulate other creatures. With the development of technology and the improvement of living standards, robots such as cleaning robots, service robots, remote monitoring robots, and floor sweeping robots have gradually entered people's lives.
At present, many robots have the capabilities of autonomous action and autonomous path planning, and can travel from a starting point to a destination using local or cloud navigation information. Most existing robots perform positioning and navigation based on Simultaneous Localization and Mapping (SLAM) technology: they plan a path on a static map and then travel along the planned path. However, if the robot encounters an emergency while traveling, for example an obstacle, it performs local path planning to bypass the obstacle and then returns to the original path to continue walking.
Local path planning relies on the robot's own capabilities, such as vision and/or radar, to handle obstacle avoidance, collision prevention, and compliance with intersection rules on top of a largely fixed path. In complex scenes, this places high computational demands on the robot and takes a long time to process, so the walking efficiency of the robot is not high.
Disclosure of Invention
In order to solve the above technical problem, embodiments of the present application provide a robot navigation method, an apparatus, and a robot, so as to improve the traveling efficiency of the robot.
In order to solve the above technical problem, an embodiment of the present application provides the following technical solutions:
in a first aspect, an embodiment of the present application provides a robot navigation method, including:
judging whether a robot road sign is detected while the robot is traveling;
if the robot road sign is detected, first indication information indicated by the robot road sign is acquired;
replanning the path according to the first indication information to obtain a replanned target path;
and controlling the robot to continue to travel based on the target path.
In some embodiments, the first indication information includes pass information, temporary no-pass information, or permanent no-pass information, and replanning the path according to the first indication information to obtain the replanned target path includes:
if the first indication information is pass information, planning a first path and taking the first path as the target path, wherein the first path includes the road section corresponding to the pass information;
if the first indication information is temporary no-pass information, planning a second path, taking the second path as the target path, and executing corresponding actions according to subsequently acquired information indicated by the robot road sign;
if the first indication information is permanent no-pass information, planning a third path and taking the third path as the target path, wherein the third path does not include the road section corresponding to the permanent no-pass information.
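The three branches above amount to a dispatch on the type of the first indication information. The following is a minimal sketch of that dispatch in Python; the Indication names and the planner interface (plan_through, plan_to, plan_avoiding) are illustrative assumptions, not an API disclosed by the application.

```python
# Dispatch on the first indication information, as described above.
# All planner/robot methods are hypothetical stand-ins.
from enum import Enum, auto

class Indication(Enum):
    PASS = auto()           # pass information: the indicated road section is open
    TEMP_NO_PASS = auto()   # temporary no-pass: queue and wait to be released
    PERM_NO_PASS = auto()   # permanent no-pass: avoid the road section entirely

def replan(indication, planner, robot):
    if indication is Indication.PASS:
        # First path: must include the road section corresponding to the pass information.
        return planner.plan_through(robot.pose, robot.goal, planner.indicated_section)
    if indication is Indication.TEMP_NO_PASS:
        # Second path: move to the queuing position, then act on later indications.
        robot.enter_queue_wait_mode()
        return planner.plan_to(robot.pose, planner.queue_position)
    # Third path: must exclude the road section corresponding to the no-pass information.
    return planner.plan_avoiding(robot.pose, robot.goal, planner.indicated_section)
```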
In some embodiments, the pass information includes a first road section and a traveling direction;
planning a first path, comprising:
acquiring a global map;
planning a plurality of sub-paths along the traveling direction between the current position of the robot and the target position to be reached by the robot according to the global map;
and determining a target path from the plurality of sub-paths according to the first road section, wherein the target path passes through the first road section.
In some embodiments, planning a second path, taking the second path as a target path, and executing corresponding actions according to subsequently acquired information indicated by the robot landmark includes:
planning a second path according to queuing position information sent by the robot road sign to enable the robot to enter a queuing waiting mode;
and when the robot is in a queuing waiting mode, continuously acquiring second indication information indicated by the robot road sign, and sequentially passing according to the second indication information, wherein the second indication information is queuing information for sequentially passing the robot.
In some embodiments, the robot landmark includes a first communication module and the robot includes a second communication module, and determining whether the robot landmark is detected during the travel of the robot includes:
when the distance between the robot and the robot road sign is smaller than a first distance threshold value, a first communication module of the robot road sign is communicated with a second communication module of the robot;
and when the identification signal sent by the first communication module of the robot road sign is received, determining that the robot road sign is detected.
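In other words, detection is triggered by proximity-gated communication rather than by vision alone. A small sketch of this check follows, under the assumption of a simple radio interface (the receive call and frame format are hypothetical):

```python
# Detection via the road sign's first communication module: the robot counts
# the sign as detected once it receives the sign's identification signal.
import math

FIRST_DISTANCE_THRESHOLD_M = 3.5  # e.g. 70% of a 5 m maximum signal range, per the detailed description

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def detect_road_sign(robot_pos, sign_pos, radio):
    """Return True when an identification signal from the road sign is received."""
    if distance(robot_pos, sign_pos) >= FIRST_DISTANCE_THRESHOLD_M:
        return False                       # out of communication range
    frame = radio.receive(timeout_s=0.1)   # second communication module listens
    return frame is not None and frame.kind == "identification"
```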
In some embodiments, the robot further comprises a camera unit;
if the robot road sign is detected, acquiring first indication information indicated by the robot road sign includes:
if the robot road sign is detected according to the received identification signal, acquiring a road sign image of the robot road sign through the camera unit;
and acquiring, according to the road sign image, the first indication information indicated by the robot road sign.
In some embodiments, acquiring first indication information indicated by the robot landmark according to the landmark image includes:
extracting direction marks and/or text contents displayed on the road sign images;
and determining first indication information indicated by the robot signposts according to the extracted direction identifications and/or text contents.
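A minimal sketch of this extraction step follows; the mapping table is illustrative, and the extraction of the direction mark and text (e.g. by OCR or by the trained detection model described later) is assumed rather than shown.

```python
# Map an extracted direction mark and/or text content to the first
# indication information. Keys and categories are illustrative only.
TEXT_TO_INDICATION = {
    "go straight": ("pass", "straight"),
    "turn left": ("pass", "left"),
    "turn right": ("pass", "right"),
    "temporarily no entry": ("temporary_no_pass", None),
    "no entry": ("permanent_no_pass", None),
}

def indication_from_sign(direction_mark=None, text=None):
    # Prefer the explicit direction mark; fall back to the text content.
    key = (direction_mark or text or "").strip().lower()
    return TEXT_TO_INDICATION.get(key)   # None if the sign is not understood
```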
In some embodiments, the method further comprises:
acquiring a first dwell time of the robot within a preset range of a robot road sign;
if the first dwell time is greater than a first time threshold, determining that the robot is in distress;
after the robot is determined to be in distress, acquiring a second dwell time of the robot within the preset range of the robot road sign;
and if the second dwell time is greater than a second time threshold, determining that the robot cannot escape on its own, and sending a distress signal to a terminal in communication connection with the robot, so as to notify a technician corresponding to the terminal to assist the robot in escaping.
In a second aspect, an embodiment of the present application provides a robot navigation device, including:
the judging unit is used for judging whether a robot road sign is detected while the robot is traveling;
the acquisition unit is used for acquiring first indication information indicated by the robot road sign if the robot road sign is detected;
the planning unit is used for replanning the path according to the first indication information so as to obtain a replanned target path;
and the traveling unit is used for controlling the robot to continue traveling based on the target path.
In a third aspect, an embodiment of the present application provides a robot, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a robot navigation method as in the first aspect.
In a fourth aspect, embodiments of the present application provide a non-transitory computer-readable storage medium storing computer-executable instructions for causing a robot to perform the robot navigation method according to the first aspect.
The beneficial effects of the embodiment of the application are as follows: in contrast to the prior art, an embodiment of the present application provides a robot navigation method, including: judging whether a robot road sign is detected while the robot is traveling; if a robot road sign is detected, acquiring first indication information indicated by the robot road sign; replanning the path according to the first indication information to obtain a replanned target path; and controlling the robot to continue traveling based on the target path. By detecting robot road signs and acquiring the first indication information they indicate, the robot can replan its path and continue traveling along the replanned target path, thereby improving the traveling efficiency of the robot.
Drawings
One or more embodiments are illustrated by the corresponding drawings, which do not limit the embodiments; elements with the same reference number designate similar elements and, unless specifically noted, the figures are not drawn to scale.
FIG. 1 is a schematic diagram of an application environment provided by an embodiment of the present application;
fig. 2 is a schematic flowchart of a robot navigation method according to an embodiment of the present disclosure;
FIG. 3 is a detailed flowchart of step S204 in FIG. 2;
FIG. 4 is a schematic diagram of robot navigation travel provided by an embodiment of the present application;
fig. 5 is a schematic flowchart of planning a first path according to an embodiment of the present disclosure;
fig. 6 is a schematic flowchart of planning a second path according to an embodiment of the present application;
fig. 7 is a schematic flow chart of a robot escape procedure according to an embodiment of the present disclosure;
fig. 8 is an interaction sequence diagram of a server, a robot, and a mobile terminal according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a robot communicating with a robot landmark according to an embodiment of the present disclosure;
fig. 10 is a schematic structural diagram of a robot navigation device according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of a robot according to an embodiment of the present application.
Detailed Description
To facilitate an understanding of the present application, the present application is described in more detail below with reference to the accompanying drawings and detailed description. It will be understood that when an element is referred to as being "secured to" another element, it can be directly on the other element or intervening elements may also be present. When an element is referred to as being "connected" to another element, it can be directly connected to the other element or intervening elements may be present. The terms "vertical," "horizontal," "left," "right," and the like as used herein are for descriptive purposes only.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used in the description of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the present application. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Before the present application is explained in detail, terms and expressions referred to in the embodiments of the present application are explained, and the terms and expressions referred to in the embodiments of the present application are applicable to the following explanations:
(1) A robot road sign (robot landmark) refers to a kind of mark or object intended to be recognized by a robot. Robot road signs include physical devices that display specific information, such as signboards and landmarks (for example, traffic signs), and also include virtual devices that display specific information, such as on-screen patterns and on-screen marks.
The technical scheme of the application is specifically explained in the following by combining the drawings in the specification:
referring to fig. 1, fig. 1 is a schematic diagram of an application environment according to an embodiment of the present disclosure;
as shown in fig. 1, the application environment 100 includes: a robot 10, a mobile terminal 20, and a server 30, which are communicatively connected to each other through a network comprising a wired network and/or a wireless network. It is understood that the network may include 2G, 3G, 4G, 5G, wireless LAN, Bluetooth, and the like, and may also include wired networks such as serial lines and network cables.
In the present embodiment, the robot 10 includes a mobile robot, for example: robots such as cleaning robots, pet robots, transfer robots, nursing robots, remote monitoring robots, sweeping robots, and the like. The cleaning robot includes, but is not limited to, a sweeping robot, a dust collecting robot, a mopping robot, or a floor washing robot.
The robot comprises a main body, a driving wheel assembly, a camera unit, a laser radar, a communication module, and a controller. The main body may be generally oval, triangular, D-shaped, or otherwise shaped in profile. The controller is arranged in the main body, and the driving wheel assembly is mounted in the main body to drive the robot to move. If the robot is a cleaning robot, the driving wheel assembly drives it to move over a surface to be cleaned, which may be a smooth floor, a carpeted surface, or another surface that needs cleaning.
In this embodiment, the driving wheel assembly includes a left driving wheel, a right driving wheel, and an omnidirectional wheel, the left and right driving wheels being mounted on opposite sides of the main body. The omnidirectional wheel is installed near the front of the bottom of the main body; it is a swivel caster that can rotate 360 degrees horizontally, so that the robot can steer flexibly. The left driving wheel, the right driving wheel, and the omnidirectional wheel are arranged in a triangle, which improves the walking stability of the robot.
In the embodiment of the application, the camera unit is arranged on the body of the robot and used for acquiring image data and/or video data. The camera unit is in communication connection with the controller, and is used for acquiring image data and/or video data within the coverage area of the camera unit, for example: the method comprises the steps of obtaining image data and/or video data in a certain closed space, or obtaining image data and/or video data in a certain open space, and sending the obtained image data and/or video data to a controller. In the embodiment of the present application, the camera unit includes, but is not limited to, an infrared camera, a night vision camera, a webcam, a digital camera, a high definition camera, a 4K camera, an 8K high definition camera, and other camera devices.
In this embodiment of the application, the laser radar is communicatively connected to the controller and is arranged on the body of the robot, for example on the moving chassis of the robot body, and is used for acquiring laser point cloud data. Specifically, the laser radar acquires laser point cloud data within its monitoring range; the moving chassis of the robot body is provided with a communication module, through which the laser point cloud data acquired by the laser radar are sent to the controller. In this embodiment of the application, the laser radar includes pulse laser radars, continuous-wave laser radars, and other radars, and the mobile chassis includes universal chassis, arch-type mobile chassis, and other robot mobile chassis.
In the embodiment of the present application, the communication module is communicatively connected to the mobile terminal and the server, and is configured to receive data sent by them, for example an environment map sent by the server, or to send data to them, for example path information sent to the server. In the embodiment of the present application, the communication module may implement communication over the Internet; the communication module includes, but is not limited to, a Wi-Fi module, a ZigBee module, an NB-IoT module, a 4G module, a 5G module, a Bluetooth module, and other communication units.
In the embodiment of the application, the controller is arranged inside the main body and is electrically connected to the left driving wheel, the right driving wheel, and the omnidirectional wheel. The controller serves as the control core of the robot and is used for controlling the robot to walk, retreat, and perform some business logic processing. For example: the controller receives the image data and/or video data sent by the camera unit, receives the laser point cloud data sent by the laser radar, and constructs an environment map from the laser point cloud data. The controller processes the laser point cloud data of the monitored area through the Simultaneous Localization and Mapping (SLAM) technique, namely a laser SLAM algorithm, so as to construct the environment map. In the embodiment of the application, laser SLAM algorithms include Kalman filtering, particle filtering, and graph optimization methods.
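As a rough illustration of what constructing an environment map from laser point cloud data involves, the sketch below updates a simple occupancy grid from laser returns. It is a deliberately minimal stand-in: a real laser SLAM pipeline (Kalman filtering, particle filtering, or graph optimization) also estimates the robot's pose, which is assumed known here.

```python
# Minimal occupancy-grid update from laser returns; pose estimation,
# ray tracing of free space, and loop closure are all omitted.
import numpy as np

class OccupancyGrid:
    def __init__(self, size=200, resolution=0.05):
        self.res = resolution                 # metres per cell
        self.grid = np.zeros((size, size))    # log-odds of occupancy

    def update(self, hits):
        """Accumulate occupancy evidence for each laser return (world-frame x, y)."""
        for hx, hy in hits:
            i, j = int(hx / self.res), int(hy / self.res)
            if 0 <= i < self.grid.shape[0] and 0 <= j < self.grid.shape[1]:
                self.grid[i, j] += 0.9        # increase occupancy log-odds

grid = OccupancyGrid()
grid.update([(1.0, 2.0), (1.05, 2.0)])        # two nearby returns from one wall
```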
In embodiments of the present application, the controller may be a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), a single-chip microcomputer, an ARM (Acorn RISC Machine) processor, or another programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination of these components. The controller may also be any conventional processor, controller, microcontroller, or state machine, or may be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP, or one or more combinations of Micro Control Units (MCUs), Field Programmable Gate Arrays (FPGAs), and Systems-on-a-Chip (SoC).
It is understood that the robot 10 in the embodiment of the present application further includes a storage module, which includes but is not limited to: one or more of FLASH memory, NAND FLASH memory, vertical NAND FLASH memory (VNAND), NOR FLASH memory, Resistive Random Access Memory (RRAM), Magnetoresistive Random Access Memory (MRAM), Ferroelectric Random Access Memory (FRAM), spin transfer torque random access memory (STT-RAM), and the like.
In the embodiment of the present application, during the movement of the robot 10, the controller performs localization and navigation using the Simultaneous Localization and Mapping (SLAM) technique, namely a laser SLAM algorithm, constructing a map and determining the robot's location from the environmental data.
In the embodiment of the present application, the mobile terminal 20 is communicatively connected to the robot 10 and is configured to send control instructions to the robot 10, or to receive path information sent by the robot 10, so that the path information and related images of the robot 10 can be presented on the screen of the mobile terminal 20 to monitor the robot's traveling process. An application (APP) is installed on the mobile terminal 20, through which a user can send control commands to the robot 10 to control its state. The mobile terminal 20 includes, but is not limited to: mobile communication devices, mobile personal computer devices, portable entertainment devices, or other electronic devices with video playback and Internet access capabilities.
In the embodiment of the present application, the server 30 is communicatively connected to the robot 10 and the mobile terminal 20 and is configured to send map information to the robot 10 and/or the mobile terminal 20, for example an environment map of the environment in which the robot is located, based on which the robot 10 performs path planning; the environment map includes street maps, road planning maps, and the like. There may be a plurality of servers 30, which may constitute a server cluster, for example: a first server, a second server, …, an nth server; the server cluster may also be a cloud computing service center comprising a number of servers. The server in the embodiment of the present application includes but is not limited to: tower servers, rack servers, blade servers, and cloud servers. Preferably, the server is a cloud server (ECS).
Referring to fig. 2, fig. 2 is a schematic flowchart illustrating a robot navigation method according to an embodiment of the present disclosure;
The robot navigation method is applied to a robot, and the execution subject of the robot navigation method is one or more processors of the robot.
As shown in fig. 2, the robot navigation method includes:
step S201: the robot is in the process of advancing;
specifically, the robot receives a travel instruction sent by the mobile terminal, and travels along an original path according to the travel instruction, wherein the travel instruction includes path information of the original path. After the controller of the robot receives the traveling command, the controller analyzes the traveling command to obtain path information of the original path, and controls the robot to start from the current position and enter a traveling process based on the path information, wherein the path information comprises starting point information, end point information and path direction information, and at the moment, the controller controls the robot to travel according to the current position, the end point information and the path direction information of the robot so as to enable the robot to travel towards the end point.
Step S202: judging whether a robot road sign is detected;
specifically, a robot road sign is a mark or object to be recognized by the robot. The robot road sign includes a first communication module, and the robot includes a second communication module. Judging whether a robot road sign is detected includes:
when the distance between the robot and the robot road sign is smaller than a first distance threshold value, a first communication module of the robot road sign is communicated with a second communication module of the robot;
and when the identification signal sent by the first communication module of the robot road sign is received, determining that the robot road sign is detected.
Specifically, if the distance between the robot and the robot road sign is smaller than a first distance threshold, the robot road sign sends an identification signal to the robot, and when the identification signal sent by the first communication module of the robot road sign is received, the robot road sign is determined to be detected.
Alternatively, the robot road sign is a sign to be recognized by the robot and carries first indication information. The robot acquires a road sign image including the robot road sign through the camera unit disposed on its body and performs image recognition on the road sign image to determine whether a robot road sign is detected. If a robot road sign is detected, the method proceeds to step S203: acquiring first indication information indicated by the robot road sign; if no robot road sign is detected, the method proceeds to step S206: controlling the robot to continue traveling based on the original path.
In the embodiment of the application, in order to detect the robot landmark more accurately, a robot landmark detection model is trained in advance, wherein the training process comprises:
(1) Acquiring an image data set, wherein the image data set includes road sign images of a plurality of robot road signs, the road sign images are three-channel color images, and each road sign image is labeled with a category label. The category label represents whether the image includes a robot road sign and the category of the first indication information corresponding to the road sign. For example: the road sign images are labeled through a one-hot category labeling scheme, in which a value of 1 or 0 at the position corresponding to the robot road sign represents whether the image includes a robot road sign.
It will be appreciated that the image data set may consist of color images captured by the camera unit of the robot, or of road sign images contained in a database of the server; the source of the image samples is not limited.
(2) Constructing a loss function and training the robot road sign detection model based on the image data set to obtain the trained robot road sign detection model.
Specifically, the embodiment of the application adopts the Adam algorithm (Adaptive Moment Estimation) to optimize the model parameters of the robot road sign detection model. For example: the number of iterations is set to 500, the initial learning rate is set to 0.001, the weight decay is set to 0.0005, and the learning rate decays to 1/10 of its value every 50 iterations.
It can be understood that the Adam algorithm can be regarded as a combination of the momentum method and the RMSProp algorithm: it not only uses momentum as the parameter update direction, but also adaptively adjusts the learning rate.
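The training schedule above translates directly into a few lines of PyTorch. The sketch below is illustrative only: the application does not specify the network architecture, loss function, or data pipeline, so the model, loss, and stand-in batch here are assumptions.

```python
# Adam with the stated hyperparameters: 500 iterations, lr 0.001,
# weight decay 0.0005, lr decayed to 1/10 every 50 iterations.
import torch

model = torch.nn.Sequential(               # placeholder road-sign detector
    torch.nn.Flatten(),
    torch.nn.Linear(3 * 64 * 64, 7),       # 7-bit label: presence bit + 6 categories
)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001, weight_decay=0.0005)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.1)
loss_fn = torch.nn.BCEWithLogitsLoss()     # multi-label 0/1 targets

for step in range(500):
    images = torch.randn(8, 3, 64, 64)     # stand-in batch of road sign images
    labels = torch.rand(8, 7).round()      # stand-in 7-bit labels
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    scheduler.step()                       # decays lr by 10x every 50 steps
```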
After the robot landmark detection model is trained, the trained robot landmark detection model is called, and landmark images shot by the robot in real time are detected to judge whether a robot landmark is detected.
Step S203: acquiring first indication information indicated by a robot road sign;
the robot further comprises a camera unit, wherein if the robot road sign is detected, first indication information indicated by the robot road sign is acquired, and specifically, if the robot road sign is detected according to the received identification signal, a road sign image of the robot road sign is acquired through the camera unit; according to the road sign image, first indication information indicated by the road sign of the robot is obtained. The first indication information comprises traffic information, temporary traffic prohibition information and permanent traffic prohibition information, wherein the traffic information comprises straight running, left turning and right turning, the temporary traffic prohibition information comprises temporary straight running prohibition, temporary left turning prohibition and temporary right turning prohibition, and the permanent traffic prohibition information comprises permanent straight running prohibition, permanent left turning prohibition and permanent right turning prohibition.
In the embodiment of the application, acquiring first indication information indicated by a robot landmark according to a landmark image includes:
extracting direction marks and/or text contents displayed on the road sign images;
and determining the first indication information indicated by the robot road sign according to the extracted direction identification and/or text content.
It will be appreciated that the road sign image can carry rich presentation content, such as direction marks and/or text content, which facilitates direction guidance and path planning for the robot.
Further, in the embodiment of the application, the pre-trained robot road sign detection model is also used for identifying the first indication information indicated by the robot road sign.
Specifically, a road sign image including the robot road sign is acquired and input into the pre-trained robot road sign detection model to obtain the first indication information indicated by the robot road sign, for example: going straight, turning left, turning right, temporarily no going straight, temporarily no left turn, temporarily no right turn, permanently no going straight, permanently no left turn, or permanently no right turn.
It can be understood that in the training process of the robot road sign detection model, the road sign images are labeled through a one-hot category labeling scheme. For example: the label corresponding to each road sign image is a seven-bit binary vector, in which the first bit represents whether the image includes a robot road sign and the following six bits represent the category of the first indication information corresponding to the road sign; for example, [1,1,0,0,0,0,0] indicates that the image includes a robot road sign whose first indication information is going straight.
Further, in order to shorten the label, the embodiment of the present application sets the number of bits of the label equal to the number of categories of the first indication information. For example: if there are 6 categories of first indication information, the label is set to 6 bits, where [0,0,0,0,0,0] indicates that the image does not include a robot road sign and [1,0,0,0,0,0] indicates that the image includes a robot road sign whose first indication information is going straight, and so on. In this way, both whether the image includes a robot road sign and the corresponding first indication information can be determined.
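A small sketch of the two label layouts just described, assuming an illustrative list of six categories (the application's exact category order is not specified):

```python
# 7-bit labels: presence bit + one-hot category. 6-bit labels: one-hot
# category where the all-zero vector itself means "no road sign".
CATEGORIES = ["straight", "left", "right",
              "no_straight", "no_left", "no_right"]   # assumed order

def encode_7bit(has_sign, category=None):
    label = [0] * 7
    if has_sign:
        label[0] = 1
        label[1 + CATEGORIES.index(category)] = 1
    return label

def encode_6bit(has_sign, category=None):
    label = [0] * 6
    if has_sign:
        label[CATEGORIES.index(category)] = 1   # all zeros <=> no road sign
    return label

assert encode_7bit(True, "straight") == [1, 1, 0, 0, 0, 0, 0]
assert encode_6bit(True, "straight") == [1, 0, 0, 0, 0, 0]
assert encode_6bit(False) == [0, 0, 0, 0, 0, 0]
```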
Step S204: replanning the path according to the first indication information to obtain a replanned target path;
it can be understood that the original path planned in advance by the robot does not take robot road signs into account, and depending on the first indication information indicated by a road sign, the original path may no longer be passable. The embodiment of the application therefore replans the path according to the first indication information to obtain a replanned target path.
Specifically, referring to fig. 3 again, fig. 3 is a detailed flowchart of step S204 in fig. 2;
as shown in fig. 3, the step S204: replanning the path according to the first indication information to obtain a replanned target path, comprising:
step S2041: acquiring first indication information;
the first indication information is obtained after the robot performs image recognition on the road sign image and comprises pass information, temporary no-pass information, and permanent no-pass information, wherein the pass information includes going straight, turning left, and turning right, the temporary no-pass information includes temporarily no going straight, temporarily no left turn, and temporarily no right turn, and the permanent no-pass information includes permanently no going straight, permanently no left turn, and permanently no right turn.
Step S2042: if the first indication information is pass information, planning a first path and taking the first path as the target path, wherein the first path includes the road section corresponding to the pass information;
specifically, if the first indication information is pass information, the road section corresponding to the pass information is a passable road section, and a first path including that road section is planned, for example: if the pass information is a left turn, the first path includes the road section corresponding to the left turn; that road section is planned into the first path, and the first path is determined as the target path.
Referring to fig. 4, fig. 4 is a schematic diagram illustrating a robot navigation according to an embodiment of the present disclosure;
as shown in fig. 4, in the traveling process of the robot, if the robot detects a robot landmark, the path is re-planned according to first indication information indicated by the robot landmark, so as to obtain a re-planned target path, for example:
if the first indication information comprises pass information, planning a first path and taking the first path as the target path;
specifically, please refer to fig. 5 again, fig. 5 is a schematic flow chart of planning a first path according to an embodiment of the present disclosure;
as shown in fig. 5, planning a first path includes:
step S501: acquiring a global map;
specifically, the global map includes an environment map of an environment in which the robot is located, for example: the robot acquires the global map transmitted by the server, or the controller of the robot acquires the global map stored in the storage unit of the robot.
Step S502: planning a plurality of sub-paths along the traveling direction between the current position of the robot and the target position to be reached by the robot according to the global map;
specifically, the traveling direction of the robot is detected by an acceleration sensor provided in the body of the robot. It is understood that the acceleration sensor is also used to detect the speed and acceleration of the robot.
And according to the global map, determining the current position of the robot and the target position to be reached by the robot, and planning a plurality of sub-paths between the current position and the target position along the traveling direction of the robot, wherein each sub-path starts from the current position and points to the target position.
Step S503: determining a target path from the plurality of sub-paths according to the first road section, wherein the target path passes through the first road section.
Specifically, the road section corresponding to the pass information is the first road section, and the target path is determined from the plurality of sub-paths such that it passes through the first road section, for example: among the sub-paths that pass through the first road section, the sub-path with the shortest distance is determined as the target path, or the sub-path with the fewest turns is determined as the target path.
In the embodiment of the application, one target path is determined among the plurality of sub-paths, and it has the shortest distance or the fewest turns. This better constrains the path planning of the robot, helps reduce its transit time, and improves its walking efficiency.
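A minimal sketch of this selection step; the SubPath structure is an assumption introduced for illustration:

```python
# Step S503 as a filter-then-minimize: keep sub-paths through the first
# road section, then pick the shortest one or the one with fewest turns.
from dataclasses import dataclass

@dataclass
class SubPath:
    sections: list      # ordered road-section ids along the sub-path
    length_m: float
    turn_count: int

def choose_target_path(sub_paths, first_section, criterion="shortest"):
    candidates = [p for p in sub_paths if first_section in p.sections]
    if not candidates:
        return None     # no sub-path uses the open road section
    if criterion == "shortest":
        return min(candidates, key=lambda p: p.length_m)
    return min(candidates, key=lambda p: p.turn_count)   # "fewest turns"

paths = [SubPath(["a", "b"], 12.0, 3), SubPath(["a", "c"], 9.5, 1)]
assert choose_target_path(paths, "c").sections == ["a", "c"]
```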
Step S2043: if the first indication information is temporary no-pass information, planning a second path, taking the second path as a target path, and executing corresponding actions according to subsequently acquired information indicated by the robot road sign;
specifically, the temporary no-pass information includes temporary no-go straight, temporary no-left turn, and temporary no-right turn, please refer to fig. 6 again, and fig. 6 is a schematic flow chart for planning the second path according to the embodiment of the present application;
as shown in fig. 6, planning a second path, taking the second path as a target path, and executing corresponding actions according to subsequently acquired information indicated by the robot landmark, including:
step S601: planning a second path according to queuing position information sent by the robot road sign to enable the robot to enter a queuing waiting mode;
it is understood that a robot road sign may be regarded as a smart device, for example an intelligent terminal that releases the robots in the queuing waiting mode in an orderly manner, thereby serving to direct traffic.
Specifically, the robot signpost communicates with a plurality of robots within a preset range of the robot signpost and sends queuing position information to the robots, so that the robots plan a second path to enter a queuing waiting mode according to the queuing position information sent by the robot signpost, wherein the queuing position information includes position information and serial number information of each robot, and the smaller the serial number information corresponding to a certain robot is, the closer the robot is to the robot signpost, that is, the robot with the smaller serial number information is located in front of the robot with the larger serial number.
Step S602: when the robot is in the queuing waiting mode, continuously acquiring second indication information indicated by the robot road sign and passing in sequence according to the second indication information, wherein the second indication information is queuing information for letting the robots pass in turn.
Specifically, if a plurality of robots within the preset range of the robot road sign are in the queuing waiting mode, the robot road sign continuously sends second indication information to them so that they pass in sequence, for example: a first robot, a second robot, and a third robot are all in the queuing waiting mode, with the first robot in front of the second robot and the second robot in front of the third robot; after the robot road sign releases the first robot, it sends second indication information to the second robot and the third robot, so that they pass in sequence according to the second indication information.
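The queuing behaviour can be pictured as the road sign holding an ordered queue and releasing one robot at a time. A minimal sketch, with the message format as an illustrative assumption:

```python
# The road sign assigns serial numbers on arrival (smaller = further ahead)
# and, on each release, sends updated queue ranks as the second indication.
from collections import deque

class RoadSignQueue:
    def __init__(self):
        self.queue = deque()                 # robot ids, front = next to be released

    def assign_position(self, robot_id):
        self.queue.append(robot_id)
        return len(self.queue)               # queuing position (serial number)

    def release_next(self):
        released = self.queue.popleft() if self.queue else None
        second_indication = {rid: rank + 1 for rank, rid in enumerate(self.queue)}
        return released, second_indication

sign = RoadSignQueue()
for rid in ("robot_1", "robot_2", "robot_3"):
    sign.assign_position(rid)
released, ranks = sign.release_next()        # robot_1 released; 2 and 3 advance
assert released == "robot_1" and ranks == {"robot_2": 1, "robot_3": 2}
```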
Step S2044: if the first indication information is permanent no-pass information, planning a third path and taking the third path as the target path, wherein the third path does not include the road section corresponding to the permanent no-pass information;
specifically, if the first indication information is permanent no-pass information, the road section corresponding to it is an impassable road section, and a third path that does not include that road section is planned, for example: if the permanent no-pass information is no right turn, the third path does not include the road section corresponding to the right turn; the next road section is then reselected as the robot's traveling path to determine the third path, and the third path is taken as the target path.
Referring to fig. 4 again, fig. 4 is a schematic diagram of robot navigation travel according to an embodiment of the present disclosure;
as shown in fig. 4, in the traveling process of the robot, if the robot detects a robot landmark, the path is re-planned according to first indication information indicated by the robot landmark, so as to obtain a re-planned target path, for example:
if the first indication information comprises permanent no-pass information, planning a third path and taking the third path as the target path.
In the embodiment of the application, when the robot encounters a robot road sign, it replans its path and can pass smoothly according to the first indication information indicated by the road sign. This avoids the prior-art approach of collecting environment information in real time, building a local map, and planning a local path to bypass obstacles, and thereby improves the traveling efficiency of the robot. Further, the embodiment of the application adopts different processing modes for different types of first indication information, which further improves the traveling efficiency of the robot.
Step S205: controlling the robot to continue to travel based on the target path;
specifically, according to first indication information indicated by the robot road sign, a target path is determined, and the robot continues to travel based on the target path, wherein the target path comprises a first path, a second path or a third path.
Step S206: controlling the robot to continue to travel based on the original path;
wherein the original path is determined by the traveling instruction sent by the mobile terminal to the robot.
It will be appreciated that if the robot does not detect a robot landmark during travel, the robot travels based on the original path until the endpoint is reached.
It can be understood that robots generally adopt laser radar positioning and navigation algorithms for their computational efficiency. In the embodiment of the application, the robot is additionally guided by robot road signs, whose signals penetrate well and whose processing is simple. A robot road sign can also inform the robot remotely, for example at an intersection, which avoids the drawback of having to enter a lane and turn back before a route can be chosen, and avoids the drawbacks of relying solely on a camera, which perceives the route only at close, visible range and requires a large amount of computation. This improves the traveling efficiency of the robot.
Referring to fig. 7, fig. 7 is a schematic flow chart of a robot escape procedure according to an embodiment of the present disclosure;
the robot escape procedure comprises the following steps:
step S701: acquiring first stay time of the robot in a preset range of a robot road sign;
specifically, the preset range refers to a distance range, for example: within a first distance range from the robot landmark, for example: in a certain direction, a certain robot enters a range of a first distance from a robot road sign, and the staying time of the certain robot is taken as the first staying time of the certain robot in a preset range of the robot road sign.
Step S702: if the first residence time is larger than a first time threshold value, determining that the robot is in danger;
specifically, it is determined whether the first dwell time is greater than a first time threshold. If the first residence time is larger than a first time threshold value, determining that the robot is in danger; and if the first residence time is not greater than the first time threshold value, determining that the robot normally walks. In an embodiment of the present application, the first time threshold is proportional to a size of a preset range and/or a current speed of the robot, where the size of the preset range is characterized by a first distance, the preset range is a range where a circle with a radius and a road sign of the robot are located, for example: the first time threshold is a first distance, a first weight factor + the current speed of the robot, a second weight factor, such as: the first distance is 10m, the first weight coefficient is 5, the current speed of the robot is 1m/s, the second weight coefficient is 50, and the first time threshold is 100 s.
Step S703: after the robot is determined to be in distress, acquiring a second dwell time of the robot within the preset range of the robot road sign;
specifically, the second dwell time is the period from when the robot is determined to be in distress until the robot leaves the preset range of the robot road sign.
Step S704: if the second dwell time is greater than a second time threshold, determining that the robot cannot escape on its own;
specifically, it is judged whether the second dwell time is greater than the second time threshold. If so, the robot is determined to be unable to escape on its own; if not, the robot is determined to be able to escape on its own. The second time threshold is positively correlated with the first time threshold, for example: second time threshold = first time threshold × proportional coefficient; with a first time threshold of 100 s and a proportional coefficient of 0.8, the second time threshold is 80 s.
Step S705: sending a distress signal to a terminal in communication connection with the robot, so as to notify a technician corresponding to the terminal to assist the robot in escaping.
Specifically, if the second dwell time is greater than the second time threshold, the robot cannot escape on its own; it then sends a distress signal to a terminal in communication connection with it, to notify the corresponding technician to assist it in escaping.
In the embodiment of the application, the first time threshold is used to judge whether the robot is in distress, and the second time threshold is then used to judge whether the robot can escape on its own and hence whether a technician should be notified. This prevents the robot from being stuck and unable to advance after getting into distress and improves the operational safety of the robot.
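Putting the two thresholds together with the worked numbers from the text (10 m × 5 + 1 m/s × 50 = 100 s, then 100 s × 0.8 = 80 s) gives the following sketch; treating the second dwell time as the time elapsed beyond the first threshold is an assumption made for illustration.

```python
# Distress detection: first threshold flags distress, second threshold
# (a fraction of the first) decides when to send a distress signal.
def first_time_threshold(first_distance_m, speed_mps, w1=5, w2=50):
    return first_distance_m * w1 + speed_mps * w2      # e.g. 10*5 + 1*50 = 100 s

def check_distress(dwell_s, first_distance_m=10.0, speed_mps=1.0, ratio=0.8):
    t1 = first_time_threshold(first_distance_m, speed_mps)   # 100 s
    t2 = t1 * ratio                                          # 80 s
    if dwell_s <= t1:
        return "normal"
    if dwell_s - t1 <= t2:                 # second dwell time, after distress
        return "in_distress"               # still trying to escape on its own
    return "send_distress_signal"          # notify a technician via the terminal

assert check_distress(50) == "normal"
assert check_distress(150) == "in_distress"
assert check_distress(200) == "send_distress_signal"
```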
Referring to fig. 8 again, fig. 8 is an interaction sequence diagram of a server, a robot, and a mobile terminal according to an embodiment of the present disclosure;
as shown in fig. 8, the interactive process of the server, the robot, and the mobile terminal includes:
step S801: sending an environment map;
specifically, the server sends an environment map to the robot, wherein the environment map comprises a street map, a road planning map and the like, and the environment map is used for the robot to navigate. In some embodiments, the server further sends the environment map to the mobile terminal, so that the mobile terminal generates a travel instruction based on the environment map to control the robot to travel.
Step S802: receiving and storing an environment map;
specifically, the robot receives and stores the environment map sent by the server. It can be understood that, since the environment map is continuously updated, the robot stores only the latest environment map; that is, it deletes the historical environment map after receiving the current environment map sent by the server, thereby saving storage space in the robot's storage unit.
Step S803: sending a traveling instruction;
specifically, the mobile terminal transmits a travel instruction to the robot so that the robot receives the travel instruction.
Step S804: analyzing the traveling instruction to obtain an original path;
specifically, the robot receives the traveling instruction sent by the mobile terminal and analyzes it to obtain the path information of the original path, which includes start point information, end point information, and path direction information. Further, in order to make the robot's travel more accurate, the path information of the original path also includes link information for each road link, namely the position, length, width, etc. of each link.
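Illustrative data structures for the parsed traveling instruction; the field names are assumptions based on the description above, not structures disclosed by the application.

```python
# Parsed path information: start/end points, heading, and per-link details.
from dataclasses import dataclass, field

@dataclass
class RoadLink:
    position: tuple[float, float]    # e.g. centre point of the link
    length_m: float
    width_m: float

@dataclass
class OriginalPath:
    start: tuple[float, float]       # start point information
    end: tuple[float, float]         # end point information
    direction_rad: float             # path direction information (heading)
    links: list[RoadLink] = field(default_factory=list)

path = OriginalPath(start=(0.0, 0.0), end=(25.0, 40.0), direction_rad=1.0,
                    links=[RoadLink((10.0, 15.0), 18.0, 2.5)])
```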
Step S805: entering a traveling process;
specifically, the controller controls the robot to enter the traveling process, for example: controlling the driving wheel assembly of the robot to move, so as to control the robot to enter the traveling process.
Step S806: walking based on the environment map and the original path.
Specifically, the robot is controlled to walk based on the environment map and the original path, for example, from the current position of the robot to the target position of the robot.
In the embodiment of the application, the server sends the environment map to the robot, which receives and stores it; the mobile terminal then sends a traveling instruction to the robot, and after receiving the instruction the robot analyzes it to obtain the original path, so that the robot can walk based on the original path.
In an embodiment of the application, the robot road sign comprises a first communication module, the robot comprises a second communication module, and the method further comprises:
when the distance between the robot and the robot road sign is smaller than a first distance threshold, the first communication module of the robot road sign communicates with the second communication module of the robot;
the second communication module of the robot receives the identification signal sent by the first communication module of the robot road sign, and acquires first indication information indicated by the robot road sign according to the identification signal.
Specifically, referring to fig. 9, fig. 9 is a schematic diagram illustrating communication between a robot and a robot road sign according to an embodiment of the present disclosure;
as shown in fig. 9, the robot road sign 901 includes a first communication module 9011, and the robot 902 includes a second communication module 9021, where the first communication module 9011 and the second communication module 9021 are communicatively connected.
Specifically, the robot 902 further includes a camera unit. When the distance between the robot 902 and the robot road sign 901 is smaller than the first distance threshold, the first communication module 9011 of the robot road sign 901 communicates with the second communication module 9021 of the robot 902. In this embodiment of the application, the first distance threshold is positively correlated with the signal transmission distance between the first communication module 9011 and the second communication module 9021, for example: the first distance threshold is set to 70% of the maximum signal transmission distance between the first communication module 9011 and the second communication module 9021, so if the maximum signal transmission distance is 5 m, the first distance threshold is 3.5 m.
The second communication module 9021 of the robot 902 receives the identification signal sent by the first communication module 9011 of the robot road sign 901, and acquires the first indication information indicated by the robot road sign 901 according to the identification signal.
Specifically, after the robot receives the identification signal sent by the first communication module of the robot road sign, it controls the camera unit to scan the environment information and locate the robot road sign, shoots a road sign image of the robot road sign, and performs image recognition on the road sign image to acquire the first indication information indicated by the robot road sign.
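The proximity-triggered flow above can be sketched as follows in Python. The Camera stand-in and the recognise() helper are invented placeholders, not a real device API; only the 5 m range and 70% factor come from the example in the text.

```python
from typing import Optional

MAX_SIGNAL_RANGE_M = 5.0
FIRST_DISTANCE_THRESHOLD_M = 0.7 * MAX_SIGNAL_RANGE_M   # 3.5 m, per the example

class Camera:
    def capture(self) -> bytes:
        return b"<road sign image>"       # stand-in for a real frame grab

def recognise(image: bytes) -> str:
    return "temporary_no_pass"            # stand-in for real image recognition

def on_identification_signal(distance_m: float, camera: Camera) -> Optional[str]:
    """Handle the road sign's identification signal once the robot is in range."""
    if distance_m >= FIRST_DISTANCE_THRESHOLD_M:
        return None                       # modules not yet communicating
    image = camera.capture()              # locate the sign and shoot its image
    return recognise(image)               # first indication information

print(on_identification_signal(3.0, Camera()))   # temporary_no_pass
```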
In an embodiment of the present application, there is provided a robot navigation method including: judging whether a robot road sign is detected during traveling of the robot; if the robot road sign is detected, acquiring first indication information indicated by the robot road sign; replanning a path according to the first indication information to obtain a replanned target path; and controlling the robot to continue to travel based on the target path. By detecting the robot road sign, the robot acquires the first indication information indicated by the road sign, replans its path accordingly, and continues to travel based on the replanned target path.
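The four steps can be put together in one compact, self-contained Python sketch. The SimRobot stub, its waypoint list, and the hard-coded detour are purely illustrative stand-ins for the detection, recognition, and planning machinery described above.

```python
class SimRobot:
    def __init__(self):
        self.path = ["A", "B", "C"]            # original path as waypoints
        self.pos = 0
        self.signs = {1: "temporary_no_pass"}  # road sign met at waypoint 1

    def at_target(self):
        return self.pos >= len(self.path) - 1

    def landmark_detected(self):
        return self.pos in self.signs          # step 1: road sign detected?

    def read_indication(self):
        return self.signs.pop(self.pos)        # step 2: first indication info

    def replan(self, info):
        # step 3: a real planner would branch on info; here we simply detour
        return self.path[: self.pos + 1] + ["B'"] + self.path[self.pos + 1 :]

robot = SimRobot()
while not robot.at_target():
    robot.pos += 1                             # step 4: continue travelling
    if robot.landmark_detected():
        robot.path = robot.replan(robot.read_indication())
print(robot.path)                              # ['A', 'B', "B'", 'C']
```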
Referring to fig. 10, fig. 10 is a schematic structural diagram of a robot navigation device according to an embodiment of the present disclosure;
the robot navigation device is applied to a robot, and particularly applied to one or more processors of the robot.
As shown in fig. 10, the robot navigation device 101 includes:
a judging unit 1011, configured to judge whether a robot road sign is detected during the robot traveling process;
an obtaining unit 1012, configured to obtain first indication information indicated by a robot road sign if the robot road sign is detected;
a planning unit 1013 configured to re-plan a path according to the first indication information to obtain a re-planned target path;
and a traveling unit 1014 for controlling the robot to continue traveling based on the target path.
In this embodiment of the present application, the first indication information includes traffic information, temporary no-traffic information, and permanent no-traffic information, and the planning unit 1013 is specifically configured to:
if the first indication information is the traffic information, planning a first path, and taking the first path as a target path, wherein the first path comprises a road section corresponding to the traffic information;
if the first indication information is temporary no-pass information, planning a second path, taking the second path as a target path, and executing corresponding actions according to subsequently acquired information indicated by the robot road sign;
if the first indication information is the long-term no-entry information, a third path is planned and taken as a target path, wherein the third path does not include a road section corresponding to the long-term no-entry information.
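The following Python sketch shows this three-way dispatch on the first indication information. The string labels and the plan_* callables are illustrative assumptions standing in for the planning routines described above.

```python
def replan(indication: str, plan_first, plan_second, plan_third):
    if indication == "pass":
        return plan_first()          # first path includes the permitted section
    if indication == "temporary_no_pass":
        return plan_second()         # second path: queue and await further info
    if indication == "long_term_no_entry":
        return plan_third()          # third path avoids the closed section
    raise ValueError(f"unknown indication: {indication}")

# Usage with trivial stand-ins for the three planners.
print(replan("pass", lambda: ["s1", "s2"], lambda: ["queue"], lambda: ["s3"]))
```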
In the embodiment of the application, the pass information comprises a first road section and a traveling direction;
planning a first path comprises (see the sketch after this list):
acquiring a global map;
planning a plurality of sub-paths along the traveling direction between the current position of the robot and the target position to be reached by the robot according to the global map;
and determining a target path from the plurality of sub-paths according to the first road section, wherein the target path passes through the first road section.
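A minimal Python sketch of the sub-path selection step follows. It assumes each candidate sub-path is a list of road-section identifiers planned along the traveling direction; the candidate paths and identifiers are invented for illustration.

```python
from typing import List, Optional

def choose_target_path(sub_paths: List[List[str]],
                       first_section: str) -> Optional[List[str]]:
    for path in sub_paths:
        if first_section in path:    # target path must pass the first road section
            return path
    return None                      # no candidate traverses the first section

# Three candidates between the current position and the target position.
candidates = [["s1", "s2", "s5"], ["s1", "s3", "s5"], ["s1", "s4", "s5"]]
print(choose_target_path(candidates, "s3"))   # ['s1', 's3', 's5']
```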
In the embodiment of the present application, the planning unit 1013 is further configured to perform the following (sketched in code after this list):
planning a second path according to queuing position information sent by the robot road sign to enable the robot to enter a queuing waiting mode;
and when the robot is in a queuing waiting mode, continuously acquiring second indication information indicated by the robot road sign, and sequentially passing according to the second indication information, wherein the second indication information is queuing information for sequentially passing the robot.
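The queue-and-wait behaviour can be sketched as follows in Python. The robot methods, the proceed_id message field, and the one-second polling interval are all assumptions made for illustration; the disclosure only states that the robot queues at the given position and passes in order.

```python
import time

def queue_and_pass(robot, queue_position, poll_sign, interval_s: float = 1.0):
    robot.move_to(queue_position)            # second path ends at the queue slot
    while True:                              # queuing-wait mode
        second_info = poll_sign()            # continuously acquire indication
        if second_info.get("proceed_id") == robot.robot_id:
            robot.resume()                   # pass in the ordered sequence
            return
        time.sleep(interval_s)
```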
In the embodiment of the present application, the robot road sign includes a first communication module, the robot includes a second communication module, and the obtaining unit 1012 is further configured to:
when the distance between the robot and the robot road sign is smaller than a first distance threshold, the first communication module of the robot road sign communicates with the second communication module of the robot;
and when the identification signal sent by the first communication module of the robot road sign is received, determining that the robot road sign is detected.
In the embodiment of the present application, the robot further includes a camera unit, and the obtaining unit 1012 is further configured to:
if the robot road sign is detected according to the received identification signal, acquire a road sign image of the robot road sign through the camera unit;
and acquire first indication information indicated by the robot road sign according to the road sign image.
In this embodiment of the application, the obtaining unit 1012 is specifically configured to:
extracting the direction identifications and/or text contents displayed on the road sign image;
and determining the first indication information indicated by the robot road sign according to the extracted direction identifications and/or text contents, as sketched below.
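A hedged Python sketch of this text-to-indication mapping follows. The keyword table and default value are invented examples; in practice the extraction step would be an OCR or vision model, which is not shown here.

```python
# Prohibition keywords are checked before the generic "pass".
KEYWORD_TO_INDICATION = {
    "no entry": "long_term_no_entry",
    "closed": "long_term_no_entry",
    "wait": "temporary_no_pass",
    "queue": "temporary_no_pass",
    "pass": "pass",
}

def indication_from_contents(contents, default: str = "pass") -> str:
    for text in contents:
        lowered = text.lower()
        for keyword, indication in KEYWORD_TO_INDICATION.items():
            if keyword in lowered:
                return indication
    return default

print(indication_from_contents(["ROAD CLOSED ahead"]))   # long_term_no_entry
```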
In the embodiment of the present application, the traveling unit 1014 is further configured to:
acquiring first stay time of the robot in a preset range of a robot road sign;
if the first stay time is larger than a first time threshold, determining that the robot is in danger;
after the robot is determined to be in danger, acquiring second stay time of the robot within a preset range of a robot road sign;
and if the second stay time is greater than the second time threshold, determining that the robot cannot escape safely, and sending a distress signal to a terminal in communication connection with the robot so as to notify a technician corresponding to the terminal to assist in escaping.
In the embodiment of the present application, the robot navigation device may also be built from hardware components; for example, it may be built from one or more chips that work in coordination to complete the robot navigation method described in the above embodiments. As another example, the robot navigation device may be built from various types of logic devices, such as a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), a single-chip microcomputer, an ARM (Acorn RISC Machine) processor or other programmable logic devices, discrete gate or transistor logic, discrete hardware components, or any combination of these components.
The robot navigation device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like; the embodiments of the present application are not particularly limited in this regard.
The robot navigation device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The robot navigation device provided by the embodiment of the present application can implement each process shown in fig. 2; details are not repeated here to avoid repetition.
The robot navigation device may perform the robot navigation method provided by the embodiments of the present application, and has functional modules and beneficial effects corresponding to the performed method. For technical details not described in detail in the embodiments of the robot navigation device, reference may be made to the robot navigation method provided in the above embodiments.
In an embodiment of the present application, there is provided a robot navigation device including: a judging unit configured to judge whether a robot road sign is detected during traveling of the robot; an obtaining unit configured to obtain first indication information indicated by the robot road sign if the robot road sign is detected; a planning unit configured to replan a path according to the first indication information to obtain a replanned target path; and a traveling unit configured to control the robot to continue traveling based on the target path. By detecting the robot road sign, the device acquires the first indication information indicated by the road sign, replans the path accordingly, and controls the robot to continue to travel based on the replanned target path.
Referring to fig. 11, fig. 11 is a schematic structural diagram of a robot according to an embodiment of the present disclosure;
as shown in fig. 11, the robot 110 includes one or more processors 111 and memory 112. In fig. 11, one processor 111 is taken as an example.
The processor 111 and the memory 112 may be connected by a bus or other means, such as the bus connection in fig. 11.
The processor 111 provides computing and control capabilities for controlling the robot 110 to perform corresponding tasks; for example, the robot 110 may be controlled to perform the robot navigation method in any of the above method embodiments, which includes: judging whether a robot road sign is detected during traveling of the robot; if the robot road sign is detected, acquiring first indication information indicated by the robot road sign; replanning a path according to the first indication information to obtain a replanned target path; and controlling the robot to continue to travel based on the target path.
By detecting the robot road sign, the robot obtains the first indication information indicated by the road sign, replans its path, and continues to travel based on the replanned target path.
The processor 111 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), a hardware chip, or any combination thereof; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Programmable Logic Device (PLD), or a combination thereof. The PLD may be a Complex Programmable Logic Device (CPLD), a Field-Programmable Gate Array (FPGA), Generic Array Logic (GAL), or any combination thereof.
The memory 112, as a non-transitory computer-readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions/modules corresponding to the robot navigation method in the embodiments of the present application. The processor 111 may implement the robot navigation method in any of the method embodiments described above by running the non-transitory software programs, instructions, and modules stored in the memory 112. In particular, the memory 112 may include Volatile Memory (VM), such as Random Access Memory (RAM); the memory 112 may also include Non-Volatile Memory (NVM), such as Read-Only Memory (ROM), flash memory, a Hard Disk Drive (HDD), a Solid-State Drive (SSD), or other non-transitory solid-state memory devices; the memory 112 may also comprise a combination of the above types of memory.
The memory 112 may include high speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, the memory 112 may optionally include memory located remotely from the processor 111, which may be connected to the processor 111 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
One or more modules are stored in the memory 112, which when executed by the one or more processors 111 perform the robot navigation method of any of the above-described method embodiments, e.g., perform the various steps illustrated in fig. 2 described above; the functions of the respective modules or units of fig. 10 may also be implemented.
In this embodiment of the application, the robot 110 may further include a wired or wireless network interface, a keyboard, an input/output interface, and other components to facilitate input and output, and the robot 110 may further include other components for implementing functions of the device, which is not described herein again.
The robot of the embodiment of the present application exists in various forms, including but not limited to a cleaning robot, a service robot, a remote monitoring robot, a sweeping robot, and the like; it performs the respective steps shown in fig. 2 and may also implement the functions of the various units of fig. 10.
Embodiments of the present application also provide a computer-readable storage medium, such as a memory, including program code executable by a processor to perform the robot navigation method in the above embodiments. For example, the computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a Compact Disc Read-Only Memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, and the like.
Embodiments of the present application also provide a computer program product including one or more program codes stored in a computer readable storage medium. The processor of the electronic device reads the program code from the computer-readable storage medium, and the processor executes the program code to perform the method steps of the robot navigation method provided in the above-described embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be completed by hardware, or by program code instructing relevant hardware, and the program may be stored in a computer-readable storage medium, where the above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like.
Through the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by software plus a general hardware platform, or by hardware alone. Those skilled in the art will also understand that all or part of the processes of the methods of the above embodiments can be implemented by a computer program instructing relevant hardware; the program can be stored in a computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; within the context of the present application, features from the above embodiments or from different embodiments may also be combined, steps may be implemented in any order, and there are many other variations of the different aspects of the present application as described above, which are not provided in detail for the sake of brevity; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. A robot navigation method, comprising:
judging whether a robot road sign is detected or not in the process of robot traveling;
if the robot road sign is detected, acquiring first indication information indicated by the robot road sign;
replanning a path according to the first indication information to obtain a replanned target path;
controlling the robot to continue traveling based on the target path.
2. The method according to claim 1, wherein the replanning a path according to the first indication information to obtain a replanned target path comprises:
if the first indication information is pass information, planning a first path, and taking the first path as a target path, wherein the first path comprises a road section corresponding to the pass information;
if the first indication information is temporary no-pass information, planning a second path, taking the second path as a target path, and executing corresponding actions according to subsequently acquired information indicated by the robot road sign;
if the first indication information is long-term no-entry information, planning a third path, and taking the third path as a target path, wherein the third path does not include a road section corresponding to the long-term no-entry information.
3. The method of claim 2, wherein the pass information comprises a first road section and a traveling direction;
the planning a first path includes:
acquiring a global map;
planning a plurality of sub paths along the traveling direction between the current position of the robot and the target position to be reached by the robot according to the global map;
and determining a target path from the plurality of sub-paths according to the first road section, wherein the target path passes through the first road section.
4. The method of claim 2, wherein the planning a second path, taking the second path as a target path, and performing corresponding actions according to subsequently acquired information indicated by the robot road sign comprises:
planning a second path according to queuing position information sent by the robot road sign to enable the robot to enter a queuing waiting mode;
and when the robot is in a queuing waiting mode, continuously acquiring second indication information indicated by the robot road sign, and sequentially passing according to the second indication information, wherein the second indication information is queuing information for sequentially passing the robot.
5. The method of claim 1, wherein the robot road sign comprises a first communication module, and the robot comprises a second communication module;
the process of marcing at the robot, judge whether detect the robot road sign, include:
when the distance between the robot and the robot road sign is smaller than a first distance threshold value, a first communication module of the robot road sign communicates with a second communication module of the robot;
and when the identification signal sent by the first communication module of the robot road sign is received, determining that the robot road sign is detected.
6. The method of claim 5, wherein the robot further comprises a camera unit;
if the robot road sign is detected, acquiring first indication information indicated by the robot road sign, wherein the first indication information comprises:
if the robot road sign is detected according to the received identification signal, acquiring a road sign image of the robot road sign through the camera unit;
and acquiring first indication information indicated by the robot road sign according to the road sign image.
7. The method of claim 6, wherein the acquiring first indication information indicated by the robot road sign according to the road sign image comprises:
extracting the direction identification and/or text content displayed on the road sign image;
and determining the first indication information indicated by the robot road sign according to the extracted direction identification and/or text content.
8. The method according to any one of claims 1-7, further comprising:
acquiring first stay time of the robot in a preset range of the robot road sign;
if the first stay time is greater than a first time threshold, determining that the robot is in danger;
after the robot is determined to be in danger, acquiring second stay time of the robot within a preset range of a robot road sign;
and if the second stay time is greater than a second time threshold, determining that the robot cannot escape safely, and sending a distress signal to a terminal in communication connection with the robot to notify a technician corresponding to the terminal to assist in escaping from the danger.
9. A robotic navigation device, comprising:
the judging unit is used for judging whether a robot road sign is detected or not in the advancing process of the robot;
the system comprises an acquisition unit, a display unit and a control unit, wherein the acquisition unit is used for acquiring first indication information indicated by a robot road sign if the robot road sign is detected;
the planning unit is used for replanning the path according to the first indication information so as to obtain a replanned target path;
and the traveling unit is used for controlling the robot to continue traveling based on the target path.
10. A robot, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the robot navigation method of any one of claims 1-8.