
CN117796711B - Method for processing escape of robot, robot and storage medium - Google Patents

Method for processing escape of robot, robot and storage medium

Info

Publication number
CN117796711B
Authority
CN
China
Prior art keywords
robot
scheduling
strategy
path
trapped
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410226556.6A
Other languages
Chinese (zh)
Other versions
CN117796711A (en)
Inventor
葛科迪
汪鹏飞
马子昂
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Huacheng Software Technology Co Ltd
Original Assignee
Hangzhou Huacheng Software Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Huacheng Software Technology Co Ltd
Priority to CN202410226556.6A
Publication of CN117796711A
Application granted
Publication of CN117796711B
Legal status: Active (current)
Anticipated expiration

Classifications

    • A: HUMAN NECESSITIES
    • A47: FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L: DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L11/00: Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L11/24: Floor-sweeping machines, motor-driven
    • A: HUMAN NECESSITIES
    • A01: AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01D: HARVESTING; MOWING
    • A01D34/00: Mowers; Mowing apparatus of harvesters
    • A01D34/006: Control or measuring arrangements
    • A01D34/008: Control or measuring arrangements for automated or remotely controlled operation
    • A: HUMAN NECESSITIES
    • A47: FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L: DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L11/00: Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L11/40: Parts or details of machines not provided for in groups A47L11/02 - A47L11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers, levers
    • A47L11/4002: Installations of electric equipment
    • A: HUMAN NECESSITIES
    • A47: FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L: DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L11/00: Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L11/40: Parts or details of machines not provided for in groups A47L11/02 - A47L11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers, levers
    • A47L11/4061: Steering means; Means for avoiding obstacles; Details related to the place where the driver is accommodated
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B25J9/1602: Programme controls characterised by the control system, structure, architecture
    • B25J9/161: Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B25J9/1656: Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664: Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B25J9/1674: Programme controls characterised by safety, monitoring, diagnostic
    • A: HUMAN NECESSITIES
    • A47: FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L: DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L2201/00: Robotic cleaning machines, i.e. with automatic control of the travelling movement or the cleaning operation
    • A: HUMAN NECESSITIES
    • A47: FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L: DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L2201/00: Robotic cleaning machines, i.e. with automatic control of the travelling movement or the cleaning operation
    • A47L2201/04: Automatic control of the travelling movement; Automatic obstacle detection
    • A: HUMAN NECESSITIES
    • A47: FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L: DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L2201/00: Robotic cleaning machines, i.e. with automatic control of the travelling movement or the cleaning operation
    • A47L2201/06: Control of the cleaning action for autonomous devices; Automatic detection of the surface condition before, during or after cleaning

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Robotics (AREA)
  • Automation & Control Theory (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • Environmental Sciences (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The application discloses a method for handling the escape of a trapped robot, which comprises the following steps: the robot obtains its current state while executing a task; in response to the current state being a trapped state, the robot determines an escape strategy; the robot then uses the escape strategy to free itself so as to continue executing the task. The escape strategy includes a scene escape strategy, which is determined based on the trapped scene to which the trapped state belongs. The application also discloses a robot and a storage medium. The application improves the success rate and efficiency of robot escape.

Description

Method for processing escape of robot, robot and storage medium
Technical Field
The disclosed embodiments of the present application relate to the field of robotics, and more particularly, to a method of handling the escape of a robot, a robot, and a storage medium.
Background
With the development of information technology and people's rising quality-of-life requirements, smart home products have gradually entered daily life, among which self-moving robots, as representative products, are increasingly favored. A self-moving robot is a device that uses artificial intelligence techniques to automatically perform preset work in a prescribed area, such as an intelligent mower, a service robot, or a sweeping robot. However, due to limited preset map precision, obstacle mis-mapping, and other causes, some regions that the robot could actually pass through may be identified as obstacle regions, so that the robot cannot generate a path and becomes trapped.
Disclosure of Invention
According to embodiments of the application, the application provides a method for handling the escape of a robot, a robot, and a storage medium, so as to solve the above problems.
The first aspect of the application discloses a method for handling the escape of a robot, which comprises the following steps: the robot obtains its current state while executing a task; in response to the current state being a trapped state, the robot determines an escape strategy; the robot uses the escape strategy to free itself so as to continue executing the task. The escape strategy includes a scene escape strategy, which is determined based on the trapped scene to which the trapped state belongs.
In some embodiments, the escape strategy further comprises a historical path escape strategy, which is determined based on a historical backtracking path of the robot. The robot using the escape strategy to free itself so as to continue executing the task comprises: performing an escape operation using the historical path escape strategy; in response to escape using the historical path escape strategy failing, performing an escape operation using the scene escape strategy so as to continue executing the task; and in response to escape using the historical path escape strategy succeeding, continuing to execute the task.
In some embodiments, obtaining the current state of the robot while executing the task comprises: obtaining the displacement of the robot over a preset unit time ending at the current moment; in response to the displacement being smaller than a preset displacement value, determining that the current state of the robot is a trapped state; in response to the displacement being larger than the preset displacement value, acquiring the current position of the robot and a scheduling target point; and in response to the current position of the robot and the scheduling target point not being in the same connected domain, determining that the current state of the robot is a trapped state.
In some embodiments, the historical backtracking path of the robot is characterized by there being, in the global graph of historical backtracking paths, a connecting edge between two endpoints located respectively in the connected domain and in a non-connected domain, where the robot is located in the connected domain. Performing the escape operation using the historical path escape strategy comprises: acquiring a candidate backtracking path from a candidate escape path set of the robot, wherein the candidate escape path set comprises the historical backtracking path; and executing point-by-point motion along the candidate backtracking path using a planning and scheduling method so as to free the robot.
In some embodiments, executing point-by-point motion along the candidate backtracking path to free the robot comprises: in response to the robot colliding with an obstacle during the point-by-point motion, performing a preset collision action; and in response to the robot not colliding with any obstacle during the point-by-point motion, the robot escapes successfully.
In some embodiments, in response to completion of the preset collision action, it is determined whether the robot has left the connected domain; in response to the robot having left the connected domain, the robot escapes successfully; and in response to the robot not having left the connected domain, another candidate backtracking path in the candidate escape path set of the robot is acquired, so as to execute the point-by-point motion to free the robot.
In some embodiments, identifying the trapped scene comprises: acquiring the area of the connected domain where the robot is located; determining that the trapped scene is a first trapped scene in response to the area being smaller than a first preset value; determining that the trapped scene is a second trapped scene in response to the area being larger than the first preset value and smaller than a second preset value; and determining that the trapped scene is a third trapped scene in response to the area being larger than the second preset value.
In some embodiments, the scene escape strategy comprises at least one of a perception-information edge strategy, a front-back interactive obstacle crossing strategy, a hierarchical expansion map path search scheduling strategy, an edge motion strategy, a sprint scheduling strategy, a left-right interactive escape strategy, an edge-first-then-schedule strategy, a door opening exploration scheduling strategy, and a surround exploration scheduling strategy.
In some embodiments, the escape operation corresponding to the perception-information edge strategy comprises: receiving perception sensor information to perform short-distance edge-following motion; and in response to the robot leaving the current connected domain within a first preset time interval, the robot escapes successfully and the current connected domain is set as a forbidden zone in the planning map. And/or, the escape operation corresponding to the front-back interactive obstacle crossing strategy comprises: within a second preset time interval, repeatedly and sequentially driving a first driving wheel of the robot to retreat by a first step, a second driving wheel of the robot to advance by a second step, the first driving wheel to retreat by a third step, and the second driving wheel to advance by a fourth step, so as to free the robot. And/or, the escape operation corresponding to the hierarchical expansion map path search scheduling strategy comprises: obtaining perception sensor information to construct a local map; performing hierarchical expansion on the local map until a scheduling path is found, wherein the scheduling target point in the scheduling path is any point outside the current connected domain; and performing an escape operation based on the scheduling path so as to free the robot.
In some embodiments, the escape operation corresponding to the edge motion strategy comprises: receiving perception sensor information to perform collision-free edge-following motion; and in response to the robot leaving the current connected domain within a third preset time interval, the robot escapes successfully. And/or, the escape operation corresponding to the sprint scheduling strategy comprises: clearing obstacles carrying height-difference information from the planning map; and retreating the robot by a preset distance and performing scheduling planning based on the cleared planning map, so as to free the robot. And/or, the escape operation corresponding to the left-right interactive escape strategy comprises: determining the escape direction of a narrow area from the perception sensor information, with the left and right driving mechanisms of the robot alternately rotating in opposite directions so as to free the robot. And/or, the escape operation corresponding to the edge-first-then-schedule strategy comprises: the robot performs a preset edge-following motion; after the preset edge-following motion is completed and the robot has failed to leave the current connected domain, collision or down-looking obstacle information is cleared from the planning map; and scheduling planning is performed based on the cleared planning map so as to free the robot.
In some embodiments, the escape operation corresponding to the door opening exploration scheduling strategy comprises: performing a single scheduling escape; in response to the single scheduling failing, scheduling the robot to the point closest to a door in the current connected domain, and updating door-area obstacle information to judge whether a scheduling path exists that leaves the current connected domain; and performing scheduling planning based on that scheduling path so as to free the robot. And/or, the escape operation corresponding to the surround exploration scheduling strategy comprises: performing a single scheduling escape; in response to the single scheduling failing, selecting a preset number of scheduling points in the current connected domain, wherein the scheduling points are uniformly distributed around the connected domain; and scheduling sequentially based on the scheduling points so as to determine a scheduling path and thereby free the robot.
The second aspect of the application discloses a robot, which comprises a memory and a processor coupled to each other, the processor being configured to execute program instructions stored in the memory so as to implement the escape handling method of the first aspect.
A third aspect of the present application discloses a non-transitory computer-readable storage medium having stored thereon program instructions which, when executed by a processor, implement the escape handling method described in the first aspect.
The beneficial effects of the application are as follows: the current state of the robot while executing a task is obtained; in response to the current state being a trapped state, the robot determines an escape strategy and then uses it to free itself so as to continue executing the task. Since the escape strategy includes a scene escape strategy determined based on the trapped scene to which the trapped state belongs, scene-customized escape is realized, and the escape success rate and efficiency of the robot are improved.
Drawings
The application will be further described with reference to the accompanying drawings and embodiments, in which:
Fig. 1 is a flow chart of a method for handling the escape of a robot according to an embodiment of the application;
Fig. 2 is a schematic flow chart of the escape handling of a robot according to an embodiment of the application;
Fig. 3 is a schematic structural view of a robot according to an embodiment of the present application;
Fig. 4 is a schematic structural view of a non-transitory computer-readable storage medium according to an embodiment of the present application.
Detailed Description
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification do not necessarily all refer to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those skilled in the art will understand, explicitly and implicitly, that the embodiments described herein may be combined with other embodiments.
The term "and/or" in the present application is merely an association relation describing the association object, and indicates that three kinds of relations may exist, for example, a and/or B may indicate: a exists alone, A and B exist together, and B exists alone. In addition, the character "/" herein generally indicates that the front and rear associated objects are an "or" relationship. Further, "a plurality" herein means two or more than two. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, may mean including any one or more elements selected from the group consisting of A, B and C. Furthermore, the terms "first," "second," and "third" in this disclosure are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated.
In order to make the technical scheme of the present application better understood by those skilled in the art, the technical scheme of the present application will be further described in detail with reference to the accompanying drawings and the detailed description.
Referring to fig. 1, fig. 1 is a flow chart of a method for handling the escape of a robot according to an embodiment of the application. The executing subject of the method may be a robot with computing capability, such as an intelligent mower, a service robot, or a sweeping robot.
It should be noted that, as long as substantially the same results are obtained, the method of the present application is not limited to the flow sequence shown in fig. 1.
In some possible implementations, the method may be implemented by a processor invoking computer-readable instructions stored in a memory and, as shown in fig. 1, may include the following steps:
s11: the robot obtains a current state of the robot when executing the task.
The robot may be a self-moving robot, i.e., a device that uses artificial intelligence technology to automatically perform preset work in a prescribed area, such as an intelligent mower, a service robot, or a sweeping robot. Obtaining the current state of the robot while it executes a task means that, while the robot executes an issued task, for example working along a preset path, its current state is obtained, where the current state is either a trapped state or a non-trapped state.
S12: in response to the current state being a trapped state, the robot determines an escape strategy.
In response to the current state being the trapped state, that is, upon determining that the current state of the robot executing the task is the trapped state, the robot determines and acquires an escape strategy, where the escape strategy is used to help the trapped robot free itself in different trapped scenes.
S13: the robot uses the escape strategy to free itself so as to continue executing the task, wherein the escape strategy comprises a scene escape strategy, and the scene escape strategy is determined based on the trapped scene to which the trapped state belongs.
The robot uses the determined escape strategy to free itself. The escape strategy comprises a scene escape strategy, which is determined based on the trapped scene of the robot; that is, different trapped scenes correspond to different scene escape strategies. For example, if the robot is in trapped state a and the trapped scene corresponding to state a is A, the robot determines escape strategy A1 based on trapped scene A and then uses strategy A1 to escape.
In this embodiment, the current state of the robot while executing the task is obtained; in response to the current state being a trapped state, the robot determines an escape strategy and uses it to free itself so as to continue executing the task. Since the escape strategy includes a scene escape strategy determined based on the trapped scene to which the trapped state belongs, scene-customized escape is realized, and the escape success rate and efficiency of the robot are improved.
In some embodiments, obtaining the current state of the robot while executing the task comprises: acquiring the displacement of the robot over a preset unit time ending at the current moment; in response to the displacement being smaller than a preset displacement value, determining that the current state of the robot is a trapped state; and/or, in response to the displacement being larger than the preset displacement value, acquiring the current position of the robot and the scheduling target point; and in response to the current position of the robot and the scheduling target point not being in the same connected domain, determining that the current state of the robot is a trapped state.
The displacement of the robot over a preset unit time ending at the current moment is acquired, and in response to the displacement being smaller than a preset displacement value, the current state of the robot is determined to be a trapped state. For example, the displacement d of the robot over a time interval t ending at the current moment is calculated; when d is smaller than a displacement threshold X1, the robot is determined to be trapped at the current moment, and the small area covered during the interval may be marked as obstacle information on the planning map, where the time interval t and the displacement threshold X1 can be set according to the use scenario. In response to the displacement being larger than the preset displacement value, e.g., when d is larger than X1, the current position of the robot and the scheduling target point are acquired, and in response to the two not being in the same connected domain, the current state of the robot is determined to be a trapped state; that is, it is judged on the planning map whether the current position of the robot and the scheduling target point lie in the same connected domain, and if not, the robot is considered currently trapped, where a connected domain can be understood as a space/room in which the robot can operate.
It can be understood that the robot performs trapping detection while executing tasks, and trapping is divided into narrow-sense trapping and generalized trapping: narrow-sense trapping means the robot only moves within a narrow area per unit of time, while generalized trapping means the robot's scheduling target point is not in the same connected domain as the robot's position. This makes the trapping detection more universal and robust, which further improves the robot's subsequent escape success rate.
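To make the two-pronged detection concrete, the following is a minimal Python sketch of the logic described above. The occupancy-grid representation, the pose-track format, and all function names are illustrative assumptions, not structures taken from the patent.

```python
import math
from collections import deque

def reachable_cells(grid, start):
    """Flood-fill the free cells (value 0) reachable from `start`;
    this set is the connected domain of the planning map around `start`."""
    rows, cols = len(grid), len(grid[0])
    seen, queue = {start}, deque([start])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return seen

def is_trapped(track, window_t, x1, grid, robot_cell, goal_cell):
    """track: chronological (time, x, y) poses; window_t: the unit time t;
    x1: the displacement threshold X1."""
    # Narrow-sense trapping: displacement d over the last window_t below X1.
    now = track[-1]
    past = min(track, key=lambda p: abs(p[0] - (now[0] - window_t)))
    d = math.hypot(now[1] - past[1], now[2] - past[2])
    if d < x1:
        return True
    # Generalized trapping: the scheduling target point is not in the same
    # connected domain of the planning map as the robot's current cell.
    return goal_cell not in reachable_cells(grid, robot_cell)
```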
In some embodiments, the escape strategy further comprises a historical path escape strategy, which is determined based on a historical backtracking path of the robot. The robot using the escape strategy to free itself so as to continue executing tasks comprises: performing an escape operation using the historical path escape strategy; in response to escape using the historical path escape strategy failing, performing an escape operation using the scene escape strategy so as to continue executing tasks; and in response to escape using the historical path escape strategy succeeding, continuing to execute the task.
In response to the current state of the robot executing the task being a trapped state, the robot determines an escape strategy, which comprises a scene escape strategy and a historical path escape strategy: the scene escape strategy is determined based on the trapped scene to which the trapped state belongs, and the historical path escape strategy is determined based on a historical backtracking path of the robot. When using the escape strategy, the robot may escape with the scene escape strategy, with the historical path escape strategy, or with the scene escape strategy after the historical path escape strategy has failed, so that it can continue to execute and complete the issued task.
In some embodiments, the historical backtracking path of the robot is characterized by there being, in the global graph of historical backtracking paths, a connecting edge between two endpoints located respectively in the connected domain and in a non-connected domain, where the robot is located in the connected domain.
It is searched whether the global graph of historical backtracking paths contains two endpoints located respectively in the connected domain and a non-connected domain with a connecting edge between them, where the endpoints in the global graph are key position points recorded by the robot during task execution, and the connecting edges between endpoints are historically schedulable paths; the non-connected domains are the remaining connected domains on the planning map other than the robot's own, for example other rooms. If two such endpoints exist with a connecting edge between them, the historical backtracking path is considered to exist and is screened into the candidate escape path set P = {p_1, p_2, ..., p_n}, where P is the candidate historical escape path set, p_i denotes the i-th candidate historical escape path in P, and n is the number of candidate historical escape paths.
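As an illustration of the screening step, the sketch below builds the candidate set P from a history graph given as a list of edges; the edge encoding and the `in_domain` predicate are assumptions made for the example.

```python
def screen_candidates(history_edges, in_domain):
    """history_edges: (endpoint_a, endpoint_b, waypoints) triples from the
    global graph of historical backtracking paths; in_domain(pt) is True if
    pt lies in the robot's current connected domain."""
    candidates = []  # the set P of candidate historical escape paths
    for a, b, waypoints in history_edges:
        # Keep an edge only if its endpoints straddle the connected and
        # non-connected domains, i.e. the path historically led outside.
        if in_domain(a) != in_domain(b):
            # Orient the path to start at the endpoint inside the domain.
            path = waypoints if in_domain(a) else list(reversed(waypoints))
            candidates.append(path)
    return candidates
```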
At this point, performing the escape operation using the historical path escape strategy comprises: acquiring a candidate backtracking path from the candidate escape path set of the robot, wherein the candidate escape path set comprises the historical backtracking path; and executing point-by-point motion along the candidate backtracking path using a planning and scheduling method so as to free the robot.
A candidate backtracking path is acquired from the candidate escape path set of the robot, e.g., candidate backtracking path p_i is selected from the candidate escape path set P. Using a planning and scheduling method, point-by-point motion is then executed along the candidate backtracking path so as to free the robot: for example, the robot is first planned and scheduled to the head endpoint of path p_i located inside the connected domain, and then executes path p_i point by point. The planning and scheduling method can be chosen from A*, Theta*, RRT (Rapidly-Exploring Random Tree), PRM (Probabilistic Roadmap Method), and the like.
Specifically, in some embodiments, executing point-by-point motion along the candidate backtracking path to free the robot comprises: in response to the robot colliding with an obstacle during the point-by-point motion, performing a preset collision action; and in response to the robot not colliding with any obstacle during the point-by-point motion, the robot escapes successfully.
A candidate backtracking path is acquired from the candidate escape path set of the robot, and point-by-point motion is executed along it using the planning and scheduling method. If the robot does not collide with any obstacle during the point-by-point motion, the escape is judged successful, and the connected domain at that location is set as a forbidden zone on the planning map to prevent the robot from entering it again. If the robot collides with an obstacle during the point-by-point motion, a preset collision action is performed; for example, when the left bumper is triggered, a rightward motion is executed, and when the right bumper is triggered, a leftward motion is executed.
In some embodiments, in response to completion of the preset collision action, it is determined whether the robot has left the connected domain; in response to the robot having left the connected domain, the robot escapes successfully; and in response to the robot not having left the connected domain, another candidate backtracking path in the candidate escape path set of the robot is acquired, so as to execute the point-by-point motion to free the robot.
If the robot collides with an obstacle during the point-by-point motion, the preset collision action is executed (for example, a rightward motion when the left bumper is triggered and a leftward motion when the right bumper is triggered). After the preset collision action is completed, it is judged whether the robot has left the connected domain. If it has, the escape is judged successful; if it has not, other candidate backtracking paths in the robot's candidate escape path set are acquired and point-by-point motion is executed until the robot escapes. For example, if escape along candidate backtracking path p_i fails, p_i is removed from the candidate path set P, and it is then judged whether other candidate escape paths exist in P; if so, the corresponding candidate escape path is acquired and point-by-point motion is performed using the planning and scheduling method until the robot escapes successfully.
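The loop below sketches this retry logic end to end: try each candidate path point by point, run the fixed collision action on a bumper hit, and drop a path from P when it fails. The `robot` interface (move_to returning False on collision, bumper_side, turn, left_domain) is a hypothetical stand-in for the robot's motion layer.

```python
def escape_via_history(robot, candidates):
    """candidates: the screened set P, a list of waypoint lists."""
    for path in list(candidates):
        collided = False
        for waypoint in path:              # point-by-point motion along p_i
            if not robot.move_to(waypoint):
                collided = True
                # Preset collision action: detour away from the hit side.
                side = robot.bumper_side()            # "left" or "right"
                robot.turn("right" if side == "left" else "left")
                break
        if not collided or robot.left_domain():
            return True    # full traversal, or the collision action freed us
        candidates.remove(path)            # p_i failed: remove it from P
    return False           # P exhausted: fall back to the scene strategy
```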
In some embodiments, identifying the trapped scene comprises: acquiring the area of the connected domain where the robot is located; determining that the trapped scene is a first trapped scene in response to the area being smaller than a first preset value; determining that the trapped scene is a second trapped scene in response to the area being larger than the first preset value and smaller than a second preset value; and determining that the trapped scene is a third trapped scene in response to the area being larger than the second preset value.
Identifying the trapped scene means recognizing and classifying it. The area of the connected domain where the robot is located is acquired, for example the area S of the robot's current connected domain is calculated. In response to the area being smaller than a first preset value, e.g. S smaller than the first preset value S1, the trapped scene is determined to be the first trapped scene, i.e. the robot is judged to be trapped in a small-area scene. Further, the first trapped scene is finely classified with the help of one or more perception sensors, yielding the common small-area trapped scenes, such as the robot being caught by an obstacle, the robot being stuck on a threshold, and general small-area trapping.
In response to the area being larger than the first preset value and smaller than the second preset value, e.g. the area S of the robot's connected domain being larger than S1 and smaller than the second preset value S2, the trapped scene is determined to be the second trapped scene, i.e. the robot is judged to be trapped in a middle-area scene. Further, the second trapped scene is finely classified with the help of one or more perception sensors, yielding the common middle-area trapped scenes: trapped under a bed/sofa, trapped in a kitchen/bathroom, trapped in a narrow channel, general middle-area trapping, and the like.
In response to the area being larger than the second preset value, e.g. the area S of the robot's connected domain being larger than S2, the trapped scene is determined to be the third trapped scene, i.e. the robot is judged to be trapped in a large-area scene. Further, the third trapped scene is finely classified with the help of one or more perception sensors, yielding the common large-area trapped scenes, such as being trapped in a room (e.g. a large room whose map information has not been updated) and general large-area trapping.
The fine scene classification is mainly obtained by analyzing and identifying features in the perception information (point clouds or RGB images) acquired by lidar, depth cameras, line lasers, and RGB cameras. That is, the trapped scene is first coarsely classified by connected-domain area and then intelligently refined using the perception information, which improves the success rate and robustness of trapped-scene identification and lays the foundation for scene-customized escape strategies.
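A compact sketch of the coarse, area-based classification; S1 and S2 stand for the first and second preset values, and the sensor-based fine classification is reduced to a placeholder callable, both assumptions made for illustration.

```python
def classify_trapped_scene(area, s1, s2, refine=None):
    """Coarse classification by connected-domain area, optionally refined
    by a perception-based classifier (point-cloud / RGB features)."""
    if area < s1:
        coarse = "small-area"    # first trapped scene
    elif area < s2:
        coarse = "middle-area"   # second trapped scene
    else:
        coarse = "large-area"    # third trapped scene
    return refine(coarse) if refine else coarse
```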
In some embodiments, the scene escape strategy comprises at least one of a perception-information edge strategy, a front-back interactive obstacle crossing strategy, a hierarchical expansion map path search scheduling strategy, an edge motion strategy, a sprint scheduling strategy, a left-right interactive escape strategy, an edge-first-then-schedule strategy, a door opening exploration scheduling strategy, and a surround exploration scheduling strategy.
The robot uses the escape strategy to free itself so as to continue executing the task. The escape strategy comprises a scene escape strategy, which is determined based on the trapped scene to which the trapped state belongs; the trapped scenes include small-area scene trapping, middle-area scene trapping, large-area scene trapping, and the like. The scene escape strategy comprises at least one of the strategies listed above, and each corresponds to a typical trapped scene: the perception-information edge strategy corresponds to the robot being caught by slender objects such as table or chair legs; the front-back interactive obstacle crossing strategy corresponds to the robot being stuck on a threshold; the hierarchical expansion map path search scheduling strategy corresponds to general small-area trapping; the edge motion strategy corresponds to the robot being trapped under a bed or sofa; the sprint scheduling strategy corresponds to the robot being trapped in a kitchen or bathroom; the left-right interactive escape strategy corresponds to the robot being trapped in a narrow channel; the edge-first-then-schedule strategy corresponds to general middle-area trapping; the door opening exploration scheduling strategy corresponds to the robot being trapped in a room; and the surround exploration scheduling strategy corresponds to general large-area trapping, as summarized in the mapping sketch below.
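Read as a dispatch table, the scene-to-strategy mapping above could look like the following; the scene keys are paraphrases of the examples in this paragraph, not identifiers from the patent.

```python
SCENE_TO_STRATEGY = {
    "caught-by-slender-object": "perception-information edge strategy",
    "stuck-on-threshold":       "front-back interactive obstacle crossing strategy",
    "small-area-general":       "hierarchical expansion map path search scheduling strategy",
    "bed-or-sofa-bottom":       "edge motion strategy",
    "kitchen-or-bathroom":      "sprint scheduling strategy",
    "narrow-channel":           "left-right interactive escape strategy",
    "middle-area-general":      "edge-first-then-schedule strategy",
    "trapped-in-room":          "door opening exploration scheduling strategy",
    "large-area-general":       "surround exploration scheduling strategy",
}
```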
Specifically, in some embodiments, the escape operation corresponding to the perception-information edge strategy comprises: receiving perception sensor information to perform short-distance edge-following motion; in response to the robot leaving the current connected domain within a first preset time interval, the robot escapes successfully, and the current connected domain is set as a forbidden zone in the planning map. And/or, the escape operation corresponding to the front-back interactive obstacle crossing strategy comprises: within a second preset time interval, repeatedly and sequentially driving the first driving wheel of the robot to retreat by a first step, the second driving wheel to advance by a second step, the first driving wheel to retreat by a third step, and the second driving wheel to advance by a fourth step, so as to free the robot. And/or, the escape operation corresponding to the hierarchical expansion map path search scheduling strategy comprises: obtaining perception sensor information to construct a local map; performing hierarchical expansion on the local map until a scheduling path is found, wherein the scheduling target point in the scheduling path is any point outside the current connected domain; and performing an escape operation based on the scheduling path so as to free the robot.
In the escape operation corresponding to the perception-information edge strategy, perception sensor information is received to perform short-distance edge-following motion. For example, when the robot's trapped state is being caught by slender objects such as table or chair legs, the robot ignores the bumper's collision and squeeze information while performing the short-distance edge-following motion according to the received perception sensor information. In response to the robot leaving the current connected domain within the first preset time interval, the escape succeeds and the connected domain is set as a forbidden zone in the planning map. If the robot does not leave its current connected domain within the specified time interval, the escape fails and the robot remains trapped.
In the escape operation corresponding to the front-back interactive obstacle crossing strategy, within the second preset time interval the robot repeatedly and sequentially drives the first driving wheel to retreat by a first step, the second driving wheel to advance by a second step, the first driving wheel to retreat by a third step, and the second driving wheel to advance by a fourth step. For example, when the robot's trapped state is being stuck on a threshold, in one obstacle-crossing cycle the left driving wheel retreats by step d1, the right driving wheel advances by step d2, then the right driving wheel retreats by step d3 and the left driving wheel advances by step d4, and the cycle repeats, where d1 is less than d2 and d3 is less than d4, with the specific values determined by the actual situation. Further, if the robot leaves its current connected domain within the specified time interval, the escape succeeds and the connected domain is set as a forbidden zone on the planning map; if not, the escape fails and the robot remains trapped.
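One obstacle-crossing cycle of this strategy might be driven as in the sketch below, where the wheel interface and the left_domain check are hypothetical and d1 < d2, d3 < d4 as stated above.

```python
import time

def front_back_obstacle_crossing(robot, d1, d2, d3, d4, deadline_s):
    """Repeat the four-step wheel pattern until the robot leaves the
    connected domain or the second preset time interval elapses."""
    while time.monotonic() < deadline_s:
        robot.left_wheel.move(-d1)    # left wheel retreats by d1
        robot.right_wheel.move(+d2)   # right wheel advances by d2
        robot.right_wheel.move(-d3)   # right wheel retreats by d3
        robot.left_wheel.move(+d4)    # left wheel advances by d4
        if robot.left_domain():
            return True               # rocked over the threshold: escaped
    return False                      # time limit hit: still trapped
```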
In the escape operation corresponding to the hierarchical expansion map path search scheduling strategy, perception sensor information is obtained to construct a local map, and hierarchical expansion is performed on the local map until a scheduling path is found, where the scheduling target point in the scheduling path is any point outside the robot's current connected domain. For example, when the robot's trapped state is general small-area trapping, during the escape the robot constructs a local map M from the received perception sensor information and performs hierarchical expansion on M until a scheduling path is found, where the scheduling target point is any free point outside the robot's connected domain. The robot then performs the escape motion based on the scheduling path: during scheduling, when the left bumper is triggered the fixed left-bumper action is executed, i.e., detouring to the right; when the right bumper is triggered the fixed right-bumper action is executed, i.e., detouring to the left; and after the fixed bumper action finishes, replanning is performed so that the robot escapes. Here, hierarchical expansion of the map means that the expansion (inflation) coefficient of M is decreased step by step from the robot radius to 0 by a coefficient delta. If the robot leaves its current connected domain within the specified time interval, the escape succeeds and the connected domain is set as a forbidden zone on the planning map; if not, the escape fails and the robot remains trapped.
In this embodiment, a local map is constructed from the acquired perception sensor information and hierarchically expanded until a scheduling path is found, where the scheduling target point is any point outside the robot's current connected domain, and the escape operation is performed based on the scheduling path. This frees the robot while choosing a reasonable escape direction for the scheduling-based escape strategy.
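The hierarchical expansion itself can be sketched as a loop that relaxes the obstacle inflation radius from the robot radius toward 0 by the coefficient delta until some planner (A*, Theta*, RRT, PRM, ...) finds a path; `inflate` and `plan_path` are passed in as stand-ins for the patent's map and planning routines.

```python
def hierarchical_expansion_search(local_map, robot_radius, delta, inflate, plan_path):
    """inflate(m, r): map m with obstacles inflated by radius r;
    plan_path(m): a scheduling path out of the connected domain, or None."""
    radii = []
    r = robot_radius
    while r > 0:                  # robot_radius, robot_radius - delta, ...
        radii.append(r)
        r -= delta
    radii.append(0.0)             # finally try the uninflated map
    for radius in radii:
        path = plan_path(inflate(local_map, radius))
        if path is not None:
            return path           # scheduling path found at this layer
    return None                   # no path even with zero inflation
```

Decreasing the inflation radius trades safety margin for reachability: the planner only accepts tighter passages when no safer path exists at a larger margin.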
In some embodiments, the escape operation corresponding to the edge motion strategy comprises: receiving perception sensor information to perform collision-free edge-following motion; and in response to the robot leaving the current connected domain within a third preset time interval, the robot escapes successfully. And/or, the escape operation corresponding to the sprint scheduling strategy comprises: clearing obstacles carrying height-difference information from the planning map; and retreating the robot by a preset distance and performing scheduling planning based on the cleared planning map so as to free the robot. And/or, the escape operation corresponding to the left-right interactive escape strategy comprises: determining the escape direction of the narrow area from the perception sensor information, with the left and right driving mechanisms of the robot alternately rotating in opposite directions so as to free the robot. And/or, the escape operation corresponding to the edge-first-then-schedule strategy comprises: the robot performs a preset edge-following motion; after the preset edge-following motion is completed and the robot has failed to leave the current connected domain, collision or down-looking obstacle information is cleared from the planning map; and scheduling planning is performed based on the cleared planning map so as to free the robot.
In the edge motion strategy, perception sensor information is received to perform collision-free edge-following motion, and in response to the robot leaving the current connected domain within the third preset time interval, the escape succeeds. For example, when the robot's trapped state is being under a bed or sofa, during the escape the robot performs collision-free edge-following motion according to the received perception sensor information; if the robot leaves its current connected domain before the edge-following loop closes or within the specified time interval, the escape succeeds. If the edge-following loop closes or the time interval elapses without the robot leaving its current connected domain, the escape fails and the robot remains trapped.
In the sprint scheduling strategy, obstacles carrying height-difference information are cleared from the planning map, the robot retreats by a preset distance, and scheduling planning is performed based on the cleared planning map so as to free the robot. For example, when the robot's trapped state is being trapped in a kitchen or bathroom, the identified threshold information or obstacles carrying height-difference information are cleared from the planning map, and the robot retreats a preset distance before scheduling planning so that it can advance with a better accelerating motion. If the robot can be scheduled out of the current connected domain, the escape succeeds and the cleared obstacle information is restored on the planning map; if not, the escape fails and the robot remains trapped.
In the escape operation corresponding to the left-right interactive escape strategy, the escape direction of the narrow area is determined from the perception sensor information, and the left and right driving mechanisms of the robot alternately rotate in opposite directions so as to free the robot. For example, when the robot's trapped state is being trapped in a narrow channel, during the escape the robot obtains the escape direction of the narrow area from the received perception sensor information, and the left and right driving mechanisms alternately rotate in opposite directions. If the robot leaves the current connected domain within the specified time interval, the escape succeeds; if not, the escape fails and the robot remains trapped.
In the edge-first-then-schedule strategy, the robot first performs a preset edge-following motion; if, after the preset edge-following motion is completed, the robot has failed to leave the current connected domain, collision or down-looking obstacle information is cleared from the planning map and scheduling planning is performed based on the cleared map so as to free the robot. For example, when the robot's trapped state is general middle-area trapping, during the escape the robot first moves along the edge; when the edge-following loop closes without the robot leaving the current connected domain, the robot switches to scheduling escape. Specifically, the collision or down-looking obstacle information on the planning map is cleared and it is judged whether a planned path exists; if no planned path exists, the robot is considered still trapped. If a planned path exists, scheduling is performed; on collision, the fixed bumper action is executed (for example, a rightward motion when the left bumper is triggered and a leftward motion when the right bumper is triggered), followed by replanning. If the number of replanning attempts n reaches a preset upper limit N and the robot still cannot leave the connected domain, the scheduling fails; otherwise, the scheduling succeeds and the cleared obstacle information is restored.
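The schedule-and-replan loop shared by these scheduling-based escapes can be sketched as below; the planning-map and robot interfaces, and the bookkeeping around clearing and restoring the obstacle layer, are assumptions for illustration.

```python
def schedule_escape(robot, planning_map, max_replans):
    """Clear the collision/down-looking obstacle layer, then plan, execute,
    and replan on collision, giving up after max_replans attempts (N)."""
    planning_map.clear_soft_obstacles()        # collision + down-looking info
    try:
        for _ in range(max_replans):
            path = robot.plan_out_of_domain(planning_map)
            if path is None:
                return False                   # no planned path: still trapped
            if robot.follow(path):             # reached the target, no bump
                return True
            # Fixed bumper action before replanning: detour away from the hit.
            side = robot.bumper_side()
            robot.turn("right" if side == "left" else "left")
        return False                           # n >= N: scheduling failed
    finally:
        planning_map.restore_soft_obstacles()  # recover the cleared info
```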
In some embodiments, the escape operation corresponding to the door opening exploration scheduling strategy comprises: performing a single scheduling escape; in response to the single scheduling failing, scheduling the robot to the point closest to a door in the current connected domain, and updating door-area obstacle information to judge whether a scheduling path exists that leaves the current connected domain; and performing scheduling planning based on that scheduling path so as to free the robot. And/or, the escape operation corresponding to the surround exploration scheduling strategy comprises: performing a single scheduling escape; in response to the single scheduling failing, selecting a preset number of scheduling points in the current connected domain, wherein the scheduling points are uniformly distributed around the connected domain; and scheduling sequentially based on the scheduling points so as to determine a scheduling path and thereby free the robot.
In the escape operation corresponding to the door opening exploration scheduling strategy, a single scheduling escape is performed first; in response to it failing, the robot is scheduled to the point closest to the door in the current connected domain, the door-area obstacle information is updated to judge whether a scheduling path exists that leaves the current connected domain, and scheduling planning is performed based on that path so as to free the robot. For example, when the robot's trapped state is being trapped in a room, a single scheduling escape is attempted first, and if it succeeds the robot is considered freed. If the single escape scheduling fails, the robot is scheduled to the free point closest to a door in the connected domain and rotates in place one full turn to update the obstacle information in the door region, so as to judge whether a scheduling path exists that leaves the connected domain; if such a path exists, the escape planning succeeds, and if not, the escape fails.
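A hedged sketch of the door-opening exploration flow, with a hypothetical robot interface:

```python
def door_exploration_escape(robot, door_points):
    """door_points: free points nearest the doors of the current domain."""
    if robot.schedule_out():             # the single scheduling escape
        return True
    target = min(door_points, key=robot.distance_to)
    robot.move_to(target)                # go to the point closest to a door
    robot.rotate_in_place()              # one full turn: refresh door-area obstacles
    path = robot.plan_out_of_domain()    # does an exit path now exist?
    return robot.follow(path) if path is not None else False
```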
In the escape operation corresponding to the surround exploration scheduling strategy, a single scheduling escape is performed first; in response to it failing, a preset number of scheduling points uniformly distributed around the robot's current connected domain are selected, and scheduling is performed on them sequentially to determine a scheduling path so as to free the robot. For example, when the robot's trapped state is general large-area trapping, a single scheduling escape is attempted first, and if it succeeds the robot is considered freed. If the single scheduling fails, N scheduling points are selected in the robot's connected domain (uniformly distributed in a ring around the connected domain) and scheduled to in turn; after reaching each scheduling point, the robot rotates in place one full turn and judges whether a scheduling path exists that leaves the connected domain. If such a path exists, the robot's escape planning and scheduling succeed. If all scheduling points have been visited and no scheduling path exists, the robot's escape fails.
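The surround exploration differs from the door-opening variant mainly in where it looks: it samples N scheduling points evenly around the connected-domain boundary and probes each one. A sketch, with the boundary extraction assumed done elsewhere:

```python
def surround_exploration_escape(robot, boundary_cells, n):
    """boundary_cells: ordered cells along the connected-domain boundary."""
    if robot.schedule_out():                   # single scheduling escape
        return True
    step = max(1, len(boundary_cells) // n)
    for point in boundary_cells[::step][:n]:   # n points, evenly spaced
        robot.move_to(point)
        robot.rotate_in_place()                # update local obstacle info
        path = robot.plan_out_of_domain()
        if path is not None:
            return robot.follow(path)          # exit path found: escape
    return False                               # all points probed: failed
```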
For ease of understanding, the escape processing procedure of the robot according to an embodiment of the present application is illustrated with reference to fig. 2, which is a schematic flowchart of the escape processing of a robot according to an embodiment of the present application.
Step S201: perform trapped detection on the robot to judge whether the robot is trapped; for example, when the robot starts to execute a task issued by the user, trapped detection is performed to judge whether the robot is trapped.
In the first judgment result of step S201, the robot is not trapped, and step S208 is executed.
In the second judgment result of step S201, the robot is trapped, and step S202 is executed.
Step S202: judge whether a history backtracking path exists; for example, search the global graph of history backtracking paths for two endpoints that are located respectively in the connected domain and a non-connected domain and have a connecting edge between them.
In the first judgment result of step S202, no history backtracking path exists, and step S205 is executed.
In the second judgment result of step S202, a history backtracking path exists, and step S203 is executed.
Step S203: determine the history path escape strategy, i.e., the robot performs the escape operation using the history path escape strategy.
Step S204: judge whether the escape succeeded.
In the first judgment result of step S204, the robot escaped successfully, and step S208 is executed.
In the second judgment result of step S204, the robot failed to escape, and step S205 is executed.
Step S205: identify the trapped scene, i.e., identify and classify the scene in which the robot is trapped.
Step S206: determine the scene escape strategy, for example based on the scene recognition result; the robot then performs the escape operation using the scene escape strategy.
Step S207: judge whether the escape succeeded.
In the first judgment result of step S207, the robot escaped successfully, and step S208 is executed.
In the second judgment result of step S207, the robot failed to escape, and the task issued by the user is ended.
Step S208: execute the corresponding task; for example, the robot continues to execute the task issued by the user.
Step S209: judge whether the task has ended.
In the first judgment result of step S209, the task has ended, and the task issued by the user is determined to be complete.
In the second judgment result of step S209, the task has not ended, and step S201 is executed.
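The flow of fig. 2 can be condensed into the control loop below; the `robot` interface (`task_finished`, `is_trapped`, and so on) is entirely hypothetical and only mirrors the steps above.

```python
def escape_processing_loop(robot):
    """Condensed sketch of the fig. 2 flow; every method on `robot` is a
    hypothetical stand-in for the corresponding step above."""
    while not robot.task_finished():              # S209
        robot.execute_task_step()                 # S208
        if not robot.is_trapped():                # S201
            continue
        if robot.has_history_backtrack_path():    # S202
            robot.run_history_path_strategy()     # S203
            if robot.escaped():                   # S204
                continue
        scene = robot.identify_trapped_scene()    # S205
        robot.run_scene_strategy(scene)           # S206
        if not robot.escaped():                   # S207
            return False      # escape failed: end the issued task
    return True               # task issued by the user is complete
```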
Referring to fig. 3, fig. 3 is a schematic structural diagram of a robot according to an embodiment of the application. The robot 30 includes a memory 31 and a processor 32 coupled to each other, and the processor 32 is configured to execute program instructions stored in the memory 31 to implement the steps of any of the robot escape processing method embodiments described above. In one particular implementation, the robot 30 may include, but is not limited to: an intelligent mower, a service robot, a sweeping robot, and the like, which are not limited herein.
Specifically, the processor 32 is configured to control itself and the memory 31 to implement the steps of the robot escape processing method embodiments described above. The processor 32 may also be referred to as a CPU (Central Processing Unit), and may be an integrated circuit chip with signal processing capabilities. The processor 32 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. In addition, the processor 32 may be implemented jointly by a plurality of integrated circuit chips.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a non-volatile computer-readable storage medium according to an embodiment of the present application. The non-volatile computer-readable storage medium 40 is configured to store a computer program 401 which, when executed by a processor (for example, the processor 32 in the embodiment of fig. 3), implements the steps of the robot escape processing method embodiments described above.
The foregoing descriptions of the various embodiments emphasize the differences between them; for parts that are the same or similar, the embodiments may be referred to one another, which is not repeated herein for brevity.
In the several embodiments provided by the present application, it should be understood that the disclosed methods and related devices may be implemented in other manners. For example, the device embodiments described above are merely illustrative: the division of modules or units is merely a logical functional division, and other divisions are possible in actual implementation; for instance, units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection between the illustrated or discussed elements may be indirect coupling or communication connection through some interfaces, devices or units, and may be electrical, mechanical, or in other forms.
In addition, each functional unit in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Those skilled in the art will readily appreciate that many modifications and variations can be made to the device and method while remaining within the teachings of the application. Accordingly, the above disclosure should be construed as limited only by the scope of the appended claims.

Claims (9)

1. A method for processing escape of a robot, characterized by comprising:
the robot acquiring a current state of the robot when executing a task;
in response to the current state being a trapped state, the robot determining an escape strategy;
the robot performing an escape using the escape strategy so as to continue executing the task;
wherein the escape strategy comprises a scene escape strategy and a history path escape strategy; the scene escape strategy is determined based on the trapped scene to which the trapped state belongs, and the history path escape strategy is determined based on a history backtracking path of the robot; the history backtracking path of the robot represents that, in a global graph of history backtracking paths, a connecting edge exists between two endpoints located respectively in a connected domain and a non-connected domain; the endpoints in the global graph of history backtracking paths are formed by key position points of the robot during execution of the task; the non-connected domains are the connected domains on the planning map other than the connected domain, and the robot is located in the connected domain;
wherein the robot performing the escape using the history path escape strategy comprises:
acquiring a candidate backtracking path from a candidate escape path set of the robot, wherein the candidate escape path set comprises the history backtracking path;
executing point-by-point movement based on the candidate backtracking path using a planning and scheduling method, so that the robot escapes.
2. The method according to claim 1, wherein the robot performing the escape using the escape strategy so as to continue executing the task comprises:
performing an escape operation using the history path escape strategy;
in response to the escape using the history path escape strategy failing, performing an escape operation using the scene escape strategy so as to continue executing the task;
in response to the escape using the history path escape strategy succeeding, continuing to execute the task.
3. The method according to claim 1 or 2, wherein acquiring the current state of the robot when executing a task comprises:
acquiring a displacement amount of the robot within a preset unit time at the current moment;
in response to the displacement amount being smaller than a preset displacement value, determining that the current state of the robot is the trapped state;
in response to the displacement amount being larger than the preset displacement value, acquiring the current position of the robot and a scheduling target point;
in response to the current position of the robot and the scheduling target point not being in the same connected domain, determining that the current state of the robot is the trapped state.
4. The method according to claim 1, wherein executing the point-by-point movement based on the candidate backtracking path so that the robot escapes comprises:
in response to the robot colliding with an obstacle during the point-by-point movement, executing a preset collision action;
in response to the robot not colliding with an obstacle during the point-by-point movement, the robot escaping successfully.
5. The method according to claim 4, further comprising:
in response to the preset collision action being completed, judging whether the robot has left the connected domain;
in response to the robot having left the connected domain, the robot escaping successfully;
in response to the robot not having left the connected domain, acquiring another candidate backtracking path from the candidate escape path set of the robot, so as to execute the point-by-point movement to escape the robot.
6. The method according to claim 1 or 2, wherein identifying the trapped scene comprises:
acquiring the area of the connected domain in which the robot is located;
in response to the area being smaller than a first preset value, determining that the trapped scene is a first trapped scene;
in response to the area being larger than the first preset value and smaller than a second preset value, determining that the trapped scene is a second trapped scene;
in response to the area being larger than the second preset value, determining that the trapped scene is a third trapped scene.
7. The method according to claim 1 or 2, wherein the scene escape strategy comprises at least one of a perception-information edgewise strategy, a front-back interactive obstacle-crossing strategy, a hierarchical inflated-map path-search scheduling strategy, an edgewise motion strategy, a sprint scheduling strategy, a left-right interactive escape strategy, an edgewise-first-then-scheduling strategy, a door-opening exploration scheduling strategy, and a wraparound exploration scheduling strategy;
wherein the escape operation corresponding to the perception-information edgewise strategy comprises:
receiving perception sensor information to perform short-distance edgewise movement;
in response to the robot leaving the current connected domain within a first preset time interval, the robot escaping successfully, and the current connected domain being set as a forbidden zone in the planning map; and/or,
the escape operation corresponding to the front-back interactive obstacle-crossing strategy comprises:
within a second preset time interval, repeatedly and sequentially driving a first driving wheel of the robot backward by a first step length, a second driving wheel of the robot forward by a second step length, the first driving wheel backward by a third step length, and the second driving wheel forward by a fourth step length, so that the robot escapes; and/or,
the escape operation corresponding to the hierarchical inflated-map path-search scheduling strategy comprises:
obtaining perception sensor information to construct a local map;
performing hierarchical inflation on the local map constructed from the perception sensor information until a scheduling path is found, wherein the scheduling target point of the scheduling path is any point outside the current connected domain;
performing an escape operation based on the scheduling path so that the robot escapes; and/or,
the escape operation corresponding to the edgewise motion strategy comprises:
receiving perception sensor information to perform collision-free edgewise movement;
in response to the robot leaving the current connected domain within a third preset time interval, the robot escaping successfully; and/or,
the escape operation corresponding to the sprint scheduling strategy comprises:
clearing obstacles having height-difference information from the planning map;
retreating the robot by a preset distance, and performing scheduling planning based on the cleared planning map so that the robot escapes; and/or,
the escape operation corresponding to the left-right interactive escape strategy comprises:
determining an escape direction for a narrow area from the perception sensor information, wherein the left and right driving mechanisms of the robot alternately rotate in opposite directions so that the robot escapes; and/or,
the escape operation corresponding to the edgewise-first-then-scheduling strategy comprises:
the robot performing a preset edgewise movement;
in response to the robot failing to leave the current connected domain after the preset edgewise movement is completed, clearing collision or downward-looking obstacle information from the planning map;
performing scheduling planning based on the cleared planning map so that the robot escapes; and/or,
the escape operation corresponding to the door-opening exploration scheduling strategy comprises:
performing a single scheduling attempt to escape;
in response to the single scheduling attempt failing, scheduling the robot to the point nearest a door within the current connected domain, and updating the door-area obstacle information to judge whether a scheduling path exists that leaves the current connected domain;
performing scheduling planning based on the scheduling path so that the robot escapes; and/or,
the escape operation corresponding to the wraparound exploration scheduling strategy comprises:
performing a single scheduling attempt to escape;
in response to the single scheduling attempt failing, selecting a preset number of scheduling points in the current connected domain, wherein the scheduling points are uniformly distributed around the connected domain;
scheduling to the scheduling points in sequence to determine a scheduling path, whereby the robot escapes.
8. A robot, comprising a memory and a processor coupled to each other, the processor being configured to execute program instructions stored in the memory to implement the method for processing escape of a robot according to any one of claims 1 to 7.
9. A non-volatile computer-readable storage medium having program instructions stored thereon, which, when executed by a processor, implement the method for processing escape of a robot according to any one of claims 1 to 7.
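As a closing illustration, the point-by-point movement recited in claims 1, 4 and 5 might look as sketched below; the `robot` interface and the candidate-path iteration order are assumptions for illustration, not the claimed implementation.

```python
def history_path_escape(robot, candidate_paths):
    """Sketch of claims 1, 4 and 5: walk each candidate backtracking
    path point by point; on a collision, run the preset collision
    action and check whether the connected domain has been left."""
    for path in candidate_paths:          # candidate escape path set
        for waypoint in path:
            if robot.move_to(waypoint):   # reached without collision
                continue
            robot.preset_collision_action()
            if robot.left_connected_domain():
                return True               # escaped during the manoeuvre
            break                         # try the next candidate path
        else:
            return True   # full path traversed collision-free: escaped
    return False          # all candidates exhausted: escape fails
```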