CN108256430B - Obstacle information acquisition method and device and robot - Google Patents
Obstacle information acquisition method and device and robot
- Publication number: CN108256430B (application CN201711384835.1A)
- Authority: CN (China)
- Prior art keywords: robot, entity, coordinate system, dimensional, coordinate
- Prior art date: 2017-12-20
- Legal status: Active (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Manipulator (AREA)
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
Abstract
The application discloses an obstacle information acquisition method and device, and a robot. The method comprises the following steps: obtaining a depth image of a current environment through a visual sensor, the depth image providing depth information between an entity in the current environment and the visual sensor; determining a first three-dimensional coordinate of the entity based on a visual sensor coordinate system according to the depth information, the visual sensor coordinate system being a coordinate system taking the visual sensor as an origin; converting the first three-dimensional coordinate of the entity into a second three-dimensional coordinate based on a robot coordinate system, the robot coordinate system being a coordinate system taking one point in the robot as an origin; projecting the second three-dimensional coordinate of the entity onto a two-dimensional grid map to obtain a two-dimensional coordinate of the entity, the two-dimensional grid map coinciding with the plane of the bottom surface of the robot; and acquiring the position relation between the entity and the robot according to the two-dimensional coordinate. The method and the device solve the technical problem that a robot relying on infrared sensors and existing obstacle analysis methods cannot accurately judge obstacles.
Description
Technical Field
The application relates to the technical field of robots, in particular to a method and a device for acquiring obstacle information and a robot.
Background
With the improvement of living standards, more and more people desire more personal time to enjoy life, yet fast-paced life and heavy housework consume much of that time. With the development of science and technology, mobile robots can gradually take over simple, repetitive physical labor from humans. Many of these robots are capable of autonomous movement, and the space in which they move often contains various obstacles (walls, furniture, and the like); the robot inevitably collides with these obstacles, and its service life is therefore greatly reduced.
At present, robots typically use ultrasonic and infrared sensors and avoid obstacles by measuring the distance to obstacles in the direction of travel. This approach works reasonably well for obstacle avoidance, but the sensors have defects: for ultrasonic sensors, the speed of sound is easily disturbed by temperature and wind direction, and sound is absorbed by sound-absorbing surfaces; for infrared sensors, the minimum detection distance is too large. All of these defects prevent the robot from accurately judging obstacles.
No effective solution has yet been proposed for the problem in the related art that a robot cannot accurately judge obstacles.
Disclosure of Invention
The application mainly aims to provide an obstacle information acquisition method, an obstacle information acquisition device and a robot, so as to solve the problem that a robot cannot accurately judge obstacles.
In order to achieve the above object, according to an aspect of the present application, there is provided an obstacle information acquiring method including:
obtaining a depth image of a current environment through a visual sensor, wherein the depth image provides depth information between an entity in the current environment and the visual sensor;
determining a first three-dimensional coordinate of the entity based on a visual sensor coordinate system according to the depth information; the visual sensor coordinate system is a coordinate system taking the visual sensor as an origin;
converting the first three-dimensional coordinates of the entity into second three-dimensional coordinates based on a robot coordinate system; the robot coordinate system is a coordinate system obtained by taking one point in the robot as an origin;
projecting the second three-dimensional coordinate of the entity onto a two-dimensional grid map to obtain a two-dimensional coordinate of the entity; the two-dimensional grid map coincides with the plane of the bottom surface of the robot;
and acquiring the position relation between the entity and the robot according to the two-dimensional coordinates.
Further, as the aforementioned obstacle information acquiring method,
the robot coordinate system obtained by taking one point in the robot as an origin comprises: the horizontal plane where the lowest point of the robot is located is the x-y plane, the central vertical line of the robot is the z axis, the intersection point of the z axis and the x-y plane is the origin of the robot coordinate system, and the positive direction of the y axis points directly in front of the robot.
Further, according to the obstacle information acquiring method, the first three-dimensional coordinates of the entity include a plurality of three-dimensional coordinate points for embodying the shape characteristics of the entity.
Further, the method for acquiring obstacle information as described above, after converting the first three-dimensional coordinate of the entity into the second three-dimensional coordinate, further includes:
obtaining the height of the entity according to the second three-dimensional coordinate;
if the highest point in the second three-dimensional coordinate of the entity is lower than the height of the robot, the entity is judged to be an obstacle;
if the lowest point in the second three-dimensional coordinates of the entity is higher than the height of the robot, the entity is determined to be a passage that the robot can pass through.
Further, the method for acquiring obstacle information as described above, after obtaining the two-dimensional coordinates of the entity, further includes:
the obstacles and traversable paths are distinctively labeled in the two-dimensional grid map.
Further, the method for acquiring obstacle information as described above, the converting the first three-dimensional coordinate of the entity into the second three-dimensional coordinate includes:
acquiring the position of the vision sensor on the robot to obtain a transformation matrix of the vision sensor coordinate system relative to the robot coordinate system; the transformation matrix is $T = \begin{bmatrix} R & t \\ 0 & 1 \end{bmatrix}$, where R is a 3 × 3 rotation matrix and t is a 3 × 1 translation matrix.
Further, according to the foregoing obstacle information acquiring method, R is: $R = \begin{bmatrix} \hat{x}_s\cdot\hat{x}_r & \hat{y}_s\cdot\hat{x}_r & \hat{z}_s\cdot\hat{x}_r \\ \hat{x}_s\cdot\hat{y}_r & \hat{y}_s\cdot\hat{y}_r & \hat{z}_s\cdot\hat{y}_r \\ \hat{x}_s\cdot\hat{z}_r & \hat{y}_s\cdot\hat{z}_r & \hat{z}_s\cdot\hat{z}_r \end{bmatrix}$, wherein $\hat{x}_s$, $\hat{y}_s$, $\hat{z}_s$ are the unit direction vectors of the x, y and z axes of the vision sensor coordinate system, and $\hat{x}_r$, $\hat{y}_r$, $\hat{z}_r$ are the unit direction vectors of the x, y and z axes of the robot coordinate system.
Further, as the aforementioned obstacle information acquiring method,
the t is: $t = \begin{bmatrix} \Delta x \\ \Delta y \\ \Delta z \end{bmatrix}$, wherein Δx, Δy and Δz are the offset distances of the origin of the vision sensor coordinate system with respect to the origin of the robot coordinate system in the x, y and z directions of the robot coordinate system, respectively.
In order to achieve the above object, according to another aspect of the present application, there is provided an obstacle information acquiring apparatus including:
a depth information acquisition module, configured to obtain a depth image of a current environment through the visual sensor, wherein the depth image provides depth information between an entity in the current environment and the visual sensor;
a first three-dimensional coordinate acquisition module, configured to obtain the first three-dimensional coordinate of the entity in the current environment according to the depth information; the first three-dimensional coordinate is the three-dimensional coordinate of the entity in a vision sensor coordinate system; the vision sensor coordinate system is a coordinate system taking the visual sensor as an origin;
a coordinate conversion module, configured to convert the first three-dimensional coordinate of the entity into a second three-dimensional coordinate; the second three-dimensional coordinate is the three-dimensional coordinate of the entity in a robot coordinate system; the robot coordinate system is a coordinate system taking one point in the robot as an origin;
a grid map generation module, configured to project the second three-dimensional coordinate of the entity onto a two-dimensional grid map to obtain a two-dimensional coordinate of the entity, the two-dimensional coordinate of the entity being used as final obstacle data; the two-dimensional grid map coincides with the plane of the bottom surface of the robot;
and an obstacle distance generation module, configured to acquire the horizontal distance and orientation between the entity and the robot according to the final obstacle data.
In order to achieve the above object, according to another aspect of the present application, there is provided a robot including a robot body and a vision sensor provided on the robot.
In the embodiment of the present application, a depth image of the current environment is obtained through a visual sensor and processed to obtain obstacle information: the depth image provides depth information between an entity in the current environment and the visual sensor; a first three-dimensional coordinate of the entity is determined based on the visual sensor coordinate system according to the depth information, the visual sensor coordinate system being a coordinate system taking the visual sensor as an origin; the first three-dimensional coordinate of the entity is converted into a second three-dimensional coordinate based on the robot coordinate system, the robot coordinate system being a coordinate system taking one point in the robot as an origin; the second three-dimensional coordinate of the entity is projected onto a two-dimensional grid map to obtain a two-dimensional coordinate of the entity, the two-dimensional grid map coinciding with the plane of the bottom surface of the robot; and the position relation between the entity and the robot is acquired according to the two-dimensional coordinate. The technical effect of accurately obtaining obstacle information is thereby achieved, and the technical problem that a robot using infrared sensors and conventional obstacle analysis methods cannot accurately judge obstacles is solved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, serve to provide a further understanding of the application and to enable other features, objects, and advantages of the application to be more apparent. The drawings and their description illustrate the embodiments of the invention and do not limit it. In the drawings:
fig. 1 is a schematic flow chart of an obstacle information acquiring method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of an obstacle information acquiring method according to step S3 in fig. 1;
fig. 3 is a block diagram of an obstacle information acquiring apparatus according to an embodiment of the present application;
fig. 4 illustrates a method for determining a ground plane parameter according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances, such that the embodiments of the application described herein can be implemented in orders other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In this application, the terms "upper", "lower", "left", "right", "front", "rear", "top", "bottom", "inner", "outer", "middle", "vertical", "horizontal", "lateral", "longitudinal", and the like indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings. These terms are used primarily to better describe the invention and its embodiments, and are not intended to limit the indicated devices, elements or components to a particular orientation or to be constructed and operated in a particular orientation.
Moreover, some of the above terms may be used to indicate other meanings besides the orientation or positional relationship, for example, the term "on" may also be used to indicate some kind of attachment or connection relationship in some cases. The specific meaning of these terms in the present invention can be understood by those of ordinary skill in the art as appropriate.
Furthermore, the terms "mounted," "disposed," "provided," "connected," and "sleeved" are to be construed broadly. For example, it may be a fixed connection, a removable connection, or a unitary construction; can be a mechanical connection, or an electrical connection; may be directly connected, or indirectly connected through intervening media, or may be in internal communication between two devices, elements or components. The specific meaning of the above terms in the present invention can be understood according to specific situations by those skilled in the art.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
As shown in fig. 1, the present invention provides an obstacle information acquiring method, including the following steps S1 to S5:
S1, obtaining a depth image of a current environment through a visual sensor, wherein the depth image provides depth information between an entity in the current environment and the visual sensor;
S2, determining a first three-dimensional coordinate of the entity based on a visual sensor coordinate system according to the depth information; the visual sensor coordinate system is a coordinate system taking the visual sensor as an origin;
S3, converting the first three-dimensional coordinate of the entity into a second three-dimensional coordinate based on a robot coordinate system; the robot coordinate system is a coordinate system obtained by taking one point in the robot as an origin;
S4, projecting the second three-dimensional coordinate of the entity onto a two-dimensional grid map to obtain a two-dimensional coordinate of the entity; the two-dimensional grid map coincides with the plane of the bottom surface of the robot;
and S5, acquiring the position relation between the entity and the robot according to the two-dimensional coordinates.
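For illustration, a minimal Python/NumPy sketch of steps S1 to S5 is given below. The pinhole camera intrinsics (fx, fy, cx, cy), the 4 × 4 transformation matrix T and the grid dimensions are assumptions made for the example and are not values specified by this application.

```python
import numpy as np

def depth_to_sensor_points(depth, fx, fy, cx, cy):
    """S1/S2: back-project a depth image (meters) to first three-dimensional
    coordinates in the vision sensor coordinate system (pinhole model assumed)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[np.isfinite(pts).all(axis=1) & (pts[:, 2] > 0)]   # drop invalid pixels

def sensor_to_robot(points_sensor, T):
    """S3: apply the homogeneous transform T = [R t; 0 1] to obtain second
    three-dimensional coordinates in the robot coordinate system."""
    homog = np.hstack([points_sensor, np.ones((len(points_sensor), 1))])
    return (homog @ T.T)[:, :3]

def project_to_grid(points_robot, size=500, extent=10.0):
    """S4: project robot-frame points onto a size x size grid covering an
    extent x extent meter area centered on the robot (1 = occupied cell)."""
    grid = np.zeros((size, size), dtype=np.uint8)
    res = extent / size
    ix = np.floor(points_robot[:, 0] / res + size / 2).astype(int)
    iy = np.floor(points_robot[:, 1] / res + size / 2).astype(int)
    ok = (ix >= 0) & (ix < size) & (iy >= 0) & (iy < size)
    grid[iy[ok], ix[ok]] = 1
    return grid

def distance_and_bearing(cell, size=500, extent=10.0):
    """S5: horizontal distance and orientation of an occupied cell relative to
    the robot origin (the positive y axis points directly in front of the robot)."""
    res = extent / size
    x = (cell[1] - size / 2) * res
    y = (cell[0] - size / 2) * res
    return np.hypot(x, y), np.degrees(np.arctan2(x, y))
```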
According to an embodiment of the present invention, there is provided a method of establishing the robot coordinate system, including:
the robot coordinate system is obtained by taking one point in the robot as an origin, wherein the horizontal plane where the lowest point of the robot is located is the x-y plane, the central vertical line of the robot is the z axis, the intersection point of the z axis and the x-y plane is the origin of the robot coordinate system, and the positive direction of the y axis points directly in front of the robot.
In some embodiments, the obstacle information acquiring method as described above, the first three-dimensional coordinates of the entity include a plurality of three-dimensional coordinate points for characterizing a shape of the entity.
According to an embodiment of the present invention, there is provided an obstacle information acquiring method, as shown in fig. 2; after converting the first three-dimensional coordinate of the entity into the second three-dimensional coordinate, the method further includes:
S61, obtaining the height of the entity according to the second three-dimensional coordinate;
S62, if the highest point in the second three-dimensional coordinate of the entity is lower than the height of the robot, determining the entity to be an obstacle;
S63, if the lowest point in the second three-dimensional coordinate of the entity is higher than the height of the robot, determining the entity to be a passage that the robot can pass through.
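A minimal sketch of this height test is given below, assuming that entity_points holds the second (robot-frame) three-dimensional coordinates of a single entity and that the robot height is known; the case of an entity that spans the robot height is not specified in the text and is returned as undetermined here.

```python
import numpy as np

def classify_entity(entity_points, robot_height):
    """Classify one entity from its robot-frame points (z is the height axis)."""
    top = entity_points[:, 2].max()      # highest point of the entity
    bottom = entity_points[:, 2].min()   # lowest point of the entity
    if bottom > robot_height:
        return "passable"                # robot can pass beneath the entity
    if top < robot_height:
        return "obstacle"                # entity blocks the robot
    return "undetermined"                # case not covered by the two rules above
```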
Preferably, the method uses a grid map with a resolution of 500 × 500 to represent a real 10 m × 10 m space. Black cells are regarded as obstacles, representing the presence of an object lower than the height of the robot, and white cells are the opposite. Available obstacle data are obtained from such a grid map. Because a human target would be regarded as an obstacle if only the depth data were used, the cells of the region occupied by the skeleton are always regarded as white by referring to the skeleton data; the obstacle data are thereby corrected and the target is distinguished from obstacles.
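The following sketch illustrates this correction under the stated 500 × 500 grid and 10 m × 10 m assumptions; skeleton_points stands for human joint positions in robot coordinates supplied by the sensor's skeleton tracking, which the text refers to but does not detail.

```python
import numpy as np

GRID_SIZE, EXTENT = 500, 10.0
RES = EXTENT / GRID_SIZE                 # 0.02 m per cell

def world_to_cell(x, y):
    return int(y / RES + GRID_SIZE / 2), int(x / RES + GRID_SIZE / 2)

def build_obstacle_grid(obstacle_points, skeleton_points):
    # 1 = black (object below robot height), 0 = white (free)
    grid = np.zeros((GRID_SIZE, GRID_SIZE), dtype=np.uint8)
    for x, y, _ in obstacle_points:
        r, c = world_to_cell(x, y)
        if 0 <= r < GRID_SIZE and 0 <= c < GRID_SIZE:
            grid[r, c] = 1
    # correction: cells occupied by a tracked human skeleton stay white,
    # so the person is not treated as an obstacle
    for x, y, _ in skeleton_points:
        r, c = world_to_cell(x, y)
        if 0 <= r < GRID_SIZE and 0 <= c < GRID_SIZE:
            grid[r, c] = 0
    return grid
```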
In some embodiments, the method for acquiring obstacle information as described above, after obtaining the two-dimensional coordinates of the entity, further includes:
the obstacles and traversable paths are distinctively labeled in the two-dimensional grid map.
According to an embodiment of the present invention, there is provided an obstacle information acquiring method for converting a first three-dimensional coordinate of an entity into a second three-dimensional coordinate, including:
acquiring the position of the vision sensor on the robot to obtain a transformation matrix of the vision sensor coordinate system relative to the robot coordinate system; the transformation matrix is $T = \begin{bmatrix} R & t \\ 0 & 1 \end{bmatrix}$, where R is a 3 × 3 rotation matrix and t is a 3 × 1 translation matrix.
Specifically, the method for obtaining the transformation matrix is as follows:
the vision sensor may acquire depth data that stores three-dimensional information of points in the surrounding environment. Since the vision sensor is mounted on the robot, a significant portion of the points in the depth data set will be located on the ground plane, with the remaining portion of the points in the three-dimensional space. Based on the characteristics, the conversion relation of the coordinate system of the vision sensor relative to the coordinate system of the ground plane, namely a conversion matrix, can be obtained by obtaining the parameter of the ground plane, then selecting any space point and projecting the space point to the ground plane. Since the robot is always on the ground plane, the ground plane coordinate system is selected as the robot coordinate system.
Fig. 4 shows a method for determining the ground plane parameters:
the first step is as follows: randomly selecting 3 non-collinear data from the depth data Q, and then obtaining plane parameters by using the principle that three points determine a plane.
Let the three non-collinear points in $\mathbb{R}^3$ be $m_1(x_1, y_1, z_1)$, $m_2(x_2, y_2, z_2)$ and $m_3(x_3, y_3, z_3)$. The plane π determined by these three points satisfies, for any point $M(x, y, z) \in \mathbb{R}^3$ on the plane:
$\begin{vmatrix} x - x_1 & y - y_1 & z - z_1 \\ x_2 - x_1 & y_2 - y_1 & z_2 - z_1 \\ x_3 - x_1 & y_3 - y_1 & z_3 - z_1 \end{vmatrix} = 0$
Expanding this determinant and writing it in the form of the general equation
Ax + By + Cz + D = 0
yields the plane parameters A, B, C and D.
The second step: using the point-to-plane distance formula
$d = \frac{|Ax_0 + By_0 + Cz_0 + D|}{\sqrt{A^2 + B^2 + C^2}}$
calculate the distance from every point in the depth data set Q to the plane π; if the distance of a point is less than the initial threshold H, the point is added to the valid data set and the number of its elements, denoted P, is increased by 1.
The third step: repeat the second step; if the number of elements P of the current valid data set is greater than the initially set value N, end the iteration, stop sampling, and construct the plane model with the current parameters A, B, C, D; if the threshold N still cannot be satisfied after all points have been processed, execute the fourth step.
The fourth step: if the number of iterations exceeds K, exit; otherwise, increase the iteration count by 1 and repeat the above steps.
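A minimal Python sketch of the above four steps (a RANSAC-style search) follows; H, N and K are the thresholds named in the text, and the numeric defaults used here are illustrative assumptions only.

```python
import numpy as np

def fit_ground_plane(Q, H=0.02, N=5000, K=200, seed=0):
    """Q: (M, 3) array of depth points. Returns plane parameters (A, B, C, D)."""
    rng = np.random.default_rng(seed)
    best_count, best_plane = -1, None
    for _ in range(K):                                        # fourth step: at most K iterations
        m1, m2, m3 = Q[rng.choice(len(Q), 3, replace=False)]  # first step: sample 3 points
        normal = np.cross(m2 - m1, m3 - m1)                   # (A, B, C) of the 3-point plane
        if np.linalg.norm(normal) < 1e-9:
            continue                                          # collinear sample, resample
        D = -normal.dot(m1)
        dist = np.abs(Q @ normal + D) / np.linalg.norm(normal)  # second step: point-to-plane distance
        P = int((dist < H).sum())                             # size of the valid data set
        if P > best_count:
            best_count, best_plane = P, (*normal, D)
        if P > N:                                             # third step: enough inliers, stop sampling
            break
    return best_plane
```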
And after a plane equation is obtained, the conversion relation of the vision sensor coordinate system relative to the plane coordinate system is obtained.
First, a plane coordinate system is established. A projection method is used to determine the plane coordinate system: points on the axes of the vision sensor coordinate system are projected onto the plane, using the property that the geometric relation of corresponding points is unchanged across coordinate systems, and new coordinate axes are constructed from the projected points.
The projection process is described in terms of vectors. Let a spatial point be $P_o(X, Y, Z)$ and the spatial plane be $\pi(P_p, N_p)$, where $P_p$ is a point on the plane and $N_p$ is its unit normal vector. The distance from $P_o$ to the plane is easily found to be
$D = (P_o - P_p) \cdot N_p$
Moving $P_o$ by D in the direction opposite to the normal vector yields the projected point of $P_o$ on the plane:
$P = P_o - \left((P_o - P_p) \cdot N_p\right) N_p$
the specific method comprises the following steps:
the first step is as follows: projecting the lower origin O (0,0,0) of the vision sensor coordinate system to the plane pi to obtain the lower origin O of the plane coordinate systemw(0,0,0)。
The second step is that: respectively projecting a point X (1,0,0) and a point Y (0,1,0) in a visual sensor coordinate system to a plane pi to obtain a point X in a plane coordinate systemwAnd Yw;
The third step: and solving the x axis and the y axis of the plane coordinate system. Can determine the projected point OwAnd point XwThe component vector constitutes the x-axis, but the y-axis may be one of two vectors in a plane perpendicular to the x-axis. Arbitrarily given a vertical vector as the y-axis, for that vector and point XwAnd point YwAnd (4) performing dot product on the formed vectors, and taking the initially given vector as a y axis if the result is greater than 0, and taking the inverse of the initially given vector as the y axis if the result is not greater than 0.
The fourth step: and obtaining a z-axis from a Cartesian coordinate system according to the obtained x-axis and y-axis.
The axis vectors obtained in the above process constitute the rotation matrix R of the plane coordinate system relative to the vision sensor coordinate system, and the projected point $O_w$ constitutes the translation vector t. Together they form the transformation matrix T.
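The following sketch assembles the plane coordinate system and the transformation matrix T as described above, assuming the plane parameters A, B, C, D come from the preceding fit; whether T or its inverse is applied to the depth points depends on the convention adopted for "relative to", which the text leaves open.

```python
import numpy as np

def project_point(p, p_on_plane, n):
    """Project point p onto the plane through p_on_plane with unit normal n."""
    return p - np.dot(p - p_on_plane, n) * n

def plane_frame_transform(A, B, C, D):
    norm = np.linalg.norm([A, B, C])
    n = np.array([A, B, C], dtype=float) / norm
    p_on_plane = -(D / norm) * n                                   # a point satisfying Ax + By + Cz + D = 0

    Ow = project_point(np.zeros(3), p_on_plane, n)                 # first step: project the origin
    Xw = project_point(np.array([1.0, 0.0, 0.0]), p_on_plane, n)   # second step: project X(1,0,0)
    Yw = project_point(np.array([0.0, 1.0, 0.0]), p_on_plane, n)   # ... and Y(0,1,0)

    x_axis = (Xw - Ow) / np.linalg.norm(Xw - Ow)                   # third step: x axis from Ow and Xw
    y_axis = np.cross(n, x_axis)                                   # one of the two in-plane perpendiculars
    if np.dot(y_axis, Yw - Ow) <= 0:                               # disambiguate the sign with the projected Y point
        y_axis = -y_axis
    z_axis = np.cross(x_axis, y_axis)                              # fourth step: z axis (right-handed)

    R = np.column_stack([x_axis, y_axis, z_axis])                  # plane-frame axes expressed in sensor coordinates
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = Ow                                                  # translation t = projected origin Ow
    return T
```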
According to an embodiment of the present invention, there is provided an obstacle information obtaining method, where R is: $R = \begin{bmatrix} \hat{x}_s\cdot\hat{x}_r & \hat{y}_s\cdot\hat{x}_r & \hat{z}_s\cdot\hat{x}_r \\ \hat{x}_s\cdot\hat{y}_r & \hat{y}_s\cdot\hat{y}_r & \hat{z}_s\cdot\hat{y}_r \\ \hat{x}_s\cdot\hat{z}_r & \hat{y}_s\cdot\hat{z}_r & \hat{z}_s\cdot\hat{z}_r \end{bmatrix}$, wherein $\hat{x}_s$, $\hat{y}_s$, $\hat{z}_s$ are the unit direction vectors of the x, y and z axes of the vision sensor coordinate system, and $\hat{x}_r$, $\hat{y}_r$, $\hat{z}_r$ are the unit direction vectors of the x, y and z axes of the robot coordinate system.
According to an embodiment of the present invention, there is provided an obstacle information acquisition method,
the t is: $t = \begin{bmatrix} \Delta x \\ \Delta y \\ \Delta z \end{bmatrix}$, wherein Δx, Δy and Δz are the offset distances of the origin of the vision sensor coordinate system with respect to the origin of the robot coordinate system in the x, y and z directions of the robot coordinate system, respectively.
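Purely as a hypothetical worked example (the application does not describe a specific mounting): a sensor fixed 0.40 m above the robot origin and 0.10 m forward, with the common camera convention of z pointing forward and y pointing down, gives the following R, t and T.

```python
import numpy as np

# hypothetical mounting only: sensor z (forward) -> robot y, sensor y (down) -> robot -z
R = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, -1.0, 0.0]])
t = np.array([0.0, 0.10, 0.40])              # (dx, dy, dz) offsets in the robot frame

T = np.eye(4)
T[:3, :3] = R
T[:3, 3] = t

p_sensor = np.array([0.0, 0.0, 1.5, 1.0])    # a point 1.5 m in front of the sensor (homogeneous)
p_robot = T @ p_sensor                       # -> [0.0, 1.60, 0.40, 1.0] in robot coordinates
```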
In order to achieve the above object, according to another aspect of the present application, there is provided an obstacle information acquiring apparatus including:
the depth information acquisition module 1 is configured to obtain a depth image of the current environment through the visual sensor, wherein the depth image provides depth information between an entity in the current environment and the visual sensor;
the first three-dimensional coordinate acquisition module 2 is configured to obtain the first three-dimensional coordinate of the entity in the current environment according to the depth information; the first three-dimensional coordinate is the three-dimensional coordinate of the entity in the vision sensor coordinate system; the vision sensor coordinate system is a coordinate system taking the visual sensor as an origin;
the coordinate conversion module 3 is configured to convert the first three-dimensional coordinate of the entity into a second three-dimensional coordinate; the second three-dimensional coordinate is the three-dimensional coordinate of the entity in the robot coordinate system; the robot coordinate system is a coordinate system taking one point in the robot as an origin;
the grid map generation module 4 is configured to project the second three-dimensional coordinate of the entity onto a two-dimensional grid map to obtain a two-dimensional coordinate of the entity, and to use the two-dimensional coordinate of the entity as final obstacle data; the two-dimensional grid map coincides with the plane of the bottom surface of the robot;
and the obstacle distance generation module 5 is used for acquiring the horizontal distance and the orientation between the entity and the robot according to the final obstacle data.
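A minimal structural sketch of the five modules is given below, reusing the helper functions from the earlier method sketch; the class and method names are illustrative assumptions and are not terms used in this application.

```python
import numpy as np

class ObstacleInfoDevice:
    """Modules 1-5 wired together; T is the sensor-to-robot transformation matrix."""

    def __init__(self, T, intrinsics):
        self.T = T                        # 4x4 homogeneous transform
        self.intrinsics = intrinsics      # (fx, fy, cx, cy), assumed known

    def acquire_depth(self, sensor):                  # module 1: depth information acquisition
        return sensor.read_depth_image()

    def first_coordinates(self, depth):               # module 2: first 3D coordinates (sensor frame)
        return depth_to_sensor_points(depth, *self.intrinsics)

    def second_coordinates(self, pts_sensor):         # module 3: coordinate conversion (robot frame)
        return sensor_to_robot(pts_sensor, self.T)

    def grid_map(self, pts_robot):                    # module 4: grid map generation
        return project_to_grid(pts_robot)

    def obstacle_distances(self, grid):               # module 5: obstacle distance generation
        cells = np.argwhere(grid == 1)
        return [distance_and_bearing(tuple(c)) for c in cells]
```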
In order to achieve the above object, according to another aspect of the present application, there is provided a robot including a robot body and a vision sensor provided on the robot.
It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented by a general purpose computing device, they may be centralized on a single computing device or distributed across a network of multiple computing devices, and they may alternatively be implemented by program code executable by a computing device, such that they may be stored in a storage device and executed by a computing device, or fabricated separately as individual integrated circuit modules, or fabricated as a single integrated circuit module from multiple modules or steps. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.
Claims (9)
1. An obstacle information acquisition method, characterized by comprising:
obtaining, by a visual sensor, a depth image of a current environment, the depth image providing depth information of entities in the current environment;
determining a first three-dimensional coordinate of the entity based on a visual sensor coordinate system according to the depth information; the visual sensor coordinate system is a coordinate system taking the visual sensor as an origin; converting the first three-dimensional coordinates of the entity into second three-dimensional coordinates based on a robot coordinate system; the robot coordinate system is obtained by taking one point in the robot as an origin, taking the horizontal plane where the lowest point of the robot is located as the x-y plane, taking the central vertical line of the robot as the z axis, taking the intersection point of the z axis and the x-y plane as the origin of the robot coordinate system, and taking the positive direction of the y axis as directly in front of the robot;
projecting the second three-dimensional coordinate of the entity onto a two-dimensional grid map to obtain a two-dimensional coordinate of the entity; the two-dimensional grid map coincides with the plane of the bottom surface of the robot;
determining the position relation between the entity and the robot according to the two-dimensional coordinates;
a grid map with a resolution of 500 × 500 is adopted to represent a real 10 m × 10 m space; black cells are regarded as obstacles, representing objects lower than the height of the robot, and white cells are the opposite; available obstacle data are obtained by using the grid map; by referring to the skeleton data, the cells of the area occupied by the skeleton are always regarded as white.
2. The obstacle information acquiring method according to claim 1, wherein the first three-dimensional coordinates of the entity include a plurality of three-dimensional coordinate points for embodying a shape characteristic of the entity.
3. The obstacle information acquiring method according to claim 2, further comprising, after converting the first three-dimensional coordinates of the entity into second three-dimensional coordinates:
obtaining the height of the entity according to the second three-dimensional coordinate;
if the highest point in the second three-dimensional coordinate of the entity is lower than the height of the robot, the entity is judged to be an obstacle;
if a lowest point in the second three-dimensional coordinates of the entity is higher than the height of the robot, determining the entity to be a passage that the robot can pass through.
4. The obstacle information acquiring method according to claim 3, further comprising, after obtaining the two-dimensional coordinates of the entity:
the obstacles and traversable paths are distinctively labeled in the two-dimensional grid map.
5. The obstacle information acquiring method according to claim 1, wherein the converting the first three-dimensional coordinates of the entity into the second three-dimensional coordinates comprises: acquiring the position of the vision sensor on the robot to obtain a transformation matrix of the vision sensor coordinate system relative to the robot coordinate system; the transformation matrix is $T = \begin{bmatrix} R & t \\ 0 & 1 \end{bmatrix}$, where R is a 3 × 3 rotation matrix and t is a 3 × 1 translation matrix.
6. The obstacle information acquiring method according to claim 5, wherein R is: $R = \begin{bmatrix} \hat{x}_s\cdot\hat{x}_r & \hat{y}_s\cdot\hat{x}_r & \hat{z}_s\cdot\hat{x}_r \\ \hat{x}_s\cdot\hat{y}_r & \hat{y}_s\cdot\hat{y}_r & \hat{z}_s\cdot\hat{y}_r \\ \hat{x}_s\cdot\hat{z}_r & \hat{y}_s\cdot\hat{z}_r & \hat{z}_s\cdot\hat{z}_r \end{bmatrix}$, wherein $\hat{x}_s$, $\hat{y}_s$, $\hat{z}_s$ are the unit direction vectors of the x, y and z axes of the vision sensor coordinate system, and $\hat{x}_r$, $\hat{y}_r$, $\hat{z}_r$ are the unit direction vectors of the x, y and z axes of the robot coordinate system.
7. The obstacle information acquiring method according to claim 5, wherein t is: $t = \begin{bmatrix} \Delta x \\ \Delta y \\ \Delta z \end{bmatrix}$, wherein Δx, Δy and Δz are the offset distances of the origin of the vision sensor coordinate system with respect to the origin of the robot coordinate system in the x, y and z directions of the robot coordinate system, respectively.
8. An obstacle information acquiring apparatus, characterized by comprising:
a depth information acquisition module, configured to obtain a depth image of a current environment through the visual sensor, wherein the depth image provides depth information between an entity in the current environment and the visual sensor;
a first three-dimensional coordinate acquisition module, configured to obtain the first three-dimensional coordinate of the entity in the current environment according to the depth information; the first three-dimensional coordinate is the three-dimensional coordinate of the entity in a vision sensor coordinate system; the vision sensor coordinate system is a coordinate system taking the visual sensor as an origin; a coordinate conversion module, configured to convert the first three-dimensional coordinate of the entity into a second three-dimensional coordinate; the second three-dimensional coordinate is the three-dimensional coordinate of the entity in a robot coordinate system; the robot coordinate system is a coordinate system taking one point in the robot as an origin, wherein the horizontal plane where the lowest point of the robot is located is the x-y plane, the central vertical line of the robot is the z axis, the intersection point of the z axis and the x-y plane is the origin of the robot coordinate system, and the positive direction of the y axis points directly in front of the robot;
a grid map generation module, configured to project the second three-dimensional coordinate of the entity onto a two-dimensional grid map to obtain a two-dimensional coordinate of the entity, the two-dimensional coordinate of the entity being used as final obstacle data; the two-dimensional grid map coincides with the plane of the bottom surface of the robot;
the obstacle distance generating module is used for acquiring the horizontal distance and the direction between the entity and the robot according to the final obstacle data;
a grid map with a resolution of 500 × 500 is adopted to represent a real 10 m × 10 m space; black cells are regarded as obstacles, representing objects lower than the height of the robot, and white cells are the opposite; available obstacle data are obtained by using the grid map; by referring to the skeleton data, the cells of the area occupied by the skeleton are always regarded as white.
9. A robot is characterized by comprising a robot body and a vision sensor arranged on the robot;
the robot is configured to perform:
obtaining, by a visual sensor, a depth image of a current environment, the depth image providing depth information of entities in the current environment;
determining a first three-dimensional coordinate of the entity based on a visual sensor coordinate system according to the depth information; the visual sensor coordinate system is a coordinate system taking the visual sensor as an origin; converting the first three-dimensional coordinates of the entity into second three-dimensional coordinates based on a robot coordinate system; the robot coordinate system is obtained by taking one point in the robot as an origin, taking the horizontal plane where the lowest point of the robot is located as the x-y plane, taking the central vertical line of the robot as the z axis, taking the intersection point of the z axis and the x-y plane as the origin of the robot coordinate system, and taking the positive direction of the y axis as directly in front of the robot;
projecting the second three-dimensional coordinate of the entity onto a two-dimensional grid map to obtain a two-dimensional coordinate of the entity; the two-dimensional grid map coincides with the plane of the bottom surface of the robot;
determining the position relation between the entity and the robot according to the two-dimensional coordinates;
a grid map with a resolution of 500 × 500 is adopted to represent a real 10 m × 10 m space; black cells are regarded as obstacles, representing objects lower than the height of the robot, and white cells are the opposite; available obstacle data are obtained by using the grid map; by referring to the skeleton data, the cells of the area occupied by the skeleton are always regarded as white.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711384835.1A CN108256430B (en) | 2017-12-20 | 2017-12-20 | Obstacle information acquisition method and device and robot |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711384835.1A CN108256430B (en) | 2017-12-20 | 2017-12-20 | Obstacle information acquisition method and device and robot |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108256430A CN108256430A (en) | 2018-07-06 |
CN108256430B true CN108256430B (en) | 2021-01-29 |
Family
ID=62722671
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711384835.1A Active CN108256430B (en) | 2017-12-20 | 2017-12-20 | Obstacle information acquisition method and device and robot |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108256430B (en) |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6816070B2 (en) | 2018-08-24 | 2021-01-20 | ファナック株式会社 | Interference avoidance device and robot system |
CN109643127B (en) | 2018-11-19 | 2022-05-03 | 深圳阿科伯特机器人有限公司 | Map construction, positioning, navigation and control method and system, and mobile robot |
CN111383321A (en) * | 2018-12-28 | 2020-07-07 | 沈阳新松机器人自动化股份有限公司 | Three-dimensional modeling method and system based on 3D vision sensor |
CN109828582B (en) * | 2019-02-28 | 2019-10-29 | 沈阳师范大学 | Based on intelligent carriage paths planning method combined of multi-sensor information and system |
CN110076777B (en) * | 2019-05-05 | 2020-11-27 | 北京云迹科技有限公司 | Goods taking method and device |
CN110202577A (en) * | 2019-06-15 | 2019-09-06 | 青岛中科智保科技有限公司 | A kind of autonomous mobile robot that realizing detection of obstacles and its method |
CN112318496A (en) * | 2019-08-05 | 2021-02-05 | 乐歆机器人(东莞)有限公司 | Depth camera-based visual motion channel construction system and method |
CN112631266A (en) * | 2019-09-20 | 2021-04-09 | 杭州海康机器人技术有限公司 | Method and device for mobile robot to sense obstacle information |
CN113552589A (en) * | 2020-04-01 | 2021-10-26 | 杭州萤石软件有限公司 | Obstacle detection method, robot, and storage medium |
CN113494916B (en) * | 2020-04-01 | 2024-07-02 | 杭州萤石软件有限公司 | Map construction method and multi-legged robot |
CN111870931B (en) * | 2020-06-24 | 2024-11-05 | 合肥安达创展科技股份有限公司 | Human-computer interaction method and system for somatosensory interaction |
CN112051921B (en) * | 2020-07-02 | 2023-06-27 | 杭州易现先进科技有限公司 | AR navigation map generation method, device, computer equipment and readable storage medium |
CN112747746A (en) * | 2020-12-25 | 2021-05-04 | 珠海市一微半导体有限公司 | Point cloud data acquisition method based on single-point TOF, chip and mobile robot |
CN115167418B (en) | 2022-07-04 | 2023-06-27 | 未岚大陆(北京)科技有限公司 | Transfer path generation method, apparatus, electronic device, and computer storage medium |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9304001B2 (en) * | 2013-07-03 | 2016-04-05 | Samsung Electronics Co., Ltd | Position recognition methods of autonomous mobile robots |
CN105785989A (en) * | 2016-02-24 | 2016-07-20 | 中国科学院自动化研究所 | System for calibrating distributed network camera by use of travelling robot, and correlation methods |
CN105652873A (en) * | 2016-03-04 | 2016-06-08 | 中山大学 | Mobile robot obstacle avoidance method based on Kinect |
CN106052674A (en) * | 2016-05-20 | 2016-10-26 | 青岛克路德机器人有限公司 | Indoor robot SLAM method and system |
CN106441275A (en) * | 2016-09-23 | 2017-02-22 | 深圳大学 | Method and device for updating planned path of robot |
CN106680832A (en) * | 2016-12-30 | 2017-05-17 | 深圳优地科技有限公司 | Obstacle detection method and device of mobile robot and mobile robot |
CN107179768A (en) * | 2017-05-15 | 2017-09-19 | 上海木爷机器人技术有限公司 | A kind of obstacle recognition method and device |
Also Published As
Publication number | Publication date |
---|---|
CN108256430A (en) | 2018-07-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108256430B (en) | Obstacle information acquisition method and device and robot | |
CN109564690B (en) | Estimating the size of an enclosed space using a multi-directional camera | |
KR102455845B1 (en) | Robot mapping system and method | |
Maier et al. | Real-time navigation in 3D environments based on depth camera data | |
Li et al. | Navigation simulation of a Mecanum wheel mobile robot based on an improved A* algorithm in Unity3D | |
Huang et al. | A novel multi-planar LIDAR and computer vision calibration procedure using 2D patterns for automated navigation | |
Kuramachi et al. | G-ICP SLAM: An odometry-free 3D mapping system with robust 6DoF pose estimation | |
CN102622762A (en) | Real-time camera tracking using depth maps | |
Gong et al. | Extrinsic calibration of a 3D LIDAR and a camera using a trihedron | |
JP6746050B2 (en) | Calibration device, calibration method, and calibration program | |
Chen et al. | Real-time 3D mobile mapping for the built environment | |
Abanay et al. | A calibration method of 2D LIDAR-Visual sensors embedded on an agricultural robot | |
Garrote et al. | 3D point cloud downsampling for 2D indoor scene modelling in mobile robotics | |
Glas et al. | SNAPCAT-3D: Calibrating networks of 3D range sensors for pedestrian tracking | |
Oßwald et al. | Efficient coverage of 3D environments with humanoid robots using inverse reachability maps | |
Zhang et al. | Effective safety strategy for mobile robots based on laser-visual fusion in home environments | |
Krainin et al. | Manipulator and object tracking for in hand model acquisition | |
CN110202577A (en) | A kind of autonomous mobile robot that realizing detection of obstacles and its method | |
Kawanishi et al. | Parallel line-based structure from motion by using omnidirectional camera in textureless scene | |
Mojtahedzadeh | Robot obstacle avoidance using the Kinect | |
Chen et al. | Low cost and efficient 3D indoor mapping using multiple consumer RGB-D cameras | |
Lee et al. | Autonomous view planning methods for 3D scanning | |
Xu et al. | A flexible 3D point reconstruction with homologous laser point array and monocular vision | |
Lu et al. | 3-D location estimation of underwater circular features by monocular vision | |
Gulzar et al. | See what i mean-probabilistic optimization of robot pointing gestures |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |