
CN112223278B - A detection robot following method and system based on depth visual information - Google Patents

A detection robot following method and system based on depth visual information

Info

Publication number
CN112223278B
Authority
CN
China
Prior art keywords
target
depth
following
robot
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010943502.3A
Other languages
Chinese (zh)
Other versions
CN112223278A (en)
Inventor
刘广亮
郑江花
马争光
朱琳
肖永飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
New Material Institute of Shandong Academy of Sciences
Original Assignee
New Material Institute of Shandong Academy of Sciences
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by New Material Institute of Shandong Academy of Sciences filed Critical New Material Institute of Shandong Academy of Sciences
Priority to CN202010943502.3A priority Critical patent/CN112223278B/en
Publication of CN112223278A publication Critical patent/CN112223278A/en
Application granted granted Critical
Publication of CN112223278B publication Critical patent/CN112223278B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1602 Programme controls characterised by the control system, structure, architecture
    • B25J9/161 Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1656 Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1661 Programme controls characterised by programming, planning systems for manipulators characterised by task planning, object-oriented languages
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 Complex mathematical operations
    • G06F17/16 Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Software Systems (AREA)
  • Pure & Applied Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Automation & Control Theory (AREA)
  • Mathematical Optimization (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Algebra (AREA)
  • Fuzzy Systems (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract



The present disclosure relates to a detection robot following method and system based on depth visual information, belonging to the technical field of mobile detection robot applications. The method includes the following steps: acquiring depth image information, RGB image information and the synchronization information of the two; displaying the acquired RGB image and, with the RGB image as the reference, selecting the following target and marking the following target rectangular area; after the following target is selected, tracking the target position in real time with the KCF tracking algorithm; tracking the rectangular position in the RGB image in real time according to the KCF algorithm and calculating the depth position information of the following target in the depth image, thereby determining the distance and orientation of the detection robot relative to the following target; and, after the relative distance and orientation of the following target are calculated in step 4, issuing a motion control instruction to drive the detection robot, which then follows the target according to the motion control instruction. The following system and method of the present disclosure have good real-time performance and stable performance, and can flexibly, conveniently and accurately follow a designated target.


Description

Detection robot following method and system based on depth visual information
Technical Field
The disclosure belongs to the technical field of mobile detection robot application, and particularly relates to a detection robot following method and system based on depth visual information.
Background
The statements herein merely provide background related to the present disclosure and may not necessarily constitute prior art.
The detection robot is a type of special robot with sensing, decision-making and movement capabilities; it involves multiple specialized technologies such as artificial intelligence, automatic control, information processing, image processing and pattern recognition, spans multiple disciplines such as computing, automation, communication, machinery and electronics, and embodies the latest development level of information technology and artificial intelligence technology.
In recent years, with the development of computer technology and artificial intelligence technology, detection robots have been widely used not only in industrial manufacturing but also in military guidance, civilian applications, marine exploration, moon/Mars exploration, and the like. Research on detection robots has become a hot topic. Detection robots play an increasingly important role in modern society and have penetrated many important fields. They are used in place of people for on-site investigation and detection in high-risk and toxic environments, for example as underground roadway detection robots, fire detection robots, hazardous chemical inspection robots, and underwater detection robots.
One important task of the detection robot is target detection and tracking/following. Tracking and following of visual targets is also one of the hottest topics in current machine vision research. The major target following techniques at present include image-processing-based, infrared-based, and ultrasonic-based following techniques. Following based on visual images offers intuitive tracking and following, remote selection or replacement of the tracking target, and similar capabilities, and is currently the most widely researched robot following approach.
However, vision-based robot following spans a wide range of technical fields, imposes strict real-time requirements, and involves complex algorithms that are difficult to implement.
Disclosure of Invention
Aiming at the technical problems in the prior art, the present disclosure provides a detection robot following system and method based on depth visual information.
At least one embodiment of the present disclosure provides a detection robot following method based on depth visual information, including the following steps:
step 1: acquiring depth image information, RGB image information and synchronous information of the depth image information and the RGB image information;
step 2: displaying the acquired RGB image, selecting a following target and marking a following target rectangular area by taking the RGB image as a reference;
step 3: after the following target is selected, carrying out real-time target position tracking through a KCF tracking algorithm;
step 4: tracking the rectangular position of the RGB image in real time according to a KCF algorithm, and calculating the depth position information of the following target in the depth image, thereby determining the distance and the direction of the detection robot relative to the following target;
step 5: after the relative distance and direction of the following target are calculated in step 4, issuing a motion control instruction for driving the detection robot to move, and carrying out target following by the detection robot according to the motion control instruction.
Further, the specific implementation of acquiring the depth image information and the RGB image information in step 1 includes the following sub-steps:
step 1.1: starting the ROS node of the depth camera for image collection and publishing;
step 1.2: subscribing to and acquiring the depth image information and RGB image information packets published by the ROS node;
step 1.3: carrying out synchronous processing on the depth image information and the RGB image information;
step 1.4: displaying the RGB image information.
Further, selecting the following target specific implementation in step 2 comprises the sub-steps of:
step 2.1: determining a target to be followed in the obtained RGB image;
step 2.2: selecting a rectangular area according to the position of the following target in the RGB image;
step 2.3: drawing a rectangular frame containing the following target as an explicit mark of the target, wherein following the target is equivalent to following the image area within the rectangular frame.
Further, the step 3 is specifically divided into the following steps:
step 3.1: taking the rectangular region of the target RGB image obtained in step 2 as a template, and initializing the KCF algorithm;
step 3.2: calculating, through the KCF algorithm, a tracking rectangular area in the RGB image frames after the target rectangular area is selected;
step 3.3: displaying the KCF tracking rectangular area in real time.
Further, the step 4 of calculating the depth position information of the following target in the depth image is specifically divided into the following sub-steps:
step 4.1: tracking the rectangular tracking area in the RGB image in real time according to the KCF algorithm, and taking the pixel coordinates of this rectangular area as the equivalent rectangular pixel coordinates of the following target;
step 4.2: calculating the depth centroid of the rectangular region in the depth image according to the target equivalent rectangular pixel coordinates, namely the average value of the depth information of the rectangular region, and taking the depth centroid as the target depth information;
step 4.3: taking the relative position between the target depth position information and the center position of the target rectangular region in the RGB image as the distance and direction of the robot relative to the following target.
Further, the step 5 is specifically divided into the following substeps:
step 5.1: calculating the advancing speed and the deflection angle of the robot according to the preset safety distance between the detection robot and the following target, the center position of the visual field and the distance and the direction of the current following target relative to the detection robot;
step 5.2: issuing, by the ROS system, a robot movement instruction including a robot forward speed and a yaw angle.
Further, in step 5, the detection robot receives the issued robot motion command, including the robot forward speed and deflection angle; the rotating speeds of the two wheels of the robot are calculated through the two-wheel differential motion model according to the linear velocity and angular velocity of the robot and the robot's mechanical structure model; the robot sends the two-wheel rotational speeds to the driver over the CAN communication protocol, and the driver drives the motors to carry out the target-following motion.
At least one embodiment of the present disclosure further provides a detection robot following system based on depth vision information, the system includes a robot moving platform and a remote operation terminal; the robot mobile platform is provided with an image acquisition module and a motion control module; the remote operation end is provided with an image processing module;
the image acquisition module is used for acquiring target depth image information and RGB image information and sending them to the image processing module of the remote operation end; the image processing module receives the depth image and RGB image information, displays the images and determines a tracking target; and, according to the target depth image data, it calculates the distance between the robot and the target point and sends a motion control signal to the mobile platform.
Further, the system also comprises an ROS system arranged on the robot moving platform; the ROS system is used for collecting and publishing the depth image and the RGB image; meanwhile, the ROS system is also used for converting the linear velocity and angular velocity of the robot into two-wheel rotating speeds according to the two-wheel differential motion model and sending them to the motion control module for execution; and the motion control module transmits the left and right wheel speeds to the robot's driver through CAN communication for execution.
Furthermore, a remote operation module is arranged on the remote operation terminal and used for designating the following target.
The beneficial effects of this disclosure are as follows:
(1) The robot following method of the present disclosure fuses target tracking with depth information to determine the target direction and realize target following by the detection robot; the method has good real-time performance, is stable, and can flexibly, conveniently and accurately follow a designated target.
(2) The following method adopts a depth camera to obtain depth information and the KCF algorithm to realize real-time tracking; it places no special requirements on the vehicle-mounted computer, and an ordinary configuration can meet the real-time requirement.
(3) The following method takes the target's average depth information as its equivalent position information relative to the robot, achieving high accuracy and a fast running speed.
(4) The following system motion control part and the image acquisition part adopt an ROS system, and have the characteristic of distributed calculation, so that the whole system is stable and reliable, and is convenient to maintain and expand.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure and are not to limit the disclosure.
FIG. 1 is a diagram of a detection robot following system hardware system architecture provided in an embodiment of the present disclosure;
FIG. 2 is a diagram of a software system architecture of a tracking system of a detection robot according to an embodiment of the present invention;
FIG. 3 is a flow chart of a following method provided by an embodiment of the present invention;
FIG. 4 is a flow chart of the follower software provided by an embodiment of the present invention;
FIG. 5 is a schematic illustration of differential motion provided by an embodiment of the present invention;
FIG. 6 is a schematic diagram of a target centroid algorithm provided by an embodiment of the present invention;
fig. 7 is a schematic diagram of an ROS distribution structure according to an embodiment of the present invention.
Detailed Description
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs.
In the description of the present disclosure, it is to be understood that the terms "upper", "lower", "top", "bottom", "inner", "outer", and the like, indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, are only for convenience in describing the present disclosure and simplifying the description, and do not indicate or imply that the referred devices or elements must have a specific orientation, be constructed and operated in a specific orientation, and thus, should not be construed as limiting the present disclosure.
As shown in fig. 1 and fig. 2, a robot following system provided by the embodiment of the present disclosure includes a hardware system and a software system, where the hardware system includes four hardware modules, i.e., a depth camera, a robot moving platform, an onboard computer, and a remote operation terminal, where the depth camera and the onboard computer are both installed on the robot moving platform, the depth camera is mainly used to obtain target depth image information and RGB image information, and the onboard computer is installed with an ROS system and a motion control software module; the remote operation terminal communicates with the vehicle-mounted computer through a wireless network WLAN to complete instruction transmission; the remote operation terminal is provided with an image processing software module for receiving target depth image information and RGB image information transmitted by the depth camera, calculating the distance between the robot and a target point according to target depth image data, and issuing a motion control signal through ROS; the robot mobile platform is used for receiving motion control signals sent by the vehicle-mounted computer and making robot movement response to carry out target following.
Further, the software system in the present embodiment includes an ROS system, a motion control software module, an image processing software module, and a remote operation software module. The ROS system is arranged on the vehicle-mounted computer and used for collecting and publishing the depth image and the RGB image; meanwhile, the ROS system is also used for converting the linear velocity and angular velocity of the robot into two-wheel rotating speeds according to the two-wheel differential motion model and sending them to the motion control software module for execution. The motion control software module transmits the left and right wheel speeds to the detection robot's driver through CAN communication for execution. The image processing software module is used for receiving the depth image and RGB image information sent by the detection robot, displaying the image, tracking and processing the target, designating the tracked target through the remote operation module, calculating the distance between the robot and the target point according to the target depth image data, and issuing a motion control signal to the detection robot platform through the ROS network system.
Other embodiments of the present disclosure further provide a detection robot following method based on depth visual information, which mainly includes the following six steps:
step 1: acquiring depth image information and RGB image information;
step 2: manually selecting a following target;
step 3: tracking the following target in real time;
step 4: calculating the depth position information of the following target;
step 5: issuing a motion instruction for driving the robot;
step 6: executing the motion instruction.
These steps are described in detail below with reference to FIG. 3:
Step 1: acquiring image information, comprising the depth image information, the RGB image information and the synchronization information of the two. This specifically comprises the following four substeps:
step 1.1: on the robot side, starting the ROS node of the depth camera mounted on the detection robot, acquiring image information through the ROS node, and publishing the image topics;
step 1.2: the ROS node at the remote control end subscribes to the depth image information and RGB image information topics;
step 1.3: the image processing software module on the remote control end acquires the depth image data and RGB image data from the subscribed topics and synchronizes them;
step 1.4: the image processing software module displays the RGB image information on the remote control end computer screen.
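The ROS-based acquisition and synchronization of steps 1.1-1.4 can be sketched with rospy and message_filters. The snippet below is a minimal illustration only; the topic names /camera/rgb/image_raw and /camera/depth/image_raw are assumptions, since the actual names depend on the depth camera driver.

```python
# Minimal sketch of step 1 (ROS1 / rospy; assumed topic names).
import rospy
import message_filters
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()

def synced_callback(rgb_msg, depth_msg):
    # Step 1.3: the two streams arrive here already approximately synchronized.
    rgb = bridge.imgmsg_to_cv2(rgb_msg, desired_encoding="bgr8")
    depth = bridge.imgmsg_to_cv2(depth_msg, desired_encoding="passthrough")
    # Step 1.4: display / hand (rgb, depth) to the selection and tracking stages.

rospy.init_node("image_processing_node")
rgb_sub = message_filters.Subscriber("/camera/rgb/image_raw", Image)      # assumed topic
depth_sub = message_filters.Subscriber("/camera/depth/image_raw", Image)  # assumed topic
sync = message_filters.ApproximateTimeSynchronizer([rgb_sub, depth_sub],
                                                   queue_size=10, slop=0.05)
sync.registerCallback(synced_callback)
rospy.spin()
```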
Step 2: while the RGB image is displayed on the remote control end computer screen, the operator manually selects and marks the following target with the mouse, i.e., a following target rectangular area is marked with the RGB image as the reference. The specific implementation comprises the following three substeps:
step 2.1: an operator determines a target to be followed in an RGB image of a remote control end computer screen;
step 2.2: selecting a rectangular area according to the position of the following target in the image;
step 2.3: a rectangular box containing the target is drawn as an explicit mark of the target; following the target is equivalent to following the image area within the rectangular box.
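One convenient way to realize the manual rectangle selection of step 2 is OpenCV's ROI selector; the patent does not prescribe a particular tool, so the snippet below is only an illustrative sketch in which `rgb` is the displayed RGB frame.

```python
# Minimal sketch of step 2: the operator drags a rectangle around the target.
import cv2

bbox = cv2.selectROI("Select following target", rgb, showCrosshair=True)
cv2.destroyWindow("Select following target")
# bbox = (x, y, w, h); following the target means following the image area
# inside this rectangle (steps 2.2 and 2.3).
```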
Step 3: after the operator selects a following target at the remote control end, real-time target position tracking is carried out through the KCF tracking algorithm. This specifically comprises the following three substeps:
step 3.1: taking the rectangular region of the target RGB image obtained in step 2 as a template, and initializing the KCF tracking algorithm.
Step 3.2: calculating a tracking rectangular area through a KCF algorithm;
step 3.3: displaying the KCF tracking rectangular area on the remote control end computer screen in real time.
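The KCF tracking of step 3 can be sketched with OpenCV's KCF tracker (an implementation choice assumed here, requiring the opencv-contrib build; the patent only specifies the KCF algorithm itself). `rgb` and `bbox` come from the selection step above.

```python
# Minimal sketch of step 3: initialize KCF with the selected rectangle, then
# update it on every new RGB frame and draw the tracked box.
import cv2

tracker = cv2.TrackerKCF_create()   # in some OpenCV versions: cv2.TrackerKCF.create()
tracker.init(rgb, bbox)             # step 3.1: template = selected target rectangle

def track(frame):
    ok, box = tracker.update(frame)          # step 3.2: tracked rectangle in this frame
    if ok:                                   # step 3.3: display the tracking rectangle
        x, y, w, h = (int(v) for v in box)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    return ok, box
```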
Step 4: tracking the rectangular position of the RGB image in real time according to the KCF algorithm, and calculating the depth position information of the following target in the depth image, thereby determining the distance and direction of the detection robot relative to the following target. Calculating the depth position information of the following target in the depth image specifically comprises the following three substeps:
step 4.1: tracking the rectangular position of the RGB image in real time according to the KCF algorithm, and taking the rectangular pixel coordinates at this position as the equivalent rectangular pixel coordinates of the following target;
step 4.2: calculating the depth centroid of the rectangular region in the depth image according to the target equivalent rectangular pixel coordinates, namely the average value of the depth information of the rectangular region, and taking the depth centroid as the target depth information;
step 4.3: taking the relative position between the target depth position information and the center position of the target rectangular area in the RGB image as the distance and direction of the robot relative to the target.
Step 5: after the relative distance and direction of the following target are calculated in step 4, a motion control instruction topic for driving the robot to move is issued through the ROS system, and the target is followed. This specifically comprises the following two substeps:
step 5.1: according to a preset safety distance between the detection robot and the target, the center position of the visual field and the distance and the direction of the current target relative to the robot, the advancing speed (namely the linear speed of the robot) and the deflection angle (namely the angular speed of the robot) of the robot are calculated;
step 5.2: the control end issues a robot motion instruction topic (cmd_vel) through ROS; the topic comprises the advancing speed (namely the linear speed of the robot) and the deflection angle (namely the angular speed of the robot).
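A minimal sketch of step 5 follows, assuming the quantities named in the centroid section further below (min_z, x_avg, goal_z, z_scale, x_scale) have already been computed; /cmd_vel is the motion instruction topic listed later for fig. 7.

```python
# Minimal sketch of step 5: publish forward speed and yaw rate as a Twist on /cmd_vel.
import rospy
from geometry_msgs.msg import Twist

cmd_pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)

def publish_motion(min_z, x_avg, goal_z, z_scale, x_scale):
    cmd = Twist()
    cmd.linear.x = (min_z - goal_z) * z_scale   # advancing speed (robot linear velocity)
    cmd.angular.z = x_avg * x_scale             # deflection rate (robot angular velocity)
    cmd_pub.publish(cmd)
```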
Step 6: the motion control software installed on the vehicle-mounted computer subscribes to the robot motion instruction topic, calculates the rotating speeds of the two wheels through the two-wheel differential model, and follows the target in real time. This specifically comprises the following three substeps:
step 6.1: the vehicle-mounted computer receives the robot motion instruction (cmd_vel) issued by the operation terminal through the motion control software;
step 6.2: calculating the rotating speeds of two wheels of the robot according to the differential motion model of the two wheels of the robot;
step 6.3: the robot motion control software module of the vehicle-mounted computer transmits the rotating speeds of the two wheels to the driver of the robot through the CAN communication protocol, and the driver of the robot drives the motors to execute the target-following motion.
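On the vehicle-mounted computer side (step 6), a minimal sketch using python-can is shown below; the CAN arbitration ID 0x201 and the payload layout are purely illustrative assumptions, since the actual frame format depends on the motor driver.

```python
# Minimal sketch of step 6: subscribe to /cmd_vel, apply the two-wheel differential
# model, and forward wheel speeds to the motor driver over CAN (SocketCAN "can0").
import struct
import rospy
import can
from geometry_msgs.msg import Twist

AXLE_L = 0.25   # l in the formulas vr = v + w*l, vl = v - w*l (assumed value, metres)

bus = can.interface.Bus(channel="can0", bustype="socketcan")

def cmd_vel_callback(msg):
    v, w = msg.linear.x, msg.angular.z
    v_r, v_l = v + w * AXLE_L, v - w * AXLE_L            # step 6.2: differential model
    frame = can.Message(arbitration_id=0x201,            # assumed driver CAN ID
                        data=struct.pack("<ff", v_l, v_r),
                        is_extended_id=False)
    bus.send(frame)                                      # step 6.3: hand off to the driver

rospy.init_node("motion_control_node")
rospy.Subscriber("/cmd_vel", Twist, cmd_vel_callback)
rospy.spin()
```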
As shown in fig. 4, the embodiment of the present disclosure adopts a software process, which includes the steps of:
step 1: the depth camera node collects a depth image and an RGB image;
step 2: the depth camera node issues a depth image and an RGB image;
step 3: the control node subscribes to the depth image and the RGB image;
step 4: the control node displays the RGB image;
step 5: the operator selects a target;
step 6: tracking the target by the KCF algorithm;
step 7: calculating the position of the target by the centroid algorithm;
step 8: the control node issues the motion instruction;
step 9: the vehicle-mounted computer node subscribes to the motion instruction;
step 10: calculating the differential speeds of the two wheels;
step 11: the motion instruction is sent to the motor driver.
as shown in fig. 5, the two-wheel differential motion model provided by the embodiment of the disclosure converts linear velocity and angular velocity in the ROS motion command topic cmd _ vel into two-wheel rotation speed. The global coordinate system { XOY }, the local coordinate system { X ' O ' Y ' }, local coordinate Y ' and the axis of two wheels coincide, point to the left wheel from the right wheel, X ' is to the right front of the robot. The center of mass C of the robot is assumed to be located at the middle point of the axes of the two driving wheels and is coincident with the origin O' of the coordinate system. The local coordinate system corresponds to the global coordinate system, and has a rotation angle θ, which is the direction angle of the robot.
The orthogonal rotation matrix mapping the local coordinate system to the global coordinate system is:

$$R(\theta)=\begin{bmatrix}\cos\theta & -\sin\theta & 0\\ \sin\theta & \cos\theta & 0\\ 0 & 0 & 1\end{bmatrix}$$
The length of the axle between the two driving wheels is l, the radii of the left and right driving wheels are both r, and the pose vector of the robot is P = (x, y, θ)^T. The forward kinematic equations of the two-wheel differential-drive mobile robot, obtained from rigid-body mechanics, are:

$$\dot{x}=v\cos\theta,\qquad \dot{y}=v\sin\theta,\qquad \dot{\theta}=w$$

where v is the linear velocity at the robot centroid, w is the steering angular velocity of the robot, and v_r and v_l are the linear speeds of the right and left driving wheels of the robot, respectively.
The linear velocity of the central point of the robot is as follows:
Figure GDA0002832154830000104
The angular velocity is:

$$w=\frac{v_r-v_l}{2l}$$
the conversion formula is:
vr=v+wl
vl=v-wl
When driving the motors, the wheel linear velocities vr and vl need to be converted into motor rotational speed values. Let the rotational speed of a motor be n, in revolutions per minute (rpm). Since

vr = nr*2πr/60
vl = nl*2πr/60

it follows that:

nr = 60*vr/(2πr)
nl = 60*vl/(2πr)

These are the rotating speed values of the left and right wheels under differential motion, i.e., the rotating speed values given to the motor driver by the motion command.
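Putting the kinematic relations above together, a minimal sketch of the velocity-to-rpm conversion is shown below; the axle parameter l and wheel radius r are assumed example values.

```python
# Minimal sketch: convert cmd_vel linear velocity v (m/s) and angular velocity w (rad/s)
# into left/right wheel rotational speeds in rpm, following the formulas above.
import math

AXLE_L = 0.25    # l in vr = v + w*l, vl = v - w*l (assumed value, metres)
WHEEL_R = 0.10   # driving wheel radius r (assumed value, metres)

def cmd_vel_to_wheel_rpm(v, w):
    v_r = v + w * AXLE_L
    v_l = v - w * AXLE_L
    n_r = 60.0 * v_r / (2.0 * math.pi * WHEEL_R)   # rpm of the right wheel
    n_l = 60.0 * v_l / (2.0 * math.pi * WHEEL_R)   # rpm of the left wheel
    return n_l, n_r

# Example: 0.5 m/s forward with a 0.2 rad/s left turn.
print(cmd_vel_to_wheel_rpm(0.5, 0.2))
```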
As shown in fig. 6, the step of calculating the centroid of the depth image in the target rectangular frame as the target position reference information according to the embodiment of the present disclosure includes:
step 1: calculating the horizontal and vertical radians corresponding to each pixel;
assuming that the horizontal viewing angle is α, the vertical viewing angle is β, and the image width and height are image_width and image_height (the horizontal and vertical extents of the image shown in fig. 6), each pixel corresponds to the following horizontal and vertical radians:
x_radians_per_pixel = α*π/180/image_width
y_radians_per_pixel = β*π/180/image_height
where π is the mathematical constant pi.
Step 2: the horizontal (X) and vertical (Y) ratios of each pixel are calculated.
The horizontal and vertical ratios of an image pixel P are the sine values:
sin_pixel_x[px]=sin((px-image_width/2)*x_radians_per_pixel)
sin_pixel_y[py]=sin((py-image_height/2)*y_radians_per_pixel)
where px and py are respectively the x and y pixel coordinates of pixel P with point A as the origin.
step 3: respectively calculating the depth proportion values x_val and y_val of each pixel in the target rectangular area;
assuming that a depth image pixel P has depth value depth, then:
x_val=sin_pixel_x[px]*depth
y_val=sin_pixel_y[py]*depth
step 4: summing the depth proportion values of all pixels in the x direction and the y direction in the target rectangular region, giving x_sum and y_sum;
x_sum+=x_val
y_sum+=y_val
step 5: calculating the average depth proportion values x_avg and y_avg of all pixels in the x direction and the y direction in the target rectangular area; these average values are the centroid;
x_avg=x_sum/n
y_avg=y_sum/n
wherein n is the number of all pixels in the target rectangular region.
Step 6: calculating the robot's moving speeds linear_speed and rotation_speed according to the relative position of the target area, where the relative position uses the minimum depth value z of the target rectangular area and the depth proportion average value x_avg:
linear_speed = (min(z) - goal_z) * z_scale
rotation_speed=x_avg*x_scale
where min(z) is the minimum depth value in the target rectangular area, goal_z is the set safe distance between the robot and the target, and z_scale and x_scale are speed scale factors.
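A minimal sketch of the centroid calculation above follows, assuming the depth image is a NumPy array in metres and (x0, y0, w, h) is the KCF rectangle; alpha and beta are the camera's horizontal and vertical view angles in degrees, and goal_z, z_scale and x_scale are as defined above.

```python
# Minimal sketch of the depth-centroid and speed computation (steps 1-6 above).
import numpy as np

def target_position_and_speed(depth, bbox, alpha, beta, goal_z, z_scale, x_scale):
    img_h, img_w = depth.shape
    x_rad_per_px = alpha * np.pi / 180.0 / img_w          # step 1: radians per pixel
    y_rad_per_px = beta * np.pi / 180.0 / img_h

    x0, y0, w, h = bbox
    roi = depth[y0:y0 + h, x0:x0 + w]                     # target rectangular region

    sin_x = np.sin((np.arange(x0, x0 + w) - img_w / 2.0) * x_rad_per_px)   # step 2
    sin_y = np.sin((np.arange(y0, y0 + h) - img_h / 2.0) * y_rad_per_px)

    x_val = sin_x[np.newaxis, :] * roi                    # step 3: per-pixel values
    y_val = sin_y[:, np.newaxis] * roi
    x_avg = x_val.sum() / roi.size                        # steps 4-5: centroid averages
    y_avg = y_val.sum() / roi.size

    linear_speed = (np.nanmin(roi) - goal_z) * z_scale    # step 6: relative-position speeds
    rotation_speed = x_avg * x_scale
    return x_avg, y_avg, linear_speed, rotation_speed
```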
Finally, it should be noted that, as shown in fig. 7, the embodiment of the present disclosure uses a network communication architecture based on an ROS system, and the ROS nodes include a depth camera node, a manipulation end node, and a robot mobile platform node. The main ROS topics subscribed and published include a depth image topic (/image_depth), an RGB image topic (/image_color), a camera information topic (/camera_info), and a robot movement instruction topic (/cmd_vel).
Finally, the above embodiments are only intended to illustrate the technical solutions of the present disclosure and not to limit them; although the present disclosure has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made to the technical solutions of the present disclosure without departing from the spirit and scope of these technical solutions, and all such modifications should be covered by the claims of the present disclosure.
Although the present disclosure has been described with reference to specific embodiments, it should be understood that the scope of the present disclosure is not limited thereto, and those skilled in the art will appreciate that various modifications and changes can be made without departing from the spirit and scope of the present disclosure.

Claims (8)

1. A detection robot following method based on depth visual information is characterized in that: the method comprises the following steps:
step 1: acquiring depth image information, RGB image information and synchronous information of the depth image information and the RGB image information;
step 2: displaying the acquired RGB image, selecting a following target and marking a following target rectangular area by taking the RGB image as a reference;
step 3: after the following target is selected, carrying out real-time target position tracking through a KCF tracking algorithm;
the step 3 is specifically divided into the following steps:
step 3.1: taking the target rectangular area obtained in step 2 as a template, and initializing the KCF algorithm;
step 3.2: calculating, through the KCF algorithm, a tracking rectangular area in the RGB image frames after the target rectangular area is selected;
step 3.3: displaying the KCF tracking rectangular area in real time;
step 4: tracking the KCF tracking rectangular area in the RGB image in real time according to the KCF algorithm, and calculating the depth position information of the following target in the depth image so as to determine the distance and direction of the detection robot relative to the following target;
in the step 4, the depth position information of the following target is calculated in the depth image, and the method specifically comprises the following substeps:
step 4.1: tracking the KCF tracking rectangular area in the RGB image in real time according to the KCF algorithm, and taking the pixel coordinates of this rectangular area as the equivalent rectangular pixel coordinates of the following target;
step 4.2: calculating the depth centroid of the rectangular region in the depth image according to the target equivalent rectangular pixel coordinates, namely the average value of the depth information of the rectangular region, and taking the depth centroid as the target depth information;
the method comprises the following steps of calculating the depth centroid of a rectangular region in a depth image according to the coordinates of target equivalent rectangular pixels:
step 4.2.1: calculating the corresponding horizontal and vertical radians of each pixel;
step 4.2.2: calculating the depth proportion of each pixel in the horizontal direction, namely the X direction, and in the vertical direction, namely the Y direction; the depth proportion is the sine value of the radian in the horizontal direction and the radian in the vertical direction respectively;
step 4.2.3: respectively calculating depth proportion values of each pixel in the horizontal direction and the vertical direction in the target rectangular region;
step 4.2.4: respectively calculating the depth proportion value summation of all pixels in the horizontal direction and the vertical direction in the target rectangular region;
step 4.2.5: calculating the average value of the depth proportion of all pixels in the horizontal direction and the vertical direction in the target rectangular region to obtain the depth centroid of the rectangular region;
step 4.3: calculating the distance and direction of the robot relative to the following target according to the target depth position information and the relative position of the center position of the target rectangular region in the RGB image;
step 5: after the relative distance and direction of the following target are calculated in step 4, issuing a motion control instruction for driving the detection robot to move, and carrying out target following by the detection robot according to the motion control instruction.
2. The detection robot following method based on the depth vision information as claimed in claim 1, wherein: the specific implementation of acquiring the depth image information and the RGB image information in step 1 includes the following substeps:
step 1.1: starting the ROS node of the depth camera for image collection and publishing;
step 1.2: subscribing to and acquiring the depth image information and RGB image information packets published by the ROS node;
step 1.3: carrying out synchronous processing on the depth image information and the RGB image information;
step 1.4: displaying the RGB image information.
3. The detection robot following method based on the depth vision information as claimed in claim 1, wherein: the selection of the following target implementation in step 2 comprises the following sub-steps:
step 2.1: determining a target to be followed in the obtained RGB image;
step 2.2: selecting a rectangular area according to the position of the following target in the RGB image;
step 2.3: drawing a rectangular frame containing the following target as an explicit mark of the target, wherein following the target is equivalent to following the image area within the rectangular frame.
4. The detection robot following method based on the depth vision information as claimed in claim 1, wherein: the step 5 is specifically divided into the following substeps:
step 5.1: calculating the advancing speed and the deflection angle of the robot according to the preset safety distance between the detection robot and the following target, the center position of the visual field and the distance and the direction of the current following target relative to the detection robot;
step 5.2: issuing, by the ROS system, a robot movement instruction including a robot forward speed and a yaw angle.
5. The detection robot following method based on the depth vision information as claimed in claim 1, wherein: in step 5, the detection robot receives the issued robot motion command, including the advancing speed and the deflection angle of the robot; the rotating speeds of the two wheels of the robot are calculated through the two-wheel differential motion model according to the linear velocity and angular velocity of the robot and the robot's mechanical structure model; the robot sends the two-wheel rotational speeds to the driver over the CAN communication protocol, and the driver drives the motors to carry out the target-following motion.
6. A detection robot following system based on depth visual information is characterized in that: the robot comprises a robot moving platform and a remote operation end; the robot mobile platform is provided with an image acquisition module and a motion control module; the remote operation end is provided with an image processing module;
the image acquisition module is used for acquiring target depth image information and RGB image information and sending the target depth image information and the RGB image information to the image processing module of the remote operation terminal; the image processing module receives the sent depth image and RGB image information to display images and determine a tracking target; according to the target depth image data, calculating the distance between the robot and a target point, and sending a motion control signal to the mobile platform; the method for calculating the distance between the robot and the target point according to the target depth image data specifically comprises the following substeps:
step 4.1: tracking the KCF tracking rectangular area in the RGB image in real time according to the KCF algorithm, and taking the pixel coordinates of this rectangular area as the equivalent rectangular pixel coordinates of the following target;
step 4.2: calculating the depth centroid of the rectangular region in the depth image according to the target equivalent rectangular pixel coordinates, namely the average value of the depth information of the rectangular region, and taking the depth centroid as the target depth information;
the method comprises the following steps of calculating the depth centroid of a rectangular region in a depth image according to the coordinates of target equivalent rectangular pixels:
step 4.2.1: calculating the corresponding horizontal and vertical radians of each pixel;
step 4.2.2: calculating the depth proportion of each pixel in the horizontal direction, namely the X direction, and in the vertical direction, namely the Y direction; the depth proportion is the sine value of the radian in the horizontal direction and the radian in the vertical direction respectively;
step 4.2.3: respectively calculating depth proportion values of each pixel in the horizontal direction and the vertical direction in the target rectangular region;
step 4.2.4: respectively calculating the depth proportion value summation of all pixels in the horizontal direction and the vertical direction in the target rectangular region;
step 4.2.5: calculating the average value of the depth proportion of all pixels in the horizontal direction and the vertical direction in the target rectangular region to obtain the depth centroid of the rectangular region;
step 4.3: calculating the distance and direction of the robot relative to the following target according to the target depth position information and the relative position of the center position of the target rectangular region in the RGB image.
7. The detection robot following system based on depth vision information as claimed in claim 6, wherein: the system further comprises an ROS system arranged on the robot moving platform; the ROS system is used for collecting and publishing the depth image and the RGB image; meanwhile, the ROS system is also used for converting the linear velocity and angular velocity of the robot into two-wheel rotating speeds according to the two-wheel differential motion model and sending them to the motion control module for execution; the motion control module is used for transmitting the left and right wheel speeds to a driver of the robot through CAN communication.
8. The detection robot following system based on depth vision information as claimed in claim 6, wherein: the remote operation end is provided with a remote operation module for designating the following target.
CN202010943502.3A 2020-09-09 2020-09-09 A detection robot following method and system based on depth visual information Active CN112223278B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010943502.3A CN112223278B (en) 2020-09-09 2020-09-09 A detection robot following method and system based on depth visual information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010943502.3A CN112223278B (en) 2020-09-09 2020-09-09 A detection robot following method and system based on depth visual information

Publications (2)

Publication Number Publication Date
CN112223278A CN112223278A (en) 2021-01-15
CN112223278B true CN112223278B (en) 2021-12-21

Family

ID=74116217

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010943502.3A Active CN112223278B (en) 2020-09-09 2020-09-09 A detection robot following method and system based on depth visual information

Country Status (1)

Country Link
CN (1) CN112223278B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022151507A1 (en) * 2021-01-18 2022-07-21 深圳市大疆创新科技有限公司 Movable platform and method and apparatus for controlling same, and machine-readable storage medium
CN112894810A (en) * 2021-01-19 2021-06-04 四川阿泰因机器人智能装备有限公司 KCF algorithm-based mobile robot target loss prevention following method
CN113065392A (en) * 2021-02-24 2021-07-02 苏州盈科电子有限公司 Robot tracking method and device
CN114659450B (en) * 2022-03-25 2023-11-14 北京小米机器人技术有限公司 Robot following method, device, robot and storage medium
CN115344051B (en) * 2022-10-17 2023-01-24 广州市保伦电子有限公司 Visual following method and device of intelligent following trolley
CN118963370B (en) * 2024-10-14 2025-02-07 天津开发区中环系统电子工程股份有限公司 Logistics inspection method and system based on machine vision

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101635896B1 (en) * 2014-10-15 2016-07-20 한국과학기술연구원 Device and method for tracking people based depth information
CN109145708B (en) * 2018-06-22 2020-07-24 南京大学 Pedestrian flow statistical method based on RGB and D information fusion
CN109240297A (en) * 2018-09-26 2019-01-18 深算科技(上海)有限公司 A kind of independent navigation robot that view-based access control model follows
CN109352654A (en) * 2018-11-23 2019-02-19 武汉科技大学 A ROS-based intelligent robot following system and method
CN109917818B (en) * 2019-01-31 2021-08-13 天津大学 Collaborative search and containment method based on ground robot
CN110706252B (en) * 2019-09-09 2020-10-23 西安理工大学 Robot Kernel Correlation Filter Tracking Algorithm Guided by Motion Model

Also Published As

Publication number Publication date
CN112223278A (en) 2021-01-15

Similar Documents

Publication Publication Date Title
CN112223278B (en) A detection robot following method and system based on depth visual information
WO2020221311A1 (en) Wearable device-based mobile robot control system and control method
CN114815654B (en) Unmanned vehicle control-oriented digital twin system and construction method thereof
CN109079799B (en) Robot perception control system and control method based on bionics
CN104808675B (en) Body-sensing flight control system and terminal device based on intelligent terminal
US8725273B2 (en) Situational awareness for teleoperation of a remote vehicle
CN109917786A (en) A robot perception system and system operation method for complex environment operations
Li et al. Localization and navigation for indoor mobile robot based on ROS
CN106354161A (en) Robot motion path planning method
CN113190020A (en) Mobile robot queue system and path planning and following method
CN112947407A (en) Multi-agent finite-time formation path tracking control method and system
CN111590567B (en) Space manipulator teleoperation planning method based on Omega handle
CN113858217B (en) Multi-robot interaction three-dimensional visual pose perception method and system
Angelopoulos et al. Drone brush: Mixed reality drone path planning
CN108762253A (en) A kind of man-machine approach to formation control being applied to for people's navigation system
CN113532431A (en) Visual inertia SLAM method for power inspection and operation
Zhu et al. Indoor localization method of mobile educational robot based on visual sensor
Zhang et al. High-precision calibration of camera and imu on manipulator for bio-inspired robotic system
Sidaoui et al. Collaborative human augmented SLAM
CN117523575A (en) Intelligent instrument reading method and system based on inspection robot
CN111913499A (en) Pan-tilt control method based on monocular vision SLAM and depth uncertainty analysis
CN216697069U (en) Mobile Robot Control System Based on ROS2
CN109655041A (en) A kind of underwater multi-angle observation aircraft and its control system
CN115359222A (en) Unmanned interaction control method and system based on augmented reality
CN114935340A (en) Indoor navigation robot, control system and method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20210115

Assignee: SHANDONG GAITE AVIATION TECHNOLOGY Co.,Ltd.

Assignor: INSTITUTE OF AUTOMATION, SHANDONG ACADEMY OF SCIENCES

Contract record no.: X2024980033881

Denomination of invention: A detection robot following method and system based on deep vision information

Granted publication date: 20211221

License type: Common License

Record date: 20241211