
CN111968157B - Visual positioning system and method applied to high-intelligent robot - Google Patents


Info

Publication number
CN111968157B
CN111968157B (application CN202010822549.4A)
Authority
CN
China
Prior art keywords: intelligent robot; unit; information; key frame; visual
Prior art date: 2020-08-13
Legal status: Active
Application number: CN202010822549.4A
Other languages: Chinese (zh)
Other versions: CN111968157A (en)
Inventor: 史超
Current Assignee: Shenzhen Guoxin Taifu Technology Co ltd
Original Assignee: Shenzhen Guoxin Taifu Technology Co ltd
Priority date: 2020-08-13
Filing date: 2020-08-13
Publication date: 2024-05-28
Application filed by Shenzhen Guoxin Taifu Technology Co ltd
Priority to CN202010822549.4A
Publication of CN111968157A
Application granted
Publication of CN111968157B

Classifications

    • G06T7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/277: Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G06T7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V10/757: Matching configurations of points or features
    • G06T2207/10016: Video; image sequence

Landscapes

  • Engineering & Computer Science
  • Computer Vision & Pattern Recognition
  • Theoretical Computer Science
  • Physics & Mathematics
  • General Physics & Mathematics
  • Multimedia
  • Computing Systems
  • Artificial Intelligence
  • Health & Medical Sciences
  • Databases & Information Systems
  • Evolutionary Computation
  • General Health & Medical Sciences
  • Medical Informatics
  • Software Systems
  • Image Analysis
  • Length Measuring Devices By Optical Means

Abstract

The invention discloses a visual positioning system and method applied to a highly intelligent robot. The positioning module mainly comprises an environment sensing device, a detection unit, a visual odometry unit, a judging unit, a filtering unit, an adjusting unit and a stereoscopic vision system. The system collects surrounding environment information, performs corner detection on it, matches the resulting image features, and outputs the position and pose information of the robot. A keyframe judgment is then made over every two consecutive frames: if the frame is a keyframe, the detected feature-point coordinates used for forward intersection serve as observation data, from which the optimal camera parameters and world-point coordinates are obtained; if it is not a keyframe, a new round of calculation begins. With this scheme the robot can continuously localize itself and predict its future trajectory from its own motion history, which effectively improves positioning accuracy and raises the robot's level of intelligence.

Description

Visual positioning system and method applied to high-intelligent robot
Technical Field
The invention relates to the technical field of positioning for highly intelligent robots, and in particular to a visual positioning system and method applied to a highly intelligent robot.
Background
With the progress of modern science and technology, robots are being applied in more and more scenarios, such as resource exploration and development, disaster relief, and medical services. Precise positioning of the robot's own location is often required while it performs a task.
At present, robots mostly adopt a GPS satellite positioning system to determine their own position. Although satellite positioning has reached high accuracy as the technology has developed, in certain special terrain environments GPS positioning fails to deliver effective, accurate results because of factors such as satellite signal loss or shielding. A supplementary positioning measure is therefore needed to keep the robot accurately positioned when satellite positioning fails, and thereby to ensure the operational safety of the robot.
Disclosure of Invention
Aiming at the problems in the prior art, a visual positioning system for a highly intelligent robot is provided; the specific technical scheme is as follows:
the utility model provides a visual positioning system of high intelligent robot which characterized in that, visual positioning system specifically includes:
a plurality of environment sensing devices, used to continuously sense the spatial environment around the highly intelligent robot and output the corresponding environment image information;
a detection unit, connected to each of the environment sensing devices and used to perform corner detection on the environment images and output the corresponding image feature information;
a visual odometry unit, connected to the detection unit and used to analyze and process the image features and output the position and pose information of the highly intelligent robot;
a judging unit, connected to the visual odometry unit and used to perform keyframe judgment on each feature-matched frame, place the highly intelligent robot in a robust pose, and output the pose information and the judgment result;
a filtering unit, connected to the judging unit and the visual odometry unit and used, when the judgment result is not a keyframe, to calculate and analyze the target motion state of the highly intelligent robot and output its estimated position state information;
an adjusting unit, connected to the judging unit and the filtering unit and used, when the judgment result is a keyframe, to apply local bundle adjustment to the feature-point coordinates used for forward intersection, as detected by the environment sensing devices and taken as observation data, and to output camera parameters and world-point coordinates;
and a stereoscopic vision system, connected to the adjusting unit and the visual odometry unit and used to process the camera parameters and world-point coordinates output by the adjusting unit and output new image features.
Preferably, the keyframe judgment compares the two consecutive feature-matched images: if their matched feature points are identical, the judgment result is not a keyframe; if they differ, the judgment result is a keyframe.
Preferably, the environment sensing device comprises two wide-baseline stereo cameras and two narrow-baseline stereo cameras, located respectively on the left and right sides of the front face of the head of the highly intelligent robot.
Preferably, after acquiring the spatial environment information, the environment sensing device generates image information from the left and right camera views of the narrow-baseline stereo camera and from the left and right camera views of the wide-baseline stereo camera, respectively.
Preferably, the corner points detected by the detection unit are located to sub-pixel accuracy.
Preferably, the filtering unit is an extended Kalman filter unit, which predicts the motion pose of the highly intelligent robot and sends the result to the visual odometry unit.
Preferably, the extended Kalman filter unit predicts the pose of the highly intelligent robot in 6 degrees of freedom.
Preferably, a visual positioning method applied to the above visual positioning system comprises the following steps:
step S1: continuously sense the spatial environment around the highly intelligent robot and output the corresponding environment image information;
step S2: perform corner detection on the environment image information to obtain the corresponding image features and output the image feature information;
step S3: analyze the image features, match their feature points, and output the position and pose information of the highly intelligent robot;
step S4: perform keyframe judgment on each feature-matched frame; if it is a keyframe, the highly intelligent robot enters the robust pose and the method proceeds to step S5; if it is not a keyframe, the robot enters the robust pose and the method proceeds to step S7;
step S5: taking the feature-point coordinates used for forward intersection, as detected by the environment sensing device, as observation data, obtain the optimal camera parameters and world-point coordinates by local bundle adjustment;
step S6: process the optimal camera parameters and world-point coordinates obtained by local bundle adjustment, output new image features, and send the new features back to the visual odometry unit for a new round of calculation;
step S7: after the highly intelligent robot enters the robust pose and transmits the pose information to the filtering unit, return to step S3.
Preferably, in step S3 the visual odometry unit matches the image feature points of the left and right camera views against each other.
Preferably, when the keyframe judgment of step S4 finds no keyframe, the highly intelligent robot enters the robust pose and transmits the pose information to the extended Kalman filter unit.
The technical scheme has the following advantages and beneficial effects:
The environment sensing device collects surrounding environment information, on which the detection unit performs corner detection; the visual odometry unit matches the resulting image features and outputs the position and pose information of the highly intelligent robot; after the judging unit performs keyframe judgment on two consecutive frames, the robot enters the robust pose. If the result is a keyframe, the feature-point coordinates used for forward intersection, as detected by the environment sensing device, serve as observation data from which the adjusting unit obtains the optimal camera parameters and world coordinates; if the result is not a keyframe, the visual odometry unit starts a new round of calculation. The robot can thus continuously localize itself, and the extended Kalman filter unit predicts the future motion trajectory from the robot's motion history, which effectively improves operational safety and raises the robot's level of intelligence.
Drawings
Fig. 1 is a schematic structural diagram of the visual positioning system applied to a highly intelligent robot.
Fig. 2 is a schematic flow chart of the visual positioning method applied to a highly intelligent robot.
Detailed Description
The following clearly and completely describes the embodiments of the present invention with reference to the accompanying drawings. The described embodiments are plainly only some, not all, of the embodiments of the invention. All other embodiments obtained by those skilled in the art based on these embodiments without inventive effort fall within the scope of the invention.
It should be noted that, without conflict, the embodiments of the present invention and features of the embodiments may be combined with each other.
The invention is further described below with reference to the drawings and specific examples, which are not intended to be limiting.
Aiming at the problems in the prior art, a visual positioning system for a highly intelligent robot is provided; the specific technical scheme is as follows.
A visual positioning system for a highly intelligent robot, as shown in Fig. 1, comprises:
a plurality of environment sensing devices 1, used to continuously sense the spatial environment around the highly intelligent robot and output the corresponding environment image information;
a detection unit 2, connected to each of the environment sensing devices and used to perform corner detection on the environment images and output the corresponding image feature information;
a visual odometry unit 3, connected to the detection unit 2 and used to analyze and process the image features and output the position and pose information of the highly intelligent robot;
a judging unit 4, connected to the visual odometry unit 3 and used to perform keyframe judgment on each feature-matched frame, place the highly intelligent robot in a robust pose, and output the pose information and the judgment result;
a filtering unit 5, connected to the judging unit 4 and the visual odometry unit 3 and used, when the judgment result is not a keyframe, to calculate and analyze the target motion state of the highly intelligent robot and output its estimated position state information;
an adjusting unit 6, connected to the judging unit 4 and the filtering unit 5 and used, when the judgment result is a keyframe, to apply local bundle adjustment to the feature-point coordinates used for forward intersection, as detected by the environment sensing devices and taken as observation data, and to output camera parameters and world-point coordinates;
and a stereoscopic vision system 7, connected to the adjusting unit 6 and the visual odometry unit 3 and used to process the camera parameters and world-point coordinates output by the adjusting unit and output new image features.
In a preferred embodiment, the keyframe judgment compares the two consecutive feature-matched images: if their matched feature points are identical, the judgment result is not a keyframe; if they differ, the judgment result is a keyframe.
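As a minimal sketch of this keyframe test (the fractional threshold, the use of track ids, and the function name are illustrative assumptions, not taken from the patent), the comparison can be expressed over the sets of feature tracks matched in two consecutive frames:

```python
def is_keyframe(prev_ids, curr_ids, overlap_thresh=1.0):
    """Keyframe test over two consecutive feature-matched frames.

    prev_ids / curr_ids: sets of track ids matched in the previous
    and current frame. Identical sets mean no new information
    (not a keyframe); any difference declares a keyframe.
    """
    shared = len(prev_ids & curr_ids)
    total = max(len(prev_ids | curr_ids), 1)
    return shared / total < overlap_thresh

# A frame with the same matched features is not a keyframe;
# a frame that gains or loses a track is.
print(is_keyframe({1, 2, 3}, {1, 2, 3}))  # False
print(is_keyframe({1, 2, 3}, {1, 2, 4}))  # True
```

In practice a threshold slightly below 1.0 would tolerate matching noise; the strict form above mirrors the same/different rule stated in the text.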
As a preferred embodiment, the environment sensing device 1 includes two wide-baseline stereo cameras and two narrow-baseline stereo cameras, located respectively on the left and right sides of the front face of the head of the highly intelligent robot.
In a preferred embodiment of the present invention, the wide-baseline stereo cameras mainly acquire the world environment information around the highly intelligent robot, while the narrow-baseline stereo cameras mainly acquire image information of the robot's manipulator workspace; the two kinds of stereo camera complement each other and can serve the robot's locomotion state and its working state at the same time, further improving the accuracy of visual positioning. It should be noted that the environment sensing device 1 is not limited to wide-baseline and narrow-baseline stereo cameras; in practical applications it also includes a dual-fisheye camera and a laser radar for complete world-environment modeling and environment sensing.
As a preferred embodiment, after collecting the spatial environment information, the environment sensing device generates image information from the left and right camera views of the narrow-baseline stereo camera and from the left and right camera views of the wide-baseline stereo camera, respectively.
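A minimal sketch of how one such left/right view pair yields range information, using OpenCV's semi-global matcher (the focal length, baselines, and file names are placeholder assumptions for illustration): depth follows from disparity as depth = f * B / d, which is why the wide-baseline pair suits the distant world environment and the narrow-baseline pair the nearby manipulator workspace.

```python
import cv2
import numpy as np

FOCAL_PX = 700.0      # assumed focal length in pixels
BASELINE_M = 0.12     # assumed wide baseline; a narrow pair might use ~0.03 m

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching over the two views of one stereo pair.
sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
disp = sgbm.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> px

# depth = f * B / d: at a given range, a wider baseline B produces a
# larger disparity d and hence a finer depth resolution.
valid = disp > 0
depth = np.zeros_like(disp)
depth[valid] = FOCAL_PX * BASELINE_M / disp[valid]
```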
As a preferred embodiment, the corner points detected by the detection unit are located to sub-pixel accuracy.
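A minimal sketch of sub-pixel corner detection with OpenCV (the detector parameters, window size, and file name are illustrative assumptions; the patent does not fix a particular detector):

```python
import cv2
import numpy as np

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

# Integer-pixel corner candidates (Shi-Tomasi "good features to track").
corners = cv2.goodFeaturesToTrack(img, maxCorners=500,
                                  qualityLevel=0.01, minDistance=7)

# Iterative refinement of each corner to sub-pixel accuracy
# inside an 11x11 search window.
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01)
refined = cv2.cornerSubPix(img, np.float32(corners), (5, 5), (-1, -1),
                           criteria)
```

The sub-pixel coordinates in `refined` are what the later stages would use as observation data, since reprojection errors well below one pixel matter for bundle adjustment.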
As a preferred embodiment, the filtering unit is an extended Kalman filter, which predicts the motion pose of the highly intelligent robot and sends the result to the visual odometry unit.
As a preferred embodiment, the extended Kalman filter unit predicts the pose of the highly intelligent robot in 6 degrees of freedom.
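A minimal sketch of such a 6-degree-of-freedom pose filter (the constant-velocity motion model, state layout, and noise levels are assumptions for illustration; with a linear motion model the predict/update pair below is the standard Kalman form to which the EKF reduces):

```python
import numpy as np

# 12-dimensional state: 6-DoF pose (x, y, z, roll, pitch, yaw)
# stacked with its rates, under a constant-velocity model.
N = 12

def predict(x, P, dt, q=1e-3):
    """Propagate state and covariance one step forward in time."""
    F = np.eye(N)
    F[:6, 6:] = dt * np.eye(6)          # pose += rate * dt
    return F @ x, F @ P @ F.T + q * np.eye(N)

def update(x, P, z, r=1e-2):
    """Correct with a direct 6-DoF pose measurement from the
    visual odometry unit (assumed measurement model)."""
    H = np.hstack([np.eye(6), np.zeros((6, 6))])
    R = r * np.eye(6)
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    return x + K @ y, (np.eye(N) - K @ H) @ P

# usage: predict the next pose from the motion history, then fuse
# a new pose estimate when one arrives
x, P = np.zeros(N), np.eye(N)
x, P = predict(x, P, dt=0.05)
x, P = update(x, P, z=np.array([0.1, 0, 0, 0, 0, 0.02]))
```

Predicting between keyframes from the motion history is what lets the robot keep a pose estimate when no new keyframe, and hence no bundle adjustment, is available.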
As a preferred embodiment, the visual positioning method of the highly intelligent robot comprises the following steps:
step S1: continuously sense the spatial environment around the highly intelligent robot and output the corresponding environment image information;
step S2: perform corner detection on the environment image information to obtain the corresponding image features and output the image feature information;
step S3: analyze the image features, match their feature points, and output the position and pose information of the highly intelligent robot;
step S4: perform keyframe judgment on each feature-matched frame; if it is a keyframe, the highly intelligent robot enters the robust pose and the method proceeds to step S5; if it is not a keyframe, the robot enters the robust pose and the method proceeds to step S7;
step S5: taking the feature-point coordinates used for forward intersection, as detected by the environment sensing device, as observation data, obtain the camera parameters and world-point coordinates by local bundle adjustment (a sketch of this step follows the list);
step S6: process the optimal camera parameters and world-point coordinates obtained by local bundle adjustment, output new image features, and send the new features back to the visual odometry unit for a new round of calculation;
step S7: after the highly intelligent robot enters the robust pose and transmits the pose information to the filtering unit, return to step S3.
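A minimal sketch of the local bundle adjustment of step S5 (the intrinsics, data layout, and use of scipy's least-squares solver are assumptions for illustration; a production system would exploit the problem's sparsity):

```python
import cv2
import numpy as np
from scipy.optimize import least_squares

F_PX, CX, CY = 700.0, 320.0, 240.0   # assumed pinhole intrinsics
K = np.array([[F_PX, 0, CX], [0, F_PX, CY], [0, 0, 1]])

def residuals(params, n_cams, n_pts, cam_idx, pt_idx, obs_uv):
    """Reprojection error of every observation in the local window.

    params stacks one (rvec | tvec) 6-vector per keyframe camera,
    followed by the 3-D world points from forward intersection.
    """
    cams = params[:n_cams * 6].reshape(n_cams, 6)
    pts = params[n_cams * 6:].reshape(n_pts, 3)
    err = []
    for k in range(len(obs_uv)):
        c, p = cams[cam_idx[k]], pts[pt_idx[k]]
        uv, _ = cv2.projectPoints(p.reshape(1, 3), c[:3], c[3:], K, None)
        err.append(uv.ravel() - obs_uv[k])
    return np.concatenate(err)

# obs_uv holds the detected sub-pixel feature coordinates (the
# observation data); initial poses come from visual odometry and
# initial points from forward intersection, stacked into params0:
# sol = least_squares(residuals, params0, method="trf",
#                     args=(n_cams, n_pts, cam_idx, pt_idx, obs_uv))
# The optimal camera parameters and world points are read from sol.x.
```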
As a preferred embodiment, in step S3 the visual odometry unit matches the image feature points of the two different left and right camera views against each other.
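A minimal sketch of this left/right matching with ORB descriptors and cross-checked brute-force matching (the descriptor choice and file names are assumptions; the patent itself only requires that features from the two views be matched):

```python
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and binary descriptors in each camera view.
orb = cv2.ORB_create(nfeatures=1000)
kp_l, des_l = orb.detectAndCompute(left, None)
kp_r, des_r = orb.detectAndCompute(right, None)

# Brute-force Hamming matching with cross-check: a pair is kept only
# if each descriptor is the other's nearest neighbour.
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(bf.match(des_l, des_r), key=lambda m: m.distance)

# Matched pixel pairs feed the pose estimation of the odometry unit.
pairs = [(kp_l[m.queryIdx].pt, kp_r[m.trainIdx].pt) for m in matches]
```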
In a preferred embodiment, when the keyframe judgment of step S4 finds no keyframe, the highly intelligent robot enters the robust pose and transmits the pose information to the extended Kalman filter unit.
A specific embodiment is provided to further explain and illustrate the technical solution:
In this specific embodiment of the invention, the environment sensing device 1 continuously senses the surrounding environment and outputs the corresponding environment image information; the detection unit 2 performs corner detection on that information to obtain the corresponding image features; the visual odometry unit 3 analyzes and matches the feature points and outputs the position and pose information of the robot; and the judging unit 4 performs keyframe judgment on each feature-matched frame. If the result is a keyframe, the highly intelligent robot enters the robust pose, the feature-point coordinates used for forward intersection, as detected by the environment sensing device 1, serve as observation data from which local bundle adjustment yields the camera parameters and world-point coordinates, the optimal camera parameters and world-point coordinates so obtained are processed into new image features, and the new features are sent to the visual odometry unit 3 for a new round of calculation. If the result is not a keyframe, the highly intelligent robot enters the robust pose, and the visual odometry unit 3 once again analyzes and matches the image features and outputs the position and pose information of the robot.
In summary, with this technical scheme the robot can continuously localize itself, and the extended Kalman filter unit predicts the future motion trajectory from the robot's motion history, which effectively improves the safety of the robot's operation and raises its level of intelligence.
The foregoing description covers only preferred embodiments of the present invention and does not limit its scope; those skilled in the art will appreciate that equivalent substitutions and obvious variations made using the description and drawings are intended to fall within the scope of the invention.

Claims (10)

1. A visual positioning system applied to a highly intelligent robot, characterized in that the visual positioning system specifically includes:
a plurality of environment sensing devices, used to continuously sense the spatial environment around the highly intelligent robot and output the corresponding environment image information;
a detection unit, connected to each of the environment sensing devices and used to perform corner detection on the environment images and output the corresponding image feature information;
a visual odometry unit, connected to the detection unit and used to analyze and process the image feature information and output the position and pose information of the highly intelligent robot;
a judging unit, connected to the visual odometry unit and used to perform keyframe judgment on each feature-matched frame, place the highly intelligent robot in a robust pose, and output the pose information and the judgment result;
a filtering unit, connected to the judging unit and the visual odometry unit and used, when the judgment result is not a keyframe, to calculate and analyze the target motion state of the highly intelligent robot and output its estimated position state information;
an adjusting unit, connected to the judging unit and the filtering unit and used, when the judgment result is a keyframe, to apply local bundle adjustment to the feature-point coordinates used for forward intersection, as detected by the environment sensing devices and taken as observation data, and to output camera parameters and world-point coordinates;
and a stereoscopic vision system, connected to the adjusting unit and the visual odometry unit and used to process the camera parameters and world-point coordinates output by the adjusting unit and output new image features.
2. The visual positioning system applied to a highly intelligent robot according to claim 1, characterized in that the keyframe judgment compares the two consecutive feature-matched images: if their matched feature points are identical, the judgment result is not a keyframe; if they differ, the judgment result is a keyframe.
3. The visual positioning system applied to a highly intelligent robot according to claim 1, characterized in that the environment sensing device comprises two wide-baseline stereo cameras and two narrow-baseline stereo cameras, located respectively on the left and right sides of the front face of the head of the highly intelligent robot.
4. The visual positioning system applied to a highly intelligent robot according to claim 3, characterized in that after acquiring the spatial environment information, the environment sensing device generates image information from the left and right camera views of the narrow-baseline stereo camera and from the left and right camera views of the wide-baseline stereo camera, respectively.
5. The visual positioning system applied to a highly intelligent robot according to claim 1, characterized in that the corner points detected by the detection unit are located to sub-pixel accuracy.
6. The visual positioning system applied to a highly intelligent robot according to claim 1, characterized in that the filtering unit is an extended Kalman filter, which predicts the motion pose of the highly intelligent robot and sends the result to the visual odometry unit.
7. The visual positioning system applied to a highly intelligent robot according to claim 1, characterized in that the filtering unit predicts the pose of the highly intelligent robot in 6 degrees of freedom.
8. A visual positioning method of a highly intelligent robot, applied to the visual positioning system according to any one of claims 1 to 7, comprising the following steps:
step S1: continuously sense the spatial environment around the highly intelligent robot and output the corresponding environment image information;
step S2: perform corner detection on the environment image information to obtain the corresponding image features and output the image feature information;
step S3: analyze the image features, match their feature points, and output the position and pose information of the highly intelligent robot;
step S4: perform keyframe judgment on each feature-matched frame; if it is a keyframe, the highly intelligent robot enters the robust pose and the method proceeds to step S5; if it is not a keyframe, the robot enters the robust pose and the method proceeds to step S7;
step S5: taking the feature-point coordinates used for forward intersection, as detected by the environment sensing device, as observation data, obtain the camera parameters and world-point coordinates by local bundle adjustment;
step S6: process the optimal camera parameters and world-point coordinates obtained by local bundle adjustment, output new image features, and send the new image features back to the visual odometry unit for a new round of calculation;
step S7: after the highly intelligent robot enters the robust pose and transmits the pose information to the filtering unit, return to step S3.
9. The visual positioning method of a highly intelligent robot according to claim 8, characterized in that in step S3 the visual odometry unit matches the image feature points of the two different left and right camera views against each other.
10. The visual positioning method of a highly intelligent robot according to claim 8, characterized in that when the keyframe judgment result of step S4 is not a keyframe, the highly intelligent robot enters the robust pose and transmits the pose information to the extended Kalman filter unit.
CN202010822549.4A 2020-08-13 2020-08-13 Visual positioning system and method applied to high-intelligent robot Active CN111968157B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010822549.4A CN111968157B (en) 2020-08-13 2020-08-13 Visual positioning system and method applied to high-intelligent robot

Publications (2)

Publication Number Publication Date
CN111968157A CN111968157A (en) 2020-11-20
CN111968157B true CN111968157B (en) 2024-05-28

Family

Family ID: 73389027

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010822549.4A Active CN111968157B (en) 2020-08-13 2020-08-13 Visual positioning system and method applied to high-intelligent robot

Country Status (1)

Country Link
CN (1) CN111968157B (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020155615A1 (en) * 2019-01-28 2020-08-06 速感科技(北京)有限公司 Vslam method, controller, and mobile device
CN111210463A (en) * 2020-01-15 2020-05-29 上海交通大学 Virtual wide-view visual odometer method and system based on feature point auxiliary matching

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
基于Realsense传感器的机器人视觉里程计研究 (Research on robot visual odometry based on the RealSense sensor); 廖萱 (Liao Xuan), 陈锐志 (Chen Ruizhi), 李明 (Li Ming); 地理空间信息 (Geospatial Information), no. 2; full text *

Also Published As

Publication number Publication date
CN111968157A (en) 2020-11-20


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant