
CN104835173A - Positioning method based on machine vision - Google Patents

Positioning method based on machine vision

Info

Publication number
CN104835173A
CN104835173A (application CN201510263245.8A)
Authority
CN
China
Prior art keywords
positioning label
image
label
positioning
machine vision
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510263245.8A
Other languages
Chinese (zh)
Other versions
CN104835173B (en)
Inventor
李新德
徐叶帆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University
Priority to CN201510263245.8A
Publication of CN104835173A
Application granted
Publication of CN104835173B
Active legal status
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/48 Extraction of image or video features by mapping characteristic values of the pattern into a parameter space, e.g. Hough transformation
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20048 Transform domain processing
    • G06T2207/20061 Hough transform

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an AGV positioning method based on machine vision. The method comprises the following steps: first, the environment is digitized by designing positioning labels; an image containing a label is then acquired by a vehicle-mounted vision system; the image is processed algorithmically to recognize the position, content, and deflection angle of the label; finally, using the camera's calibrated distance relation table and the relevant coordinate transformations, these quantities are converted into the position and attitude of the vehicle in the environment, thereby positioning the vehicle. Unlike other machine-vision-based AGV positioning methods, the recognition process involves little computation, and the positioning accuracy and speed meet the requirements of AGV navigation.

Description

Positioning method based on machine vision
Technical field
The present invention relates to the field of machine vision, and in particular to a positioning method.
Background technology
Machine vision uses vision sensors to acquire images and an image processing system to perform various measurements and judgments. It is an important branch of computer science that combines optics, mechanics, electronics, and computer hardware and software, and touches on computer science, image processing, pattern recognition, artificial intelligence, signal processing, and optomechatronic integration. Vision navigation is a technique that processes the images acquired by a vision sensor to obtain the pose parameters of a carrier. Vision navigation for mobile robots is currently applied mainly in four areas: mobile robot racing competitions, industrial AGVs, autonomous navigation of intelligent vehicles, and defense science and technology research.
The transport of warehouse goods currently relies mainly on manpower, which is inefficient and wastes human resources; industrial automated guided vehicles (AGVs) are therefore urgently needed to take over this work and improve productivity and resource utilization. The rapid development of machine vision offers new ways to solve the automatic navigation problem for industrial AGVs. A machine vision navigation system for an industrial AGV can usually be divided into an image acquisition part, an image processing part, and a motion control part.
The main workflow of a complete machine vision navigation system is as follows:
1. The camera acquires images in real time on command, automatically adjusting exposure parameters as needed;
2. The collected data are converted into an image format and stored in the processor or computer memory;
3. The processor analyzes and recognizes the images to obtain the carrier pose information and the associated logic control values;
4. The recognition result controls the carrier to move, stop, correct motion errors, and so on.
Summary of the invention
Purpose of the invention: to overcome the deficiencies of the prior art, the invention provides a positioning method based on machine vision. Applied to AGVs, it addresses the technical problems of traditional manual transport: low work efficiency, wasted human resources, and high cost.
Technical scheme: to achieve the above purpose, the technical solution adopted by the present invention is as follows.
A positioning method based on machine vision comprises the following steps:
Step 1: mount a camera on the object to be positioned and calibrate it, obtaining a calibration relation table between relative pixel positions in the image and actual relative positions;
Step 2: design positioning labels and place them in the environment where the object to be positioned operates; the label content comprises the position information and orientation information of the label itself;
Step 3: photograph the environment with the camera to obtain an image containing a positioning label, and analyze the image to obtain the position and orientation of the label in the image and the label content;
Step 4: solve the relative position of the image center and the positioning label in the image, and combine it with the label content to obtain the pose of the image center in the actual environment.
Further, in the present invention, camera calibration comprises the following steps:
Step 1: photograph a standard calibration image;
Step 2: choose mark points evenly over the grid in the standard calibration image, and record each mark point's pixel position in the calibration image together with its actual position;
Step 3: from the pixel positions and actual positions of the mark points, build the calibration relation table.
Further, in the present invention, the positioning label is square in outline and uses two colors. It consists of a border and multiple color blocks inside the border; each color block has one color, the border has one color, and the label content is encoded by the combination of the different color blocks.
Further, in the present invention, image analysis proceeds as follows: the image is binarized; pixels that may belong to a positioning label are extracted by color; connected-domain detection is performed, and positioning labels are filtered out by connected-domain size, aspect ratio, position, and surrounding background, giving the label's position in the image. Line detection by Hough transform then gives the label's orientation in the image, after which the label content is read.
Beneficial effects:
The present invention digitizes the environment by designing positioning labels. The camera acquires an image containing a label; an algorithm processes the image to recognize the position, orientation, and content of the label; and, from the position and angle relationships between the image center and the label, the camera's calibration relation table, and the relevant coordinate transformations, the position and attitude of the object in the environment are obtained. This realizes AGV positioning in the environment and is of great significance for automating cart transport.
The present invention replaces manual methods with machine vision. Exploiting the stable operation and low cost of cameras, and using suitable handling procedures and advanced image processing algorithms, it can greatly improve positioning accuracy and speed, reduce the dependence of warehouse goods transport on people, and raise labor productivity.
Brief description of the drawings
Fig. 1 is the flow chart of the present invention;
Fig. 2 is a schematic diagram of the positioning label;
Fig. 3 is a schematic diagram of the region division of the positioning label;
Fig. 4 is a schematic diagram of the positioning label content;
Fig. 5 is the calibration image taken at a distance of 3 m from the target;
Fig. 6 shows the mark points of the 3 m calibration image;
Fig. 7 is the flow chart of the image processing module;
Fig. 8 shows the positioning label region images;
Fig. 9 is a schematic diagram of the positioning label angle calculation;
Figure 10 shows the four possible orientations of the positioning label after rotation.
Embodiments
The present invention is further described below with reference to the accompanying drawings.
1 Positioning label design
The positioning label designed in this method, shown in Fig. 2, contains 16 bits of binary information divided into 3 parts: 10 data bits, 4 direction flag bits, and 2 check bits. The data bits represent the label's position in the environment, and the direction flag bits represent the label's orientation in the environment.
1.1 Positioning label size and shape
The positioning label designed in the present invention uses two colors, red and white. It is an 18 cm × 18 cm square divided into two regions: a border region and a data region. As shown in Fig. 3, the hatched outer area is the border region and the unhatched middle area is the data region. The border region is a red frame 2.25 cm wide; the data region consists of 16 red or white squares of 3.375 cm × 3.375 cm in a 4 × 4 arrangement. After binarization, a red square represents 0 and a white square represents 1.
1.2 Positioning label content
As shown in Fig. 4, the 16 squares are numbered consecutively. Because the color blocks differ in color, the data region forms a 16-bit binary message A0A1A2A3A4A5A6A7A8A9A10A11A12A13A14A15, divided into 3 parts: 10 data bits A2A4A5A6A7A9A10A11A13A14, 4 direction flag bits A0A3A12A15, and 2 check bits A1A8. The 10 data bits have 1024 different combinations and can therefore represent 1024 different labels. The 4 direction flag bits sit at the four corners; one corner is chosen whose color differs from the other three corners in every label, so that the label orientation can later be determined uniquely. In this embodiment A15 is white and A0, A3, A12 are red. The 2 check bits verify whether the 10 data bits are correct; those skilled in the art can choose a concrete check scheme as required.
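For illustration only, here is a minimal Python sketch of how such a 16-bit label could be packed and unpacked. The bit positions follow the layout above; the parity scheme in the check bits is an assumption, since the patent deliberately leaves the concrete check method to the practitioner.

```python
# Hypothetical sketch of the 16-bit label layout described above.
# The parity scheme is an assumption: the patent leaves the check bits open.
DATA_BITS = [2, 4, 5, 6, 7, 9, 10, 11, 13, 14]   # 10 data bits
DIRECTION_BITS = [0, 3, 12, 15]                   # corner direction flags
CHECK_BITS = [1, 8]                               # 2 check bits

def encode_label(label_id: int) -> list[int]:
    """Build the 16 squares (0 = red, 1 = white) for a label id in 0..1023."""
    assert 0 <= label_id < 1024
    bits = [0] * 16
    for i, pos in enumerate(DATA_BITS):
        bits[pos] = (label_id >> (9 - i)) & 1
    # Direction flags: only A15 is white; A0, A3, A12 stay red (0), per the embodiment.
    bits[15] = 1
    # Assumed check scheme: even parity over the two halves of the data bits.
    bits[1] = sum(bits[p] for p in DATA_BITS[:5]) % 2
    bits[8] = sum(bits[p] for p in DATA_BITS[5:]) % 2
    return bits

def decode_label(bits: list[int]) -> int | None:
    """Return the label id, or None if orientation or parity checks fail."""
    if [bits[p] for p in DIRECTION_BITS] != [0, 0, 0, 1]:
        return None  # wrong orientation: caller should rotate by 90 degrees and retry
    if bits[1] != sum(bits[p] for p in DATA_BITS[:5]) % 2:
        return None
    if bits[8] != sum(bits[p] for p in DATA_BITS[5:]) % 2:
        return None
    value = 0
    for p in DATA_BITS:
        value = (value << 1) | bits[p]
    return value

if __name__ == "__main__":
    assert decode_label(encode_label(618)) == 618
```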
1.3 Environment digitization
A wide-angle camera is fixed on the AGV; its field of view at a distance of 3 m from the target reaches 4 m × 3 m. Positioning labels are therefore pasted above the AGV operating region at a spacing of 3 m × 3 m. The data bits of each label correspond one-to-one with the label's coordinates and position information in the environment, and all labels face the same direction, i.e. the direction flag positions are uniform and the placement direction is consistent, which completes the digitization of the environment. A photograph is taken from directly below a label, aimed at the label, with the AGV's direction of travel taken as the positive direction; this photograph records the position of the label and the angular relationship of the label's orientation relative to the label center.
2 Camera calibration
To ensure that the AGV can obtain clear images while moving, an industrial camera with a full-frame (global) exposure mode is used, so that the exposure time can be set programmatically. To make the captured image range as large as possible, a wide-angle lens with a field of view of about 90° is adopted. A wide-angle lens distorts the image considerably, so the actual positional relationship between two points cannot be obtained directly from their positions in the image. The first step of the present invention is therefore to calibrate the camera, build a relation table between pixel distances and actual distances, and convert positional relationships in the image into actual positional relationships.
2.1 Taking pictures
To make it convenient to measure the actual distance between points in the image, a gridded wall is chosen as the photographic target. Because the captured range varies with the distance between camera and target, the present invention takes 10 pictures at equal intervals between 2.5 m and 3.4 m from the target and calibrates each of the 10 distances separately, so that in later actual use the calibration relation table matching the shooting distance can be selected. Fig. 5 shows the calibration image obtained with the camera 3 m from the target; the horizontal and vertical center lines in the figure are used to check whether the captured image is aligned.
2.2 Marking pictures
Measure the width and height of each grid cell of the photographed target and select mark points at a certain density, shown as triangles in Fig. 6.
2.3 Building the relation table
Taking the image center as the reference, record the pixel position of each mark point in Fig. 6. The grid cell in which each mark point lies gives its actual position relative to the image center; the pixel positions and actual positions are recorded in one-to-one correspondence in the calibration relation table.
In practical application, suppose the center of the positioning label detected in the image has pixel coordinates (x_label, y_label). The four nearest points in the calibration relation table are found: the top-left point (x_tl, y_tl), the top-right point (x_tr, y_tr), the bottom-left point (x_bl, y_bl), and the bottom-right point (x_br, y_br), whose actual positions relative to the image center, from the calibration relation table, are (X_tl, Y_tl), (X_tr, Y_tr), (X_bl, Y_bl), and (X_br, Y_br) respectively. From the pixel position of the label center relative to these four nearest points, geometric proportion yields its actual position relative to the image center, (X_label, Y_label).
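A minimal sketch of this table lookup by geometric proportion (bilinear interpolation over the four nearest calibrated points). The table format, a dict mapping mark-point pixel positions to actual positions, is an assumption for illustration.

```python
import numpy as np

def lookup_actual_position(table, x, y):
    """Bilinearly interpolate the actual position of image point (x, y).

    `table` is an assumed dict {(px, py): (X, Y)} mapping the pixel position
    of each mark point (on a full grid) to its actual position relative to
    the image center.
    """
    xs = sorted({px for px, _ in table})
    ys = sorted({py for _, py in table})
    # Nearest calibrated columns/rows bracketing (x, y); clamp at the borders.
    x0 = max([v for v in xs if v <= x], default=xs[0])
    x1 = min([v for v in xs if v >= x], default=xs[-1])
    y0 = max([v for v in ys if v <= y], default=ys[0])
    y1 = min([v for v in ys if v >= y], default=ys[-1])
    tl, tr = np.array(table[(x0, y0)]), np.array(table[(x1, y0)])
    bl, br = np.array(table[(x0, y1)]), np.array(table[(x1, y1)])
    tx = 0.0 if x1 == x0 else (x - x0) / (x1 - x0)
    ty = 0.0 if y1 == y0 else (y - y0) / (y1 - y0)
    top = tl + tx * (tr - tl)        # interpolate along the upper pair
    bottom = bl + tx * (br - bl)     # interpolate along the lower pair
    return tuple(top + ty * (bottom - top))
```

With the 10 distance-specific tables described in section 2.1, the caller would first select the table matching the current shooting distance and then interpolate within it.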
3 Positioning label recognition and processing
The present invention acquires an image containing a positioning label and uses digital image analysis to compute the actual bearing of the image center (the AGV position) relative to the label from the label's position and orientation in the image. Recognizing the label content determines which label it is, and hence the actual coordinate position corresponding to that label; combining the two gives the position and orientation of the AGV in the actual environment. The flow chart of the image processing module is shown in Fig. 7.
3.1 Preprocessing
First, the wide-angle camera, aimed at the region above the operating area, captures a picture containing a positioning label; extracting the red pixel information from the picture binarizes the image effectively.
The binarized image is then eroded and dilated, which effectively filters background noise and smooths the label edges.
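A minimal OpenCV sketch of this red-based binarization followed by erosion and dilation (a morphological opening); the HSV thresholds are assumed values that would need tuning to the actual label color.

```python
import cv2
import numpy as np

def binarize_red(bgr: np.ndarray) -> np.ndarray:
    """Return a binary mask of red pixels, opened to suppress noise.

    The HSV thresholds are assumptions; red wraps around hue 0, so two
    hue ranges are combined.
    """
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    lo = cv2.inRange(hsv, (0, 80, 60), (10, 255, 255))
    hi = cv2.inRange(hsv, (170, 80, 60), (180, 255, 255))
    mask = cv2.bitwise_or(lo, hi)
    # Erode then dilate (opening): removes speckle noise and smooths label edges.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    mask = cv2.erode(mask, kernel)
    mask = cv2.dilate(mask, kernel)
    return mask
```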
3.2 Positioning label information recognition
Theoretical basis: image recognition. The first step of image recognition is to extract effective image features; here, color information and Hough-transform line detection are mainly used, and the detected image is compared against the positioning label image to obtain the recognition result.
3.2.1 Positioning label position recognition
Connected-domain detection is performed on the binary image after background-noise filtering, giving the position, width, and height of each connected domain in the image. Whether a connected domain is a positioning label is judged mainly by the following criteria (a sketch of this filtering follows the list):
1. Size of the connected domain. The label size is fixed, so once the camera is fixed, the label's size in the image should lie within a certain range;
2. Aspect ratio of the connected domain. The label's own aspect ratio is 1:1; because the present invention uses a wide-angle lens and distortion near the image edge is large, the aspect ratio is allowed to lie between 0.3 and 1.7;
3. Position of the label. Because of the wide-angle lens, distortion near the image edge is large; a connected domain appearing at the image edge is therefore not treated as a positioning label, since doing so would hurt the label content recognition rate;
4. Background around the label. The wall around a label is non-red, so in the binary image the surroundings of a label should be black background. Based on this feature, a background check is applied to connected domains that meet the preceding 3 conditions: a box is drawn about 4 pixels outside the connected domain on each side, and if black pixels make up more than 98% of the box pixels outside the connected domain, the domain is recognized as a positioning label; otherwise it is not.
Further, from the width, height, and vertex positions of the connected domain, geometric proportion determines the label's center position in the image, (x_label, y_label).
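The sketch below applies the four criteria using OpenCV connected-component statistics. The area limits and edge margin are assumed values to be tuned to the camera setup; the 0.3-1.7 aspect-ratio band and the 98% background test follow the text above.

```python
import cv2
import numpy as np

def find_label_candidates(mask: np.ndarray,
                          min_area: int = 400, max_area: int = 40000,
                          edge_margin: int = 20) -> list[tuple[float, float]]:
    """Filter connected domains by the four criteria; return label centers.

    `mask` is the red binary image (nonzero = red). Area limits and edge
    margin are assumptions; tune them to the actual camera and label size.
    """
    h, w = mask.shape
    n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    centers = []
    for i in range(1, n):  # component 0 is the background
        x, y, bw, bh, area = stats[i]
        if not (min_area <= area <= max_area):          # criterion 1: size
            continue
        if not (0.3 <= bw / bh <= 1.7):                 # criterion 2: aspect ratio
            continue
        if (x < edge_margin or y < edge_margin or       # criterion 3: not at edge
                x + bw > w - edge_margin or y + bh > h - edge_margin):
            continue
        # Criterion 4: a box ~4 px outside the domain should be mostly black.
        pad = 4
        box = mask[y - pad:y + bh + pad, x - pad:x + bw + pad]
        background = box.size - area                    # pixels outside the domain
        black = box.size - int(np.count_nonzero(box))   # black pixels in the box
        if background > 0 and black / background < 0.98:
            continue
        centers.append(tuple(centroids[i]))             # (x_label, y_label)
    return centers
```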
3.2.2 Positioning label orientation recognition
The position recognition above yields the connected domains that may be positioning labels, together with the connected-domain width W_L and height H_L and the label center coordinates (x_label, y_label). Centered on the label center (x_label, y_label), a region of width 2·W_L and height 2·H_L is selected, as shown in Fig. 8(a). Within this region the binary image is processed further to recognize the label's orientation in the image, as follows.
A Hough transform is applied in the region containing the label to detect lines. The detected lines should be mutually parallel or perpendicular; filtering out lines that do not satisfy this condition leaves the edge lines of the positioning label.
By observing the direction flag bits of the label and comparing with the photograph originally taken during environment digitization, a change in the angle of the direction flags about the label center shows that the AGV's current direction in the environment differs from the defined positive direction.
How much the AGV's angle has changed is deduced from the deflection angle of the positioning label.
As shown in Fig. 9, for uniformity of calculation the label edge line to the right of the direction flag in Fig. 4 is taken as the reference line L. In the picture, the reference line L makes an angle of 90° − α with the vertical direction. Rotating the label counterclockwise by α about its center makes the label edges parallel or perpendicular to the picture edges. But because the label carries orientation information, and of the 4 direction flag bits A0, A3, A12, A15 only A15 is white while the other three are red, four situations can arise after this rotation, as shown in Figure 10. To make the label orientation fully correct, i.e. consistent with the actual placement of the direction flags, the label must be rotated by a further angle θ; for cases (a), (b), (c), (d) in Figure 10, θ is 0°, 90°, 180°, and 270° respectively. After the label in Fig. 9 is rotated by α + θ, its orientation is guaranteed correct. Working backwards, the AGV has deflected by α + θ relative to the positive direction, and the deflection direction is counterclockwise.
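A sketch of this two-stage angle recovery under stated assumptions: α is estimated from Hough lines folded into [0°, 90°), and θ from which corner block of the axis-aligned label is white. The corner-to-θ mapping and the sign conventions here are illustrative assumptions, not the patent's exact geometry.

```python
import cv2
import numpy as np

def label_deflection(region: np.ndarray) -> float:
    """Estimate the deflection angle alpha + theta (degrees, CCW) of a label.

    `region` is the red binary sub-image around one label (as in Fig. 8(a)),
    with nonzero = red. A sketch under assumptions: alpha comes from the
    dominant Hough line, theta from the single white (non-red) corner flag.
    """
    edges = cv2.Canny(region, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold=40)
    if lines is None:
        return 0.0
    # Fold line angles into [0, 90): parallel and perpendicular edges agree there.
    thetas = np.rad2deg(lines[:, 0, 1]) % 90.0
    alpha = float(np.median(thetas))
    # Rotate the region by alpha so the label edges become axis-aligned.
    h, w = region.shape
    m = cv2.getRotationMatrix2D((w / 2, h / 2), alpha, 1.0)
    upright = cv2.warpAffine(region, m, (w, h))
    ys, xs = np.nonzero(upright)
    if len(xs) == 0:
        return alpha
    x0, x1, y0, y1 = xs.min(), xs.max(), ys.min(), ys.max()
    if x1 - x0 < 8 or y1 - y0 < 8:
        return alpha
    qx, qy = (x1 - x0) // 4, (y1 - y0) // 4
    # theta candidates: mean red density of the four corner blocks.
    # The corner-to-theta mapping below is an assumption for illustration.
    corners = {
        0:   upright[y1 - qy:y1, x1 - qx:x1].mean(),   # bottom-right
        90:  upright[y1 - qy:y1, x0:x0 + qx].mean(),   # bottom-left
        180: upright[y0:y0 + qy, x0:x0 + qx].mean(),   # top-left
        270: upright[y0:y0 + qy, x1 - qx:x1].mean(),   # top-right
    }
    theta = min(corners, key=corners.get)  # white flag = lowest red density
    return alpha + theta
```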
3.2.3 Label content recognition
According to the computed angle α, the region image containing the label is rotated and enlarged to 9 times its original size to improve recognition accuracy, as shown in Fig. 8(b). The rotated and enlarged region image is binarized by red, and repeated dilation-erosion operations fill in the hollow (white) squares of the label, as shown in Fig. 8(c). Connected-domain detection then gives the position of the enlarged label in the region image and, in Fig. 8(c), the label width W_NL and label height H_NL.
The label content is read from Fig. 8(b) at the label position found in Fig. 8(c). The label border used in the present invention is 2.25 cm wide and the data squares are 3.375 cm wide, so each row or column of the label is composed in the ratio 2:3:3:3:3:2. Using the enlarged label width W_NL and height H_NL computed above, the contents of the 16 data blocks are identified at this ratio. Taking the 7th data block (row 2, column 3) as an example, the recognition method is as follows:
1. Determine the width and height of each square. From the ratio above, each data block is (3/16)·W_NL wide and (3/16)·H_NL high.
2. Determine the position of the 7th square in the label. Its center is (2/16 + 6/16 + (3/16)/2)·W_NL = (19/32)·W_NL from the label's left edge and (2/16 + 3/16 + (3/16)/2)·H_NL = (13/32)·H_NL from the label's top edge.
3. Because the camera's resolving power is limited, color judgments near red-white boundaries carry some error; moreover, the image may be distorted and the label squares are not necessarily exact. Therefore only the central area of each square is examined, spanning from 1/5 to 3/5 of the square's width and height.
4. Each pixel in the central area determined above is tested for red; if the number of non-red pixels exceeds a certain proportion of the total pixel count, the 7th data block is identified as "1", otherwise as "0".
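A minimal sketch of this block-by-block reading for an upright, red-binarized label image. It uses the 2:3:3:3:3:2 layout and samples a centered window of 2/5 of each block's width and height, in the spirit of the 1/5-3/5 central-area rule above; the 50% red-ratio threshold is an assumed value.

```python
import numpy as np

def read_bits(label: np.ndarray) -> list[int]:
    """Read the 16 data blocks of an upright label image.

    `label` is the red-binarized, rotated label region (nonzero = red).
    Block centers follow the 2:3:3:3:3:2 row/column layout; e.g. the
    block in row 2, column 3 is centered at (19/32)*W, (13/32)*H.
    """
    H, W = label.shape
    bits = []
    for row in range(4):
        for col in range(4):
            # Block center: border (2/16) + preceding blocks + half a block.
            cx = (2 + 3 * col + 1.5) / 16.0 * W
            cy = (2 + 3 * row + 1.5) / 16.0 * H
            bw, bh = (3 / 16) * W, (3 / 16) * H
            # Centered sampling window of 2/5 the block size (assumed placement).
            x0, x1 = int(cx - 0.2 * bw), int(cx + 0.2 * bw)
            y0, y1 = int(cy - 0.2 * bh), int(cy + 0.2 * bh)
            patch = label[y0:y1, x0:x1]
            red_ratio = np.count_nonzero(patch) / max(patch.size, 1)
            bits.append(0 if red_ratio > 0.5 else 1)  # red = 0, white = 1
    return bits
```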
4 Calculating the relative position relationship
The triangle in Fig. 9 represents the center of the image. From the image center coordinates and the label center coordinates (x_label, y_label), the angle β between the line connecting them and the vertical direction is calculated.
After the reference line L is rotated counterclockwise by α + θ, the label reaches the correct orientation; continuing to rotate counterclockwise by β brings the reference line L onto the line from the image center M to the label center. The angle of the image center M relative to the label center is therefore α + β + θ.
From the image center coordinates and the label center coordinates (x_label, y_label), the pixel distance between the two points is computed and converted into the actual distance m using the camera calibration relation table. Combined with the angle α + β + θ of the image center relative to the label computed above, this relationship is converted into rectangular coordinates, giving the actual position of the AGV relative to the label. The label content is then used to look up the real coordinate position of the label center in the environment, (X_label, Y_label), and the actual position of the AGV in the working environment, (X_AGV, Y_AGV), follows from formula (1).
X_AGV = X_label + m × sin(α + β + θ)
Y_AGV = Y_label + m × cos(α + β + θ)        (1)
In summary, the position of the AGV, (X_AGV, Y_AGV), and its deflection angle α + θ are obtained.
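Formula (1) and the angle bookkeeping above combine into a few lines; the sketch below assumes angles in degrees measured counterclockwise, with m already converted by the calibration relation table.

```python
import math

def agv_pose(X_label: float, Y_label: float, m: float,
             alpha: float, beta: float, theta: float) -> tuple[float, float, float]:
    """Combine the quantities above into the AGV pose, per formula (1).

    Angles are in degrees, counterclockwise; m is the actual distance from
    the image center to the label center, from the calibration relation table.
    Returns (X_AGV, Y_AGV, deflection angle alpha + theta).
    """
    a = math.radians(alpha + beta + theta)
    X_AGV = X_label + m * math.sin(a)   # formula (1), x component
    Y_AGV = Y_label + m * math.cos(a)   # formula (1), y component
    return X_AGV, Y_AGV, alpha + theta
```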
The above is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make several improvements and modifications without departing from the principles of the invention, and such improvements and modifications should also be regarded as falling within the protection scope of the present invention.

Claims (4)

1. A positioning method based on machine vision, characterized by comprising the following steps:
Step 1: mount a camera on the object to be positioned and calibrate it, obtaining a calibration relation table between relative pixel positions in the image and actual relative positions;
Step 2: design positioning labels and place them in the environment where the object to be positioned operates, the label content comprising the position information and orientation information of the label itself;
Step 3: photograph the environment with the camera to obtain an image containing a positioning label, and analyze the image to obtain the position and orientation of the label in the image and the label content;
Step 4: solve the relative position of the image center and the positioning label in the image, and combine it with the label content to obtain the pose of the image center in the actual environment.
2. The positioning method based on machine vision according to claim 1, characterized in that camera calibration comprises the following steps:
Step 1: photograph a standard calibration image;
Step 2: choose mark points evenly over the grid in the standard calibration image, and record each mark point's pixel position in the calibration image together with its actual position;
Step 3: from the pixel positions and actual positions of the mark points, build the calibration relation table.
3. The positioning method based on machine vision according to claim 1, characterized in that the positioning label is square in outline and uses two colors; the label consists of a border and multiple color blocks inside the border, each color block having one color and the border having one color, the label content being encoded by the combination of the different color blocks.
4. The positioning method based on machine vision according to claim 3, characterized in that, when analyzing the image, the image is binarized; pixels that may belong to a positioning label are extracted by color; connected-domain detection is performed, and positioning labels are filtered out by connected-domain size, aspect ratio, position, and surrounding background to obtain the label's position in the image; line detection by Hough transform gives the label's orientation in the image; and the label content is then read.
CN201510263245.8A 2015-05-21 2015-05-21 Positioning method based on machine vision Active CN104835173B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510263245.8A CN104835173B (en) 2015-05-21 2015-05-21 Positioning method based on machine vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510263245.8A CN104835173B (en) 2015-05-21 2015-05-21 Positioning method based on machine vision

Publications (2)

Publication Number Publication Date
CN104835173A true CN104835173A (en) 2015-08-12
CN104835173B CN104835173B (en) 2018-04-24

Family

ID=53813038

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510263245.8A Active CN104835173B (en) 2015-05-21 2015-05-21 Positioning method based on machine vision

Country Status (1)

Country Link
CN (1) CN104835173B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100141767A1 (en) * 2008-12-10 2010-06-10 Honeywell International Inc. Semi-Automatic Relative Calibration Method for Master Slave Camera Control
CN102773862A (en) * 2012-07-31 2012-11-14 山东大学 Quick and accurate locating system used for indoor mobile robot and working method thereof
CN103729892A (en) * 2013-06-20 2014-04-16 深圳市金溢科技有限公司 Vehicle positioning method and device and processor
CN103400373A (en) * 2013-07-13 2013-11-20 西安科技大学 Method for automatically identifying and positioning coordinates of image point of artificial mark in camera calibration control field
CN103994762A (en) * 2014-04-21 2014-08-20 刘冰冰 Mobile robot localization method based on data matrix code

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105243665A (en) * 2015-10-10 2016-01-13 中国科学院深圳先进技术研究院 Robot biped positioning method and apparatus
CN109313417B (en) * 2015-11-16 2021-09-24 Abb瑞士股份有限公司 Aiding in robot positioning
CN109313417A (en) * 2015-11-16 2019-02-05 Abb瑞士股份有限公司 Help robot localization
CN106225787A (en) * 2016-07-29 2016-12-14 北方工业大学 Unmanned aerial vehicle visual positioning method
CN106225787B (en) * 2016-07-29 2019-03-29 北方工业大学 Unmanned aerial vehicle visual positioning method
CN106500714A (en) * 2016-09-22 2017-03-15 福建网龙计算机网络信息技术有限公司 A kind of robot navigation method and system based on video
CN106595634A (en) * 2016-11-30 2017-04-26 深圳市有光图像科技有限公司 Method for recognizing mobile robot by comparing images and mobile robot
CN109308072A (en) * 2017-07-28 2019-02-05 杭州海康机器人技术有限公司 The Transmission Connection method and AGV of automated guided vehicle AGV
CN108280853A (en) * 2018-01-11 2018-07-13 深圳市易成自动驾驶技术有限公司 Vehicle-mounted vision positioning method, device and computer readable storage medium
WO2019154435A1 (en) * 2018-05-31 2019-08-15 上海快仓智能科技有限公司 Mapping method, image acquisition and processing system, and positioning method
CN110006420A (en) * 2018-05-31 2019-07-12 上海快仓智能科技有限公司 Build drawing method, image acquisition and processing system and localization method
CN110006420B (en) * 2018-05-31 2024-04-23 上海快仓智能科技有限公司 Picture construction method, image acquisition and processing system and positioning method
CN112446916A (en) * 2019-09-02 2021-03-05 北京京东乾石科技有限公司 Method and device for determining parking position of unmanned vehicle
CN112446916B (en) * 2019-09-02 2024-09-20 北京京东乾石科技有限公司 Method and device for determining parking position of unmanned vehicle
CN110658215A (en) * 2019-09-30 2020-01-07 武汉纺织大学 PCB automatic splicing detection method and device based on machine vision
CN110887488A (en) * 2019-11-18 2020-03-17 天津大学 Unmanned rolling machine positioning method
CN111273052A (en) * 2020-03-03 2020-06-12 浙江省特种设备科学研究院 Escalator handrail speed measurement method based on machine vision
CN112902843A (en) * 2021-02-04 2021-06-04 北京创源微致软件有限公司 Label attaching effect detection method
CN113554591A (en) * 2021-06-08 2021-10-26 联宝(合肥)电子科技有限公司 Label positioning method and equipment
CN113554591B (en) * 2021-06-08 2023-09-01 联宝(合肥)电子科技有限公司 Label positioning method and device
CN113343962B (en) * 2021-08-09 2021-10-29 山东华力机电有限公司 Visual perception-based multi-AGV trolley working area maximization implementation method
CN113343962A (en) * 2021-08-09 2021-09-03 山东华力机电有限公司 Visual perception-based multi-AGV trolley working area maximization implementation method
CN113781566A (en) * 2021-09-16 2021-12-10 北京清飞科技有限公司 Positioning method and system for automatic image acquisition trolley based on high-speed camera vision
CN113758423A (en) * 2021-11-10 2021-12-07 风脉能源(武汉)股份有限公司 Method for determining position of image acquisition equipment based on image inner scale
CN113758423B (en) * 2021-11-10 2022-02-15 风脉能源(武汉)股份有限公司 Method for determining position of image acquisition equipment based on image inner scale
CN115872018A (en) * 2022-12-16 2023-03-31 河南埃尔森智能科技有限公司 Electronic tray labeling and deviation rectifying system and method based on 3D visual sensing
CN115872018B (en) * 2022-12-16 2024-06-25 河南埃尔森智能科技有限公司 Electronic tray labeling correction system and method based on 3D visual sensing

Also Published As

Publication number Publication date
CN104835173B (en) 2018-04-24

Similar Documents

Publication Publication Date Title
CN104835173A (en) Positioning method based on machine vision
US11625851B2 (en) Geographic object detection apparatus and geographic object detection method
CN102773862B (en) Quick and accurate locating system used for indoor mobile robot and working method thereof
CN103411553B (en) The quick calibrating method of multi-linear structured light vision sensors
US20230236280A1 (en) Method and system for positioning indoor autonomous mobile robot
CN107689061A (en) Rule schema shape code and localization method for indoor mobile robot positioning
CN101398907B (en) Two-dimension code structure and decoding method for movable robot
CN202702247U (en) Rapid and accurate positioning system used for indoor mobile robot
CN106990776B (en) Robot homing positioning method and system
CN109446973B (en) Vehicle positioning method based on deep neural network image recognition
CN112648976B (en) Live-action image measuring method and device, electronic equipment and storage medium
LU500407B1 (en) Real-time positioning method for inspection robot
CN113052903A (en) Vision and radar fusion positioning method for mobile robot
Wang et al. Autonomous landing of multi-rotors UAV with monocular gimbaled camera on moving vehicle
CN114415736A (en) Multi-stage visual accurate landing method and device for unmanned aerial vehicle
CN111811502B (en) Motion carrier multi-source information fusion navigation method and system
CN111316324A (en) Automatic driving simulation system, method, equipment and storage medium
CN109737962B (en) Machine vision autonomous positioning method and system based on special circular ring coding
CN111964681B (en) Real-time positioning system of inspection robot
CN116736259A (en) Laser point cloud coordinate calibration method and device for tower crane automatic driving
CN114511620B (en) Structure displacement monitoring method based on Mask R-CNN
Yuan et al. Estimation of vehicle pose and position with monocular camera at urban road intersections
CN102168973A (en) Automatic navigating Z-shaft positioning method for omni-directional vision sensor and positioning system thereof
Jende et al. Low-level tie feature extraction of mobile mapping data (MLS/images) and aerial imagery
Lee et al. Semi-automatic framework for traffic landmark annotation

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
EXSB Decision made by sipo to initiate substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant