Summary of the Invention
In order to solve the above technical problems, the present invention provides a robot localization and map construction system based on ceiling image information, in which the changes between consecutive acquired images are more apparent, facilitating map construction.
The robot localization and map construction system based on ceiling image information of the present invention includes: an image acquisition module, an image processing module, a feature extraction module, a feature tracking module and a storage management module;
The image acquisition module is mounted on the robot with its optical axis perpendicular to the ceiling; it photographs the ceiling while the robot is moving, obtaining current ceiling image information;
The image processing module is configured to process the ceiling image information obtained by the image acquisition module;
The feature extraction module is configured to extract features from the image information processed by the image processing module, obtaining an image feature set;
The feature tracking module is configured to perform matched tracking on each image feature in the image feature set extracted by the feature extraction module, obtaining a tracked image feature set;
The storage management module is configured to store the image feature set obtained by the feature tracking module and to build a map according to the image feature set.
Wherein, the image features are corner features and the image feature set is a corner feature set.
Wherein, the feature tracking module includes an optical flow tracking module and a correction module;
The optical flow tracking module is configured to track each corner feature in the corner feature set of the current image by the Lucas-Kanade optical flow method, obtaining a tracked corner feature set;
The correction module is configured to correct the tracked corner feature set obtained by the optical flow tracking module.
Further, the correction module extracts corner features again on the current frame image to obtain a new corner feature set, and takes, as the corrected corner feature set, those corner features in the tracked corner feature set obtained by the optical flow tracking module that coincide, within a preset window range, with corner features in the new corner feature set.
Alternatively, the correction module includes a first tracking unit, a second tracking unit and a judging unit:
Assume the current frame image is image imgA, its corner feature set is A, and the next frame image is imgB;
The first tracking unit is configured to track in the next frame image imgB based on corner feature set A, obtaining corner feature set B;
The second tracking unit is configured to track back in the current frame image imgA based on corner feature set B, obtaining corner feature set B2;
The judging unit is configured to judge whether each corner feature label in corner feature set A is identical to the corresponding corner feature label in corner feature set B2; if so, the corner features with identical labels are added to a prepared corner feature set.
Alternatively, the correction module includes an extraction unit, a prediction unit and a comparison unit:
The extraction unit is configured to extract the corner feature set B2 of the current image; some corner features in B2 are already present in the tracked corner feature set B obtained by tracking into the current image;
The prediction unit is configured to predict the corner feature set B' of the current image according to the corner features in the corner feature library;
The comparison unit is configured to compare the corner features of B2, B and B'; corner features in B2 that appear in neither B nor B' are judged as new corner features, and the new corner features that pass a threshold are added to the corner feature library.
Compared with the prior art, the beneficial effects of the present invention are as follows. In the robot localization and map construction system based on ceiling image information provided by the embodiments of the present invention, the camera is oriented toward the ceiling, that is, the direction of motion of the robot is perpendicular to the optical axis of the camera, so the changes between consecutive acquired images are more apparent, which facilitates map construction. Further, when tracking image features, the conventional tracking result is corrected in order to obtain better tracking information, so the tracked corner features are more stable and accurate, the precision of map construction is higher, and the constructed map is therefore more accurate.
Embodiment
The embodiments of the present invention are described in further detail below with reference to the accompanying drawings and examples. The following embodiments are intended to illustrate the present invention, but not to limit its scope.
Referring to Fig. 1, an embodiment of the present invention provides a robot localization and map construction system based on ceiling image information. The visual information acquisition direction of the system faces the ceiling, so that the changes between consecutive acquired images are large, facilitating map construction. The system specifically includes: an image acquisition module 1, an image processing module 2, a feature extraction module 3, a feature tracking module 4 and a storage management module 5;
The image acquisition module 1 is mounted on the robot with its optical axis perpendicular to the ceiling; it photographs the ceiling while the robot is moving, obtaining current ceiling image information;
The image acquisition module 1 may be, for example, a camera mounted on the robot with its optical axis perpendicular to the ceiling.
The image processing module 2 is configured to process the ceiling image information obtained by the image acquisition module 1;
Specifically, the image processing module 2 performs grayscale conversion, distortion correction and cropping on the ceiling image information obtained by the image acquisition module 1, so that the feature extraction module 3 can conveniently extract image features.
The images in the captured indoor ceiling video are color images based on the RGB color space. Such images not only occupy a large amount of storage but are also inconvenient to process. These color images therefore need to be converted into grayscale images, which are easier to handle.
A grayscale image is a monochrome image with 256 gray levels from black to white. Each pixel in the image is represented by 8 bits of data, so the gray value of a pixel is one of 256 gray levels between black and white. A grayscale image has only gray levels and no color variation.
In the YUV color space, the Y component represents the brightness level of a pixel, and this brightness value can express the gray value of the image. According to the conversion formula between the RGB and YUV color spaces, the correspondence between an RGB image and the Y component can be obtained, as shown in the following formula:
Y = 0.299R + 0.587G + 0.114B    (1)
An RGB image can be converted into a grayscale image according to formula (1).
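As a minimal illustrative sketch (not part of the original disclosure), the conversion can be carried out in Python with OpenCV; the file name is hypothetical, and cv2.cvtColor applies the same luma weighting as formula (1):

```python
import cv2
import numpy as np

frame = cv2.imread("ceiling_frame.png")            # color image, stored as B, G, R
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)     # applies the Y weighting of formula (1)

# Equivalent manual computation of formula (1).
b, g, r = frame[:, :, 0], frame[:, :, 1], frame[:, :, 2]
gray_manual = (0.299 * r + 0.587 * g + 0.114 * b).astype(np.uint8)
```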
Further, because the image acquisition module 1 obtains images through optical principles, the limitations of current optical technology make it impossible to produce a perfect lens, and it is also mechanically difficult to keep the lens and the imager exactly parallel; these factors cause lens distortion. For example, lines that should be straight appear curved in the picture. Such distortion affects a visual SLAM system, because visual SLAM requires the image features in the image information to be as accurate as possible, and distortion affects the extraction and tracking of image features as well as map construction. For the tracked corner features, this distortion shifts the tracked corner features relative to the actual environment; although the offset is small and can be self-corrected during motion, it affects the judgment of corner feature distances for a robot that relocalizes by matching corner features. Therefore, before the camera is used, the image acquisition module can be calibrated to obtain a distortion matrix, and the acquired image information is then corrected using the distortion matrix.
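As an illustrative sketch of this correction step, assuming the camera matrix and distortion coefficients have already been obtained by a standard calibration (the numeric values below are placeholders, not calibration results from the original disclosure):

```python
import cv2
import numpy as np

# Intrinsic parameters and distortion coefficients from a prior calibration (placeholder values).
camera_matrix = np.array([[600.0,   0.0, 320.0],
                          [  0.0, 600.0, 240.0],
                          [  0.0,   0.0,   1.0]])
dist_coeffs = np.array([-0.25, 0.08, 0.0, 0.0, 0.0])   # k1, k2, p1, p2, k3

gray = cv2.imread("ceiling_frame.png", cv2.IMREAD_GRAYSCALE)
undistorted = cv2.undistort(gray, camera_matrix, dist_coeffs)
```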
At the same time, because distortion in the acquired image increases with distance from the image center, in order to facilitate the storage of image features and to reduce the extraction of wall features, the image range actually used for feature extraction needs to be set smaller than the acquired image range; that is, feature extraction and tracking are not performed on the peripheral part of the image, while the image features can still be saved in full when they are preserved. A sketch of this cropping step follows.
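A short sketch of restricting feature extraction to the central region of the corrected image; the margin fraction is an assumed parameter:

```python
import cv2

undistorted = cv2.imread("ceiling_frame_undistorted.png", cv2.IMREAD_GRAYSCALE)
h, w = undistorted.shape
margin = 0.1   # assumed fraction of the image discarded at each border
roi = undistorted[int(h * margin):int(h * (1 - margin)),
                  int(w * margin):int(w * (1 - margin))]
# Feature extraction and tracking operate on roi; the full frame can still be saved.
```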
The feature extraction module 3 is configured to extract image features from the image information processed by the image processing module 2, obtaining an image feature set;
Image features include corner features, line features, edge features, texture features, color-block features and so on. Because corner features are relatively simple and fast to extract and use, and are also easy to store and apply to map construction, the embodiments of the present invention are illustrated by taking corner features as the image features.
Specifically, when the image features are corner features, the feature extraction module 3 may perform the extraction using the Harris corner method. Further, in order to obtain corner feature information of higher precision, a sub-pixel corner detection method may be used to refine the corner features extracted by the Harris corner method, obtaining the exact positions of the corner features. Harris corner extraction and sub-pixel corner detection belong to the common knowledge of those skilled in the art, so their detailed processes are not described in this embodiment.
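A minimal sketch of such an extraction with OpenCV, using a Harris-based detector followed by sub-pixel refinement; all parameter values are illustrative assumptions:

```python
import cv2
import numpy as np

gray = cv2.imread("ceiling_frame_undistorted.png", cv2.IMREAD_GRAYSCALE)

# Harris-based corner detection (illustrative parameters).
corners = cv2.goodFeaturesToTrack(gray, maxCorners=200, qualityLevel=0.01,
                                  minDistance=10, useHarrisDetector=True, k=0.04)

# Sub-pixel refinement of the detected corner positions.
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01)
corners = cv2.cornerSubPix(gray, np.float32(corners), winSize=(5, 5),
                           zeroZone=(-1, -1), criteria=criteria)
```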
The feature tracking module 4 is configured to perform matched tracking on each image feature in the image feature set extracted by the feature extraction module 3, obtaining a tracked image feature set;
Referring to Fig. 2, an embodiment of the present invention further provides a schematic diagram of another robot localization and map construction system based on ceiling image information, in which the feature tracking module 4 includes an optical flow tracking module 41 and a correction module 42. The optical flow tracking module 41 is configured to track each corner feature in the corner feature set cornersA of the current image imgA by the Lucas-Kanade optical flow method, obtaining a tracked corner feature set cornersB.
Specifically, the optical flow tracking module 41 implements tracking by the optical flow method. Optical flow refers to the following: the movement of positions on an object that carry optical features is projected onto the image plane to express the change of the image; optical flow is thus formed on the image plane and contains information about the moving object.
The optical flow method includes the following steps:
1) For each corner feature in the corner feature set cornersA of the current image imgA, obtain its corresponding position in the next frame image imgB, and compute the intensity gradient Ix in the X direction and the intensity gradient Iy in the Y direction. The gradients can be computed by difference approximation or with an operator such as the Sobel operator; the specific calculation belongs to the common knowledge of those skilled in the art and is not detailed here.
2) Under the brightness constancy condition, establish the constraint equation system from the pixels in a preset window around the corner feature;
3) Convert the equation system obtained above into a matrix M;
4) Judge whether the matrix M obtained above is singular; if it is nonsingular, solve it to obtain the tracked corner feature set cornersB of cornersA in image imgB.
The Lucas-Kanade optical flow method belongs to the common knowledge of those skilled in the art, so the above process is not repeated in detail in this embodiment.
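As an illustrative sketch, this tracking step corresponds to OpenCV's pyramidal Lucas-Kanade implementation; the window size and other parameters are assumptions:

```python
import cv2

imgA = cv2.imread("frame_A.png", cv2.IMREAD_GRAYSCALE)   # current image
imgB = cv2.imread("frame_B.png", cv2.IMREAD_GRAYSCALE)   # next image

# cornersA: corner feature set of the current image, shape (N, 1, 2), float32.
cornersA = cv2.goodFeaturesToTrack(imgA, maxCorners=200, qualityLevel=0.01, minDistance=10)

# Track each corner of cornersA into imgB, obtaining the tracked set cornersB.
cornersB, status, err = cv2.calcOpticalFlowPyrLK(imgA, imgB, cornersA, None,
                                                 winSize=(21, 21), maxLevel=2)
trackedA = cornersA[status.flatten() == 1]
trackedB = cornersB[status.flatten() == 1]
```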
In practical applications, the images captured by the camera are often blurred, or the camera modifies the real image for the sake of appearance because it carries a built-in image effect processor; both affect the extraction and tracking of corner features. For example, when a jagged-edge phenomenon appears on lines in oblique directions, or when image shaking suddenly produces strong graininess in a region of a certain frame that should be flat, such pixels may all be extracted as corner features, and the optical flow tracking module 41 may then continue to track these misjudged corner features. This misjudgment most often occurs with oblique lines: since corner features are judged to have been extracted on the oblique line, and the differences between neighboring pixels along the line are very small, when the image captured by the camera moves together with the camera, tracking these corner features may yield corner positions that do not change or that move less than the actual distance, because the line moves little in the direction perpendicular to itself and the pixels along the line are too similar, causing tracking errors. In other words, the optical flow tracking module 41 actually tracks the pixels within a window and cannot itself judge whether the pixels in the window constitute a feature point.
Therefore, using the optical flow tracking module 41 alone has the above defects, and the present invention further employs a correction module 42, configured to correct the tracked corner feature set cornersB obtained by the optical flow tracking module 41.
Specifically, the correction module 42 extracts corner features again on the current frame image to obtain a new corner feature set, and takes, as the corrected corner feature set, those corner features in the tracked corner feature set obtained by the optical flow tracking module 41 that coincide, within a preset window range, with corner features in the new corner feature set.
For example, let the current frame image be imgB and let the corner feature set obtained by tracking on image imgB be cornersB. The correction module 42 extracts corner features again on image imgB, obtaining a new corner feature set cornersB2. Let the small window range be cornerscale; in this experiment the value is a 7*7 window centered on the corner feature. For each corner feature in the tracked corner feature set cornersB, the corner feature is finally retained only if a corner feature of cornersB2 appears within the small window around it. In other words, the corner features of cornersB that coincide, within the preset window range cornerscale, with corner features of the new corner feature set cornersB2 form the corrected corner feature set and are added to the prepared corner feature set cornersC. It should be noted that these corner features take the position information of the new corner feature set cornersB2, while their corner feature label information does not change.
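A sketch, under the assumptions above (a 7*7 window, re-extraction with the same detector), of how this correction could be implemented; the function and variable names are hypothetical:

```python
import numpy as np

def correct_tracked_corners(tracked, reextracted, half_window=3):
    """Keep each tracked corner only if a re-extracted corner lies inside a
    (2*half_window+1)x(2*half_window+1) window around it; the kept corner takes
    the re-extracted position while its label (index in `tracked`) is preserved."""
    corrected = {}                                        # label -> corrected position
    for label, (x, y) in enumerate(tracked):
        d = np.abs(reextracted - np.array([x, y]))        # per-axis distances
        hits = np.where((d[:, 0] <= half_window) & (d[:, 1] <= half_window))[0]
        if hits.size:
            corrected[label] = reextracted[hits[0]]       # position taken from cornersB2
    return corrected

# tracked     = cornersB  reshaped to (N, 2)
# reextracted = cornersB2 reshaped to (M, 2)
# cornersC    = correct_tracked_corners(tracked, reextracted)
```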
Alternatively, referring to Fig. 3, which is a schematic diagram of the correction module 42 in Fig. 2, the correction module 42 includes a first tracking unit 421, a second tracking unit 422 and a judging unit 423:
Assume the current frame image is image imgA, its corner feature set is A, and the next frame image is imgB.
The first tracking unit 421 is configured to track in the next frame image imgB based on corner feature set A, obtaining corner feature set B.
The second tracking unit 422 is configured to track back in the current frame image imgA based on corner feature set B, obtaining corner feature set B2.
The judging unit 423 is configured to judge whether each corner feature label in corner feature set A is identical to the corresponding corner feature label in corner feature set B2; if so, the corner features with identical labels are added to the prepared corner feature set cornersC in preparation for the tracking of the next frame image.
The above correction module 42 can obtain relatively stable tracked corner feature information to a certain extent, because the correction module 42 removes some corner features that are not distinctive or are too similar to neighboring corner features, and retains the corner features with stronger features to participate in the tracking of the next frame image.
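A minimal sketch of this forward-backward check with OpenCV. The judging unit described above compares corner feature labels; the sketch approximates that by requiring each back-tracked corner to return close to its starting position, with an assumed 1-pixel threshold:

```python
import cv2
import numpy as np

imgA = cv2.imread("frame_A.png", cv2.IMREAD_GRAYSCALE)
imgB = cv2.imread("frame_B.png", cv2.IMREAD_GRAYSCALE)
A = cv2.goodFeaturesToTrack(imgA, maxCorners=200, qualityLevel=0.01, minDistance=10)

# Forward tracking A -> B, then backward tracking B -> A.
B,  stF, _ = cv2.calcOpticalFlowPyrLK(imgA, imgB, A, None)
B2, stB, _ = cv2.calcOpticalFlowPyrLK(imgB, imgA, B, None)

# A corner keeps its label only if both trackings succeed and it returns near its origin.
dist = np.linalg.norm((A - B2).reshape(-1, 2), axis=1)
ok = (stF.flatten() == 1) & (stB.flatten() == 1) & (dist < 1.0)   # 1.0 px: assumed threshold
cornersC = {label: B[label, 0] for label in np.where(ok)[0]}      # prepared corner feature set
```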
Alternatively, referring to Fig. 4, which is another schematic diagram of the correction module 42 in Fig. 2, the correction module 42 may also include an extraction unit 424, a prediction unit 425 and a comparison unit 426:
The extraction unit 424 is configured to extract the corner feature set B2 of the current image; some corner features in B2 are already present in the tracked corner feature set B obtained by tracking into the current image;
The prediction unit 425 is configured to predict the corner feature set B' of the current image according to the corner features in the corner feature library;
The comparison unit 426 is configured to compare the corner features of B2, B and B'; corner features in B2 that appear in neither B nor B' are judged as new corner features, the confidence of each such corner feature is judged over a certain number of subsequent frames according to the new-corner rule, and the new corner features that pass the threshold are added to the corner feature library.
It should be noted that not all of the finally obtained new corner features are new relative to the corner feature library used to build the map: when the robot moves to a certain position and the image range currently acquired by the image acquisition module 1 spatially overlaps with an image acquired before, the new corner features obtained by the above comparison unit 426 may contain corner features that are already present in the corner feature library.
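A sketch of the comparison step under assumed rules: corners are matched by spatial proximity with an assumed 3-pixel radius, and a candidate is promoted to the corner feature library after persisting for an assumed number of frames; all names and thresholds are illustrative:

```python
import numpy as np

def find_new_corners(B2, B, B_pred, radius=3.0):
    """Return corners of B2 that appear neither in the tracked set B nor in the
    predicted set B' (spatial proximity test; the radius in pixels is an assumption)."""
    known = np.vstack([np.asarray(B).reshape(-1, 2), np.asarray(B_pred).reshape(-1, 2)])
    new = []
    for c in np.asarray(B2).reshape(-1, 2):
        if known.shape[0] == 0 or np.min(np.linalg.norm(known - c, axis=1)) > radius:
            new.append(c)
    return np.array(new)

def update_candidates(candidates, new_corners, library, min_frames=5, radius=3.0):
    """Assumed persistence rule: a candidate observed in min_frames frames is added
    to the corner feature library."""
    for c in new_corners:
        key = tuple(np.round(c / radius).astype(int))   # coarse spatial bucket
        candidates[key] = candidates.get(key, 0) + 1
        if candidates[key] == min_frames:
            library.append(tuple(c))
    return candidates, library
```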
Alternatively, the correction module 42 may also use the pyramid LK algorithm for tracking. Or, for corner features with large motion, the two successive frames may be processed with the corner tracking function after parameter adjustment, increasing the number of image pyramid layers so that corner features whose positions move a lot can also be judged; after the tracking of this pair of frames has been handled, subsequent images moving in a straight line are still calculated with the tracking function used for linear motion.
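A brief sketch of such a parameter adjustment, using more pyramid levels for the frame pair with large motion; the level counts and window size are assumptions:

```python
import cv2

imgA = cv2.imread("frame_A.png", cv2.IMREAD_GRAYSCALE)
imgB = cv2.imread("frame_B.png", cv2.IMREAD_GRAYSCALE)
cornersA = cv2.goodFeaturesToTrack(imgA, maxCorners=200, qualityLevel=0.01, minDistance=10)

# Tracking settings for roughly linear motion (assumed defaults).
ptsB, st, err = cv2.calcOpticalFlowPyrLK(imgA, imgB, cornersA, None,
                                         winSize=(21, 21), maxLevel=2)
# For a frame pair with large corner motion, increase the number of pyramid levels.
ptsB_big, st_big, _ = cv2.calcOpticalFlowPyrLK(imgA, imgB, cornersA, None,
                                               winSize=(21, 21), maxLevel=4)
```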
The storage management module 5 is configured to store the corner feature information obtained by the feature tracking module 4 and to build a map according to the corner feature information.
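A sketch of one possible in-memory layout for the stored corner feature information; the structure and field names are assumptions for illustration, not the disclosed implementation:

```python
from dataclasses import dataclass, field

@dataclass
class CornerFeature:
    label: int                 # persistent label used for matching across frames
    x: float                   # corner position (assumed image coordinates)
    y: float
    observations: int = 1      # number of frames in which the corner has been tracked

@dataclass
class CornerFeatureLibrary:
    features: dict = field(default_factory=dict)   # label -> CornerFeature

    def add(self, feature: CornerFeature) -> None:
        self.features[feature.label] = feature
```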
In the robot localization and map construction system based on ceiling image information provided by the embodiments of the present invention, the camera is oriented toward the ceiling, that is, the direction of motion of the robot is perpendicular to the optical axis of the camera, so the changes between consecutive acquired images are more apparent, which facilitates map construction. Further, when tracking image features, the conventional tracking result is corrected in order to obtain better tracking information, so the tracked corner features are more stable and accurate, the precision of map construction is higher, and the constructed map is therefore more accurate.
The above description is only the preferred embodiment of the present invention. It should be noted that those of ordinary skill in the art can make several improvements and modifications without departing from the technical principles of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.