CN104252706B - Method and system for detecting specific plane - Google Patents
- Publication number
- CN104252706B (application CN201310261784.9A)
- Authority
- CN
- China
- Prior art keywords
- width
- map
- inverse
- specific plane
- distance
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Landscapes
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a method and system for detecting a specific plane in stereoscopic vision. The method comprises the following steps: generating an inverse perspective mapping image according to at least one of a disparity image and a grayscale image; generating a real-width-distance image according to the disparity image; and detecting the specific plane based on matching of the inverse perspective mapping image and the real-width-distance image. According to the method of the invention, the specific plane is detected based on matching of the inverse perspective mapping image (IPM image), generated from the disparity image and/or the grayscale image, with the real-width-distance image (RWD image), so that the specific plane can be detected more accurately.
Description
Technical field
The present invention relates to a method and system for detecting a specific plane, and more particularly to techniques for detecting a specific plane, such as a road surface, in a stereo vision system.
Background technology
Nowadays, with the continuous deepening of image processing technology, people are no longer satisfied with two-dimensional images that only reflect a planar view of objects. Increasing attention is paid to stereoscopic vision, which can also convey the distance, relief and depth of objects, for example 3D/three-dimensional technologies, and many applications have been studied and developed.
The left and right eyes of a person observe an object with a parallax; this discovery established the theoretical basis of binocular stereo vision. In binocular stereo vision, two cameras (a so-called binocular camera), or a single camera that is moved or rotated, capture the same scene; corresponding image points are matched by various algorithms so that the disparity of each point can be computed and a disparity map generated, and the depth (distance) of each point is then recovered based on the principle of triangulation. The computation of the disparity of image points and the generation of the disparity map in binocular stereo vision are known in the art, and their detailed algorithms and processes are not described here.
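For reference, for a rectified binocular camera the triangulation mentioned above reduces to the standard relation Z = f·B/d, where Z is the recovered depth (distance) of the matched image point, f is the focal length in pixels, B is the baseline between the two cameras, and d is the disparity of the point (these symbols are conventional and are not defined in the patent itself).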
Stereoscopic vision has a wide range of applications, such as 3D movies, road detection based on 3D technology, pedestrian detection, and automatic driving. Detecting specific planes in stereoscopic vision has likewise become a focus of research.
For example, reliable 3D road-environment understanding is particularly important for vehicle safety assistance and autonomous driving, especially for urban roads and country roads, whose environments are much more complicated than highways. 3D driving-environment recognition mainly includes road surface detection, roadside guardrail detection, vanishing point detection, and pedestrian and vehicle detection. Among these functions, road surface detection is an extremely important part, and its performance has a great impact on the other functions. If a road plane, or another specific plane, can be detected more accurately and efficiently, the results can be applied more widely, for example to detect pedestrians, vehicles, buildings or other objects located on the road or on other specific planes.
The prior art has also proposed some solutions for road plane detection. For example, two related documents are listed below:
Patent document 1: U.S. Patent Application Publication No. US5706355A by RABOISSON et al., published on January 6, 1998, entitled "Method of analyzing sequences of road images, device for implementing it and its application to detecting obstacles". Patent document 1 establishes a road model by analyzing luminance information to extract contours and regions, analyzes color information combined with luminance to determine road surface points, and extracts the final road surface by combining luminance and color information. Performing road surface detection with luminance and color features in this way is easily affected by changes in weather, illumination, shadows and the like, resulting in an inaccurately detected road plane.
Patent document 2: Japanese Unexamined Patent Publication No. JP2009230709A by IWASE KOJI et al., published on October 8, 2009, entitled "Detection Apparatus for Vehicle Traveling Road Surface". The road surface detection method of patent document 2 obtains the disparity points of each row from the stereo camera output image, extracts the point with the lowest height for subsequent processing, and performs road surface detection based on height range information. It uses the height feature for road surface detection, selecting the lowest point in each row as a road surface point. In some scenes, however, the road surface points are not always the lowest points, for example on an overpass or on a road with ditches on both sides, resulting in an inaccurately detected road plane.
Summary of the invention
According to an aspect of the present invention, there is provided a method for detecting a specific plane in stereoscopic vision, including: generating an inverse perspective mapping (IPM) map according to at least one of a disparity map and a grayscale map; generating a real-width-distance (RWD) map according to the disparity map; and detecting the specific plane based on matching of the IPM map and the RWD map.
According to another aspect of the present invention, there is provided a system for detecting a specific plane in stereoscopic vision, including: an IPM map generating device configured to generate an IPM map according to at least one of a disparity map and a grayscale map; an RWD map generating device configured to generate an RWD map according to the disparity map; and a specific plane detecting device configured to detect the specific plane based on matching of the IPM map and the RWD map.
According to the embodiments of the present invention, the planar characteristic of, for example, the road plane can be exploited, and the road plane can be detected by matching the IPM map and the RWD map. Compared with microscopic features (for example, color, edges, texture and the like), the planarity of a road is a macroscopic feature; therefore, the embodiments of the present invention are more robust to environmental changes. At the same time, in the present disclosure the IPM map and the RWD map can be generated simultaneously in no particular order, so parallel processing is possible, and the embodiments of the present invention also have better real-time performance.
Description of the drawings
Fig. 1 shows a schematic flowchart of a method for detecting a specific plane in stereoscopic vision according to an embodiment of the present invention.
Fig. 2 schematically illustrates the generation of IPM maps based on a grayscale map or a disparity map.
Fig. 3 schematically illustrates the generation of an RWD map based on a disparity map.
Figs. 4A-4E schematically illustrate the process of matching the IPM map and the RWD map.
Fig. 5 schematically illustrates a comparison between the effect of the method for detecting a specific plane according to an embodiment of the present invention and the effect of a traditional method.
Fig. 6 shows a schematic block diagram of a system for detecting a specific plane in stereoscopic vision according to another embodiment of the present invention.
Specific embodiments
Reference will now be made in detail to the specific embodiments of the present invention, examples of which are illustrated in the accompanying drawings. While the invention will be described in conjunction with specific embodiments, it will be understood that it is not intended to limit the invention to the described embodiments. On the contrary, it is intended to cover the changes, modifications and equivalents included within the spirit and scope of the invention as defined by the appended claims. It should be noted that the method steps described herein can be implemented by any functional block or functional arrangement, and any functional block or functional arrangement can be implemented as a physical entity, a logical entity, or a combination of both. In order that those skilled in the art may better understand the present invention, the present invention is described in further detail below with reference to the accompanying drawings and the specific embodiments.
Fig. 1 shows a schematic flowchart of a method 100 for detecting a specific plane in stereoscopic vision according to an embodiment of the present invention.
The method 100 for detecting a specific plane in stereoscopic vision includes: step S101, generating an inverse perspective mapping map (IPM map) according to at least one of a disparity map and a grayscale map; step S102, generating a real-width-distance map (RWD map) according to the disparity map; and step S103, detecting the specific plane based on matching of the IPM map and the RWD map.
By detecting the specific plane based on matching of the IPM map generated from the disparity map and/or the grayscale map with the RWD map, the specific plane can be detected more accurately.
In one embodiment, the IPM map can be used to describe how objects in the disparity map are represented on an actual plane, related to the specific plane, in the world coordinate system, and the RWD map can be used to describe the relation of objects in the disparity map in terms of width and distance. Note that the "width" mentioned here is related to the real width: it can be the real width itself, or it can be proportional to the real width. In this disclosure, the RWD map is, by way of example, generated using the real width.
In one embodiment, the IPM map can be a plan view obtained by remapping the objects in the disparity map from the image coordinate system to the world coordinate system, transforming the v axis and the u axis of the world coordinate system to equal resolution, and taking the z coordinate of the world coordinate system as a particular value related to the specific plane (0 in this example). In general, the v axis of the world coordinate system represents distance (or depth), the u axis represents width, and the z axis represents height. In order to remove the perspective transformation effect of the camera imaging process, the geometric transformation known as inverse perspective mapping (IPM) is used to remap the grayscale map or the disparity map, so as to obtain a top view that is uniformly distributed in the horizontal and longitudinal directions. The generation of the IPM map is known and can be found, for example, in the paper "Image mosaic method based on inverse perspective projection transformation" by Yuan Qi et al., Microcomputer Information, 2010, No. 21, and in "Implementation of inverse perspective mapping algorithm for the development of an automatic lane tracking system" by Muad, Anuar Mikdad; Hussain, Aini; Samad, S.A.; Mustaffa, Mohd Marzuki; Majlis, Burhanuddin Yeop, TENCON 2004, 2004 IEEE Region 10 Conference, pages 207-210, Vol. 1. These papers are incorporated herein, and the concrete way of generating the IPM map is not repeated here.
Here, the IPM map is typically generated from the disparity map and the intrinsic and extrinsic parameters of the camera. Usually, the intrinsic parameters of the camera (for example, the baseline distance of the binocular camera, its optical characteristics, and the like) are obtained by offline calibration, and the extrinsic parameters of the camera (for example, the angle and position of the camera relative to the road surface) are determined according to the installation position of the camera on the vehicle.
Note that in common IPM map generation (for example, as described in the papers incorporated above), after the objects in the disparity map are remapped from the image coordinate system to the world coordinate system, it may be necessary to transform the v axis and the u axis of the world coordinate system to equal resolution, and then take the z coordinate of the world coordinate system as a particular value related to the specific plane (0 in this example) to obtain the IPM map. This is because when the objects in the disparity map are directly remapped from the image coordinate system to the world coordinate system, the actual distance represented by a unit pixel differs along the v axis of the world coordinate system. In the above paper, for example, because the actual distance represented by a unit pixel differs along the v axis, the ratio of unit pixels on the v axis to actual distance is obtained via points on the u axis (for example, a pair of parallel points on the u axis is chosen arbitrarily; according to the u-v axis relation (formula (7) in the above paper), their actual distance difference Δx along the distance direction in the world coordinate system is obtained; they are then assigned a corresponding pixel count N in the target image, giving the ratio N/Δx). With this ratio of unit pixels to actual distance, the remapped world coordinate system can be transformed so that the u and v axes are both of equal proportion relative to pixels, thereby better restoring the actual size of objects.
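To make the above concrete, the following is a minimal sketch of a classic flat-ground inverse perspective mapping, in the spirit of the incorporated Muad et al. paper rather than a reproduction of the patent's exact procedure; the pinhole/pitch camera model, the grid extent and the cell size are illustrative assumptions.

```python
import numpy as np

def ipm_from_image(img, f, cu, cv, cam_h, pitch,
                   x_range=(-10.0, 10.0), y_range=(5.0, 55.0), cell=0.05):
    """Flat-ground IPM sketch: sample the input image on a metric ground grid.

    img    : HxW grayscale (or disparity) image
    f      : focal length in pixels; cu, cv: principal point in pixels
    cam_h  : camera height above the assumed plane z = 0 (meters)
    pitch  : downward camera pitch (radians)
    Both output axes (width x, distance y) use the same resolution `cell`
    in meters, i.e. the equal-resolution top view described above.
    """
    xs = np.arange(x_range[0], x_range[1], cell)
    ys = np.arange(y_range[0], y_range[1], cell)
    ipm = np.zeros((len(ys), len(xs)), dtype=img.dtype)
    for iy, Y in enumerate(ys):
        # Ground point (X, Y, 0) seen from a camera at height cam_h, pitched down.
        Zc = Y * np.cos(pitch) + cam_h * np.sin(pitch)   # depth along optical axis
        Yc = cam_h * np.cos(pitch) - Y * np.sin(pitch)   # offset along image-down axis
        if Zc <= 0:
            continue
        v = int(round(cv + f * Yc / Zc))                  # image row for this distance
        if not (0 <= v < img.shape[0]):
            continue
        for ix, X in enumerate(xs):
            u = int(round(cu + f * X / Zc))               # image column for this width
            if 0 <= u < img.shape[1]:
                ipm[len(ys) - 1 - iy, ix] = img[v, u]     # far rows at the top
    return ipm
```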
In the above embodiments, the generated IPM map ensures that points on a plane related to the specific plane (such as the road surface), for example the plane z = 0 of the world coordinate system, are not deformed, while points not in that related plane may be deformed (it is assumed here that the plane z = 0 of the world coordinate system, i.e. the plane related to the specific plane, can approximate the specific plane such as the road surface). The plane related to the specific plane (such as the road surface) is not necessarily the specific plane itself, but it is desirable for this related plane to be as close as possible to the specific plane, such as the road surface, so that the specific plane can be detected more accurately.
Fig. 2 illustrates the results of generating IPM maps based on a grayscale map and a disparity map. Referring to Fig. 2, from the IPM map obtained from the disparity map on the right side of Fig. 2, it can be seen that points in, for example, the plane z = 0 of the world coordinate system (such as points of the lane lines on the road surface) are not deformed, while points not in the plane z = 0 may be deformed (see the vehicle above the road surface, the guardrail beside the road, and the like). From the IPM map obtained from the grayscale map on the left side of Fig. 2, it can be seen even more clearly that the shape features of the lane lines located on the road surface (neither above nor below it) are kept, while the shape features of other objects not located on the road surface (for example, above or below it) are all deformed.
Note that in practical applications the IPM map can be generated from either the grayscale map or the disparity map (refer to the IPM generation steps in the incorporated papers above). However, in order to make the matching with the following real-width-distance map (RWD map) more accurate, the same disparity map can be used when generating both the IPM map and the RWD map. Of course, the disclosure is not limited to this; based on the understanding of the IPM and RWD map generation principles by those skilled in the art, other image features and parameters can also be used to generate other maps similar to the IPM map and the RWD map, and such other maps are also included within the scope of the inverse perspective mapping map (or IPM map) and the width-distance map (or RWD map) mentioned in this disclosure.
In one embodiment, the real-width-distance map (RWD map) can be a plan view obtained by remapping the disparity map onto a two-dimensional field with width as one coordinate axis and distance as the other coordinate axis, wherein the two coordinate axes of the RWD map are each of equal resolution.
In the disparity map, the disparity value of each point represents depth information. Based on the pixel position and the depth information, the disparity map can be transformed to obtain the real-width-distance (RWD) map. The RWD map is also a kind of top view, in which the horizontal and vertical coordinates (X, width; Y, distance) each have equal resolution. For example, in the RWD map a car 2 meters wide at a distance of 10 meters has a width of 20 pixels, and another car 2 meters wide at a distance of 50 meters still has a width of 20 pixels.
In one embodiment, the IPM map and/or the RWD map can be obtained from the disparity map and the intrinsic and extrinsic parameters of the camera that captured the disparity map.
That is, from the disparity map and the intrinsic and extrinsic parameters of the camera, the real width of each object and its distance from the camera can be obtained, so that, using the real width and the distance as two orthogonal coordinate axes of the two-dimensional field, the objects can be remapped into the RWD map. Here, since the real width and the distance to the camera are of equal resolution for every pixel, the RWD map obtained by this remapping does not need the equal-resolution transformation required when generating the IPM map.
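As a sketch of the above (not taken verbatim from the patent), an RWD top view can be accumulated directly from the disparity map and the intrinsic parameters; the grid extent, the cell size and the occupancy-style accumulation below are illustrative assumptions.

```python
import numpy as np

def rwd_from_disparity(disp, f, cu, baseline,
                       x_range=(-10.0, 10.0), z_range=(0.0, 60.0), cell=0.05):
    """Real-width-distance (RWD) map sketch from a disparity map.

    disp     : HxW disparity map in pixels; 0 means "no measurement"
    f, cu    : focal length and principal-point column, in pixels
    baseline : stereo baseline in meters
    Both axes (real width X, real distance Z) use the same metric
    resolution `cell`, so no extra equal-resolution transform is needed.
    """
    nx = int((x_range[1] - x_range[0]) / cell)
    nz = int((z_range[1] - z_range[0]) / cell)
    rwd = np.zeros((nz, nx), dtype=np.uint16)

    vs, us = np.nonzero(disp > 0)
    zs = f * baseline / disp[vs, us]          # distance by triangulation
    xs = (us - cu) * zs / f                   # real lateral position (width)

    ix = ((xs - x_range[0]) / cell).astype(int)
    iz = ((zs - z_range[0]) / cell).astype(int)
    ok = (ix >= 0) & (ix < nx) & (iz >= 0) & (iz < nz)
    np.add.at(rwd, (nz - 1 - iz[ok], ix[ok]), 1)   # far points at the top row
    return rwd
```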
Fig. 3 schematically illustrates the generation of an RWD map based on a disparity map. From Fig. 3, it can be seen that the RWD map generated from the disparity map can show the shape features of the lane lines on the road surface, the vehicles, the guardrails on both sides of the road, and other objects (where these shape features undergo no excessive deformation).
From the above IPM and RWD map generation processes, it can be seen that an arbitrary point in the original disparity map is mapped into the IPM map according to the inverse perspective mapping, and at the same time is mapped into the RWD map according to its depth and width information. In the IPM map, the X (horizontal) dimension and the Y (vertical) dimension (the width dimension and the distance dimension) are each of equal resolution. At the same time, the two directions of the RWD map are also each of equal resolution. The horizontal and vertical coordinates of both maps can use an actual physical unit: meters (or another unit). Therefore, an arbitrary point in the road plane has approximately the same coordinates (X, width; Y, distance; unit: meters or another unit) in the IPM map and in the RWD map. At the same time, any point not in the road plane has different coordinates in the IPM map and the RWD map. Therefore, the method of the present embodiment can determine the points on the road surface and the points not on the road surface by matching the IPM map and the RWD map in the following steps, so as to detect the specific plane such as the road surface more accurately.
For example, the parts where the bottom of a vehicle contacts the road surface are also in the road plane, and they have approximately the same coordinates in the IPM map and in the RWD map; at the same time, the other points on the vehicle, that is, the points above the road plane, have different coordinate values in the IPM map and in the RWD map. Because the points at the rear of the vehicle, whether at the top or at the bottom, are all at the same distance from the camera, they have the same vertical coordinate in the RWD map (Y coordinate, depth, distance direction). In the IPM map, however, the points on the upper part of the vehicle, that is, the points above the road plane, are deformed when the IPM map is generated. Therefore, road surface points and non-road-surface points can be determined by matching the IPM map and the RWD map.
Therefore, after the IPM map and the RWD map are generated, the specific plane can be detected by examining the image features of the IPM map and the RWD map.
An example (and non-limiting) way of detecting the specific plane from the IPM map and the RWD map is given below:
Specifically, in one embodiment, the step S103, shown in Fig. 1, of detecting the specific plane based on matching of the IPM map and the RWD map can include: S1031 (not shown in the figure), for a pixel on at least one of the grayscale map and the disparity map, finding the two coordinates to which the pixel is remapped in both the IPM map and the RWD map; S1032 (not shown in the figure), comparing the distance between the two coordinates and, if the distance is less than a predetermined threshold, determining that the pixel matches on the IPM map and the RWD map, taking it as a point on the specific plane; and S1033 (not shown in the figure), obtaining the specific plane according to the points on the specific plane.
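A minimal sketch of steps S1031-S1033 follows, assuming two helper functions (hypothetical, not named in the patent) that return, for a source pixel, its metric coordinates on the IPM map and on the RWD map; the threshold value is likewise illustrative.

```python
import numpy as np

def plane_points_by_coordinate_match(pixels, to_ipm_coord, to_rwd_coord, thresh=0.3):
    """Keep the pixels whose IPM and RWD coordinates (nearly) agree.

    pixels       : iterable of (u, v) pixels from the disparity/grayscale map
    to_ipm_coord : (u, v) -> (x, y) in meters on the IPM map, or None (assumed helper)
    to_rwd_coord : (u, v) -> (x, y) in meters on the RWD map, or None (assumed helper)
    thresh       : maximum allowed coordinate deviation in meters (illustrative)
    The returned pixels are the candidate points of the specific plane.
    """
    plane_pts = []
    for (u, v) in pixels:
        p_ipm = to_ipm_coord(u, v)
        p_rwd = to_rwd_coord(u, v)
        if p_ipm is None or p_rwd is None:
            continue
        if np.hypot(p_ipm[0] - p_rwd[0], p_ipm[1] - p_rwd[1]) < thresh:
            plane_pts.append((u, v))    # the two top views agree here
    return plane_pts
```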
Specifically, the extrinsic parameters of the camera (that is, the parameters of the camera relative to the road surface) are calibrated offline, and during vehicle travel the road surface undulates, inclines and so on; that is, the previously calibrated extrinsic parameters of the camera may have changed. However, the previous extrinsic parameters are generally used when the IPM transformation is performed, so when the IPM map is matched with the RWD map, the coordinates of a road surface point in the generated IPM map may not be completely consistent with the coordinates of the same road surface point in the RWD map. Therefore, the matching of road surface points may need to be performed within a certain tolerance; that is, it is not necessarily required that the same road surface point, or another pixel, has identical coordinates or identical image features (for example, gray level, chroma and the like) in the IPM map and the RWD map, and a certain tolerance range can be allowed when matching road surface points. There are many known ways of matching pixels between two maps, such as the image matching algorithms used for moving images and various other image matching algorithms. Two matching approaches are listed below as examples and not limitations:
One matching approach is a method based on local information; the other is a method based on global information.
The matching approach based on local information can choose a rectangular or trapezoidal region (or a region of another shape) centered on the pixel to be matched as the scan window; that is, not only the center point but also the distribution of the other points in its rectangular or trapezoidal neighborhood is considered. When matching, a pixel in the RWD map can be taken as the reference (the IPM map can also be taken as the reference), and the matching scan window is applied around the corresponding position of that pixel in the IPM map (or, correspondingly, in the RWD map). If a pixel is found in this scan window of the IPM map whose, for example, gray value differs from the gray value of the pixel to be matched in the RWD map by less than a certain threshold, the two pixels can be considered matched and both can be determined to be road surface points; if no match is found, the point is determined to be a non-road-surface point.
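The local-information matching described above can be sketched as follows, with the RWD map as the reference; the window half-size and the gray-value threshold are illustrative assumptions.

```python
import numpy as np

def local_window_match(ipm, rwd, r, c, win=7, gray_thresh=10):
    """Scan a window around (r, c) in the IPM map for a value close to rwd[r, c].

    ipm, rwd : the two top views, assumed to share the same metric grid
    (r, c)   : reference cell in the RWD map
    Returns True if the cell is taken as a road-surface point, False otherwise.
    """
    h, w = ipm.shape
    ref = float(rwd[r, c])
    r0, r1 = max(0, r - win), min(h, r + win + 1)
    c0, c1 = max(0, c - win), min(w, c + win + 1)
    window = ipm[r0:r1, c0:c1].astype(float)
    return bool(np.any(np.abs(window - ref) <= gray_thresh))
```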
The matching approach based on global information first detects road surface marks, such as lane lines or road edges, in the IPM map and the RWD map respectively, and then matches the detected marks such as lane lines between the IPM map and the RWD map (for example, by known methods such as image matching). If they match, these marks are judged to be targets on the road surface; if they do not match, they are determined to be non-road-surface targets. Compared with the method based on local information, the matching approach based on global information is more robust.
Therefore, in another embodiment, the step S103, shown in Fig. 1, of detecting the specific plane based on matching of the IPM map and the RWD map can include: taking a first pixel in one of the IPM map and the RWD map as the reference, searching, in the other of the IPM map and the RWD map, within a predetermined range around the first pixel, for a second pixel whose feature differs from the feature of the first pixel by less than a predetermined threshold, as a pixel matching the first pixel and as a point on the specific plane; and obtaining the specific plane according to the points on the specific plane.
The detailed process of matching the IPM map and the RWD map is illustrated below with reference to Figs. 4A-4E.
In Fig. 4A, the example IPM map and RWD map generated from the example disparity map at the top of Fig. 4A are obtained (for intuitive and easy understanding, these maps are shown representatively with lines). From the IPM map of Fig. 4A, it can be seen that the shape features (for example, edges, sizes and the like) of objects on the specific plane such as the road surface (for example, the lane lines on the road surface and the bottom parts where the vehicle contacts the road surface) are kept, while the shape features of other objects not on the specific plane such as the road surface (for example, the parts of the vehicle other than the bottom contacting the road surface) are deformed. In the RWD map of Fig. 4A, on the other hand, the true shape features of each object (including the lane lines and the vehicle on the road surface) are essentially shown (because the RWD map is produced with real width and distance, it is consistent with the true shape features).
Note that because the IPM map and the RWD map are typically generated from the disparity map, which is itself generated from the left and right views captured by the binocular camera, they generally cannot restore the complete original shape of an object (only the side of the object facing the camera is seen). Therefore, the vehicles in the IPM map and the RWD map of Fig. 4A are not the actual original shape of the vehicle. Moreover, in the IPM map, since the parts of the vehicle other than the part contacting the ground are regarded as being farther away than the part contacting the ground, the distance represented by a unit pixel in those other parts is considered larger when the IPM map is generated, so those other parts of the vehicle appear bigger. In the RWD map, the width and distance only use the object information visible in the disparity map, so the parts behind the vehicle that cannot be seen from the camera's viewpoint are also not shown in the RWD map.
Then, in Fig. 4B, a first pixel in one of the IPM map and the RWD map can be taken as the reference, and in the other of the IPM map and the RWD map, within a predetermined range around the first pixel, a second pixel whose feature differs from the feature of the first pixel by less than a predetermined threshold is searched for, as a pixel matching the first pixel. For example, as shown in Fig. 4B, for a pixel at the bottom of the vehicle in the disparity map, with the corresponding pixel of the vehicle bottom in the IPM map as the reference, the corresponding pixel for that disparity-map pixel is searched for in, for example, the RWD map. If the distance between the coordinates of the corresponding point shown in the RWD map and of the corresponding point in the IPM map in Fig. 4B is less than a predetermined threshold (the distance lies within, for example, a 15 x 15 pixel range, or a range of another shape or size), the corresponding point in the RWD map can be taken as the pixel matching the corresponding point in the IPM map, thereby confirming that the point in the RWD map and the point in the IPM map are points located on the specific plane.
In Fig. 4C, similarly, for a pixel on a lane line in the disparity map, with the corresponding pixel on the lane line in the IPM map as the reference, the corresponding pixel for that disparity-map pixel is searched for in the RWD map. For example, if the distance between the coordinates of the corresponding point on the lane line shown in the RWD map of Fig. 4B and of the corresponding point in the IPM map is less than a predetermined threshold (the distance lies within, for example, a 15 x 15 pixel range, or a range of another shape or size), the point in the RWD map can be taken as the pixel matching the point in the IPM map, thereby confirming that the point in the RWD map and the point in the IPM map are points located on the specific plane.
For example, as shown in Fig. 4C, for a pixel on the vehicle body in the disparity map, with the corresponding point on the vehicle body in the IPM map as the reference, the corresponding pixel for that disparity-map pixel is searched for in the RWD map. However, if the distance between the coordinates of the corresponding point on the vehicle body shown in the RWD map of Fig. 4C and of the corresponding point in the IPM map is larger than the predetermined threshold (the distance lies outside, for example, a 15 x 15 pixel range, or a range of another shape or size), it can be considered that the pixel in the disparity map does not match between the RWD map and the IPM map, thereby confirming that the point in the RWD map and the point in the IPM map are not points located on the specific plane. See the top of Fig. 4C: a point on the body of the vehicle (not the part contacting the road surface) is mapped into the IPM map and into the RWD map, and the two corresponding pixels are obtained at different coordinate positions. Therefore, such a pixel that does not match between the IPM map and the RWD map can be considered not to lie on the specific plane.
Therefore, through the above example process, the matching process can be performed repeatedly on all pixels in the IPM map and the RWD map, eventually finding each pixel that matches in both maps as a point on the specific plane. Then, the specific plane is obtained according to the found points on the specific plane. There can be many known ways of obtaining the specific plane from the found points on it, such as plane fitting or averaging the points, which are neither limited nor repeated here.
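As one example of the "plane fitting" option mentioned above (the patent does not prescribe a particular fitting method), a least-squares fit of z = a*x + b*y + c to the matched points is a minimal sketch:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c through the matched road points.

    points : Nx3 array-like of (x, y, z) world coordinates of the pixels
             judged to lie on the specific plane.
    Returns the coefficients (a, b, c) of the fitted plane.
    """
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return coeffs
```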
The bottom of Fig. 4E shows, for one experiment, the lane lines of the road surface constituted by the matched pixels obtained after the above matching process according to the present embodiment; the top of Fig. 4E shows the road surface obtained by plane fitting of the matched pixels after the above matching process according to the present embodiment, together with the result of vehicle detection. It can be seen from Fig. 4E that with the way of detecting the specific plane of the present embodiment, the specific plane such as the road surface can be detected more accurately, and vehicles, pedestrians and the like on the road surface can be detected more accurately based on this more accurate road surface (the accuracy of the specific plane detection compared with the traditional method will be further illustrated below with reference to Fig. 5).
In one embodiment, the method 100 for detecting the specific plane can also include: S104 (not shown in the figure), correcting the extrinsic parameters of the camera that captured the disparity map according to the matching result of the IPM map and the RWD map.
In one embodiment, the step S104 of correcting the extrinsic parameters of the camera that captured the stereo images according to the matching result of the IPM map and the RWD map can include: obtaining, according to the matching result of the IPM map and the RWD map, the coordinate deviation of the matched pixels between the IPM map and the RWD map; and correcting, according to the coordinate deviation, the extrinsic parameters of the camera that captured the disparity map.
Specifically, as described above, since the extrinsic parameters of the camera (that is, the parameters of the camera relative to the road surface) are calibrated offline, and during vehicle travel the road surface undulates, inclines and so on, the previously calibrated extrinsic parameters of the camera may have changed. Therefore, after the corresponding road surface points of the IPM map and the RWD map are found by the matching algorithm, the extrinsic parameters of the camera can be corrected, with the RWD map as the reference, using the deviation of the coordinates of the matched road surface points. For example, knowing the coordinate deviation, the extrinsic parameters such as the angle and position of the camera relative to the road surface are adjusted to compensate for that deviation, so that when the IPM transformation is performed on the next frame, the corrected extrinsic parameters are used. In this way, a more accurate IPM map can be obtained, and the specific plane can be detected more accurately. Moreover, the method for detecting the specific plane according to the present embodiment can update itself in real time, improving the accuracy of specific plane detection.
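The patent only states that the coordinate deviation of matched road points, with the RWD map as the reference, is used to compensate the extrinsic parameters for the next frame; the sketch below is one assumed realization that corrects only the pitch angle using a flat-ground small-angle relation, and both the relation and the damping factor are assumptions.

```python
def correct_pitch_from_matches(pitch, cam_h, matched_pairs, damping=0.5):
    """Estimate a pitch correction from IPM/RWD coordinate deviations.

    pitch         : current extrinsic pitch, measured downward from horizontal (radians)
    cam_h         : camera height above the road (meters)
    matched_pairs : list of ((x_ipm, y_ipm), (x_rwd, y_rwd)) for matched road points,
                    with the RWD coordinates taken as the reference
    damping       : illustrative factor to avoid over-correcting on a single frame
    Uses the flat-ground small-angle approximation dy ~ -(y^2 / cam_h) * d_pitch;
    flip the sign if pitch is defined with the opposite convention.
    """
    estimates = []
    for (x_ipm, y_ipm), (x_rwd, y_rwd) in matched_pairs:
        if y_rwd > 1.0:                                   # skip near-range noise
            dy = y_ipm - y_rwd                            # IPM drift along distance
            estimates.append(-cam_h * dy / (y_rwd ** 2))  # implied pitch error
    if not estimates:
        return pitch
    return pitch + damping * sum(estimates) / len(estimates)
```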
Fig. 5 schematically illustrates a comparison between the effect of the method for detecting the specific plane according to an embodiment of the present invention and the effect of a traditional method.
The top of Fig. 5 shows the example grayscale map and/or disparity map used for this comparison. Then, of the six small figures below the top of Fig. 5, the three figures on the left are the detection results of a traditional road surface detection method based on color and edges, and the three figures on the right are the detection results of the method for detecting the specific plane proposed according to an embodiment of the present invention. It can be seen that the traditional method mistakenly detects the curb stones as road surface, while the method according to an embodiment of the present invention accurately detects the road surface points on the road. It can also be seen that, because the traditional method mistakenly detects the curb stones as road surface, its lane line detection is also affected and the left lane line is mis-detected, whereas the method according to an embodiment of the present invention accurately detects the lane lines of the road surface. The experimental results show the accuracy and effectiveness of the method according to an embodiment of the present invention.
Fig. 6 shows a schematic block diagram of a system 600 for detecting a specific plane in stereoscopic vision according to another embodiment of the present invention.
The system 600 for detecting a specific plane in stereoscopic vision includes: an IPM map generating device 601 configured to generate an IPM map according to at least one of a disparity map and a grayscale map; an RWD map generating device 602 configured to generate an RWD map according to the disparity map; and a specific plane detecting device 603 configured to detect the specific plane based on matching of the IPM map and the RWD map.
By detecting the specific plane based on matching of the IPM map generated from the disparity map and/or the grayscale map with the RWD map, the specific plane can be detected more accurately.
In one embodiment, the IPM map can be used to describe how objects in the disparity map are represented on an actual plane, related to the specific plane, in the world coordinate system, and the RWD map is used to describe the relation of objects in the disparity map in terms of width and distance.
In one embodiment, the IPM map can be a plan view obtained by remapping the objects in the disparity map from the image coordinate system to the world coordinate system, transforming the v axis and the u axis of the world coordinate system to equal resolution, and taking the z coordinate of the world coordinate system as a particular value related to the specific plane.
In one embodiment, the RWD map can be a plan view obtained by remapping the disparity map onto a two-dimensional field with width as one coordinate axis and distance as the other coordinate axis, wherein the two coordinate axes of the RWD map are each of equal resolution.
In one embodiment, the IPM map and/or the RWD map can be obtained from the disparity map and the intrinsic and extrinsic parameters of the camera that captured the disparity map.
In one embodiment, the specific plane detecting device 603 can include: a device 6031 (not shown in the figure) configured, for a pixel on at least one of the grayscale map and the disparity map, to find the two coordinates to which the pixel is remapped in both the IPM map and the RWD map; a device 6032 (not shown in the figure) configured to compare the distance between the two coordinates and, if the distance is less than a predetermined threshold, determine that the pixel matches on the IPM map and the RWD map, as a point on the specific plane; and a device 6033 (not shown in the figure) configured to obtain the specific plane according to the points on the specific plane.
In one embodiment, the specific plane detecting device 603 can include: a device 6034 (not shown in the figure) configured, taking a first pixel in one of the IPM map and the RWD map as the reference, to search, in the other of the IPM map and the RWD map, within a predetermined range around the first pixel, for a second pixel whose feature differs from the feature of the first pixel by less than a predetermined threshold, as a pixel matching the first pixel and as a point on the specific plane; and a device 6035 (not shown in the figure) configured to obtain the specific plane according to the points on the specific plane.
In one embodiment, the system 600 can also include: a device 604 (not shown in the figure) configured to correct the extrinsic parameters of the camera that captured the disparity map according to the matching result of the IPM map and the RWD map.
In one embodiment, the device 604 includes: a device 6041 (not shown in the figure) configured to obtain, according to the matching result of the IPM map and the RWD map, the coordinate deviation of the matched pixels between the IPM map and the RWD map; and a device 6042 (not shown in the figure) configured to correct, according to the coordinate deviation, the extrinsic parameters of the camera that captured the disparity map.
In this way, when the IPM transformation is performed on the next frame, the corrected extrinsic parameters are used. Thus, a more accurate IPM map can be obtained, and the specific plane can be detected more accurately. Moreover, the method for detecting the specific plane according to the present embodiment can update itself in real time, improving the accuracy of specific plane detection.
The block diagrams of devices, apparatuses, equipment and systems involved in this disclosure are only illustrative examples and are not intended to require or imply that connections, arrangements and configurations must be made in the manner shown in the block diagrams. As those skilled in the art will recognize, these devices, apparatuses, equipment and systems can be connected, arranged and configured in any manner. Words such as "including", "comprising" and "having" are open-ended terms that mean "including but not limited to" and can be used interchangeably with it. The words "or" and "and" as used herein refer to "and/or" and can be used interchangeably with it, unless the context clearly indicates otherwise. The word "such as" as used herein refers to the phrase "such as but not limited to" and can be used interchangeably with it.
The step flowcharts in this disclosure and the above method descriptions are only illustrative examples and are not intended to require or imply that the steps of each embodiment must be carried out in the order given. As those skilled in the art will recognize, the steps in the above embodiments can be carried out in any order. Words such as "thereafter", "then" and "next" are not intended to limit the order of the steps; these words are only used to guide the reader through the description of these methods. Furthermore, any reference to an element in the singular, for example using the articles "a", "an" or "the", is not to be construed as limiting the element to the singular.
The above description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these aspects will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other aspects without departing from the scope of the invention. Therefore, the present invention is not intended to be limited to the aspects shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein. The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to restrict the embodiments of the invention to the forms disclosed herein. Although multiple example aspects and embodiments have been discussed above, those skilled in the art will recognize certain variations, modifications, changes, additions and sub-combinations thereof.
Each operation of the processes described above can be carried out by any appropriate means capable of carrying out the corresponding function. The means can include various hardware and/or software components and/or modules, including but not limited to a circuit, an application-specific integrated circuit (ASIC) or a processor. A general-purpose processor, a digital signal processor (DSP), an ASIC, a field programmable gate array (FPGA) or other programmable logic device (PLD), discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to carry out the functions described herein can be used to implement or carry out each of the illustrated logical blocks, modules and circuits. A general-purpose processor can be a microprocessor, but in the alternative the processor can be any commercially available processor, controller, microcontroller or state machine. A processor can also be implemented as a combination of computing devices, for example a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors cooperating with a DSP core, or any other such configuration.
The steps of a method or algorithm described in connection with this disclosure can be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module can reside in any form of tangible storage medium. Some examples of storage media that can be used include random access memory (RAM), read-only memory (ROM), flash memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM and the like. A storage medium can be coupled to a processor so that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium can be integral with the processor. A software module can be a single instruction or many instructions, and can be distributed over several different code segments, among different programs and across multiple storage media.
The methods disclosed herein include one or more actions for realizing the described method. The methods and/or actions can be interchanged with one another without departing from the scope of the claims. In other words, unless a particular order of actions is specified, the order of the concrete actions can be modified and/or actions can be used without departing from the scope of the claims.
The described functions can be realized by hardware, software, firmware or any combination thereof. If implemented in software, the functions can be stored as one or more instructions on a tangible computer-readable medium. A storage medium can be any available tangible medium that can be accessed by a computer. By way of example and not limitation, such a computer-readable medium can include RAM, ROM, EEPROM, CD-ROM or other optical disc storage, magnetic disk storage or other magnetic storage devices, or any other tangible medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. As used herein, disk and disc include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically and discs reproduce data optically with lasers.
Therefore, a computer program product can carry out the operations given herein. For example, such a computer program product can be a computer-readable tangible medium having instructions tangibly stored (and/or encoded) thereon, the instructions being executable by one or more processors to carry out the operations described herein. The computer program product can include packaging material.
Software or instructions can also be transmitted over a transmission medium. For example, software can be transmitted from a website, server or other remote source using a transmission medium such as a coaxial cable, optical fiber cable, twisted pair, digital subscriber line (DSL), or a wireless technology such as infrared, radio or microwave.
Furthermore, modules and/or other appropriate means for carrying out the methods and techniques described herein can be downloaded by a user terminal and/or base station and/or obtained in other appropriate ways. For example, such a device can be coupled to a server to facilitate the transfer of means for carrying out the methods described herein. Alternatively, the various methods described herein can be provided via storage components (for example physical storage media such as RAM, ROM, CD or floppy disk), so that a user terminal and/or base station can obtain the various methods when coupled to the device or when the storage component is provided to the device. Furthermore, any other appropriate technique for providing the methods and techniques described herein to a device can be used.
Other examples and implementations are within the scope and spirit of this disclosure and the appended claims. For example, due to the nature of software, the functions described above can be realized using software executed by a processor, hardware, firmware, hard wiring, or any combination of these. Features that realize functions can also be physically located at various positions, including being distributed so that parts of a function are realized at different physical locations. Also, as used herein, including as used in the claims, the "or" used in an enumeration of items beginning with "at least one of" indicates a disjunctive enumeration, so that, for example, an enumeration of "at least one of A, B or C" means A or B or C, or AB or AC or BC, or ABC (that is, A and B and C). Furthermore, the wording "example" does not mean that the described example is preferred or better than other examples.
Various changes, substitutions and alterations can be made to the techniques described herein without departing from the technology taught as defined by the appended claims. Furthermore, the scope of the disclosure and the claims is not limited to the particular aspects of the processes, machines, manufacture, compositions of matter, means, methods and actions described above. Processes, machines, manufacture, compositions of matter, means, methods or actions, presently existing or later to be developed, that carry out substantially the same function or realize substantially the same result as the corresponding aspects described herein can be utilized. Accordingly, the appended claims include within their scope such processes, machines, manufacture, compositions of matter, means, methods or actions.
Claims (9)
1. A method of detecting a specific plane in stereoscopic vision, comprising:
generating an inverse perspective mapping map according to at least one of a disparity map and a grayscale map;
generating a real-width-distance map according to the disparity map; and
detecting the specific plane based on matching of the inverse perspective mapping map and the real-width-distance map,
wherein the inverse perspective mapping map is used to describe how objects in the disparity map are represented on an actual plane, related to the specific plane, in the world coordinate system, and the real-width-distance map is used to describe the relation of the objects in the disparity map in terms of width and distance.
2. The method according to claim 1, wherein the inverse perspective mapping map is a plan view obtained by remapping the objects in the disparity map from the image coordinate system to the world coordinate system, transforming the depth axis and the width axis of the world coordinate system to equal resolution, and taking the height-axis coordinate of the world coordinate system as a particular value related to the specific plane.
3. The method according to claim 1, wherein the real-width-distance map is a plan view obtained by remapping the disparity map onto a two-dimensional field with width as one coordinate axis and distance as the other coordinate axis, wherein the two coordinate axes of the real-width-distance map are each of equal resolution.
4. The method according to claim 1, wherein the inverse perspective mapping map and/or the real-width-distance map are obtained from the disparity map and the intrinsic and extrinsic parameters of the camera that captured the disparity map.
5. The method according to claim 1, wherein the step of detecting the specific plane based on matching of the inverse perspective mapping map and the real-width-distance map comprises:
for a pixel on at least one of the grayscale map and the disparity map, finding the two coordinates to which the pixel is remapped in both the inverse perspective mapping map and the real-width-distance map;
comparing the distance between the two coordinates, and if the distance is less than a predetermined threshold, determining that the pixel matches on the inverse perspective mapping map and the real-width-distance map, as a point on the specific plane; and
obtaining the specific plane according to the points on the specific plane.
6. The method according to claim 1, wherein the step of detecting the specific plane based on matching of the inverse perspective mapping map and the real-width-distance map comprises:
taking a first pixel in one of the inverse perspective mapping map and the real-width-distance map as the reference, searching, in the other of the inverse perspective mapping map and the real-width-distance map, within a predetermined range around the first pixel, for a second pixel whose feature differs from the feature of the first pixel by less than a predetermined threshold, as a pixel matching the first pixel and as a point on the specific plane; and
obtaining the specific plane according to the points on the specific plane.
7. The method according to claim 1, further comprising:
correcting the extrinsic parameters of the camera that captured the disparity map according to the matching result of the inverse perspective mapping map and the real-width-distance map.
8. The method according to claim 7, wherein the step of correcting the extrinsic parameters of the camera that captured the disparity map according to the matching result of the inverse perspective mapping map and the real-width-distance map comprises:
obtaining, according to the matching result of the inverse perspective mapping map and the real-width-distance map, the coordinate deviation of the matched pixels between the inverse perspective mapping map and the real-width-distance map; and
correcting, according to the coordinate deviation, the extrinsic parameters of the camera that captured the disparity map.
9. A system for detecting a specific plane in stereoscopic vision, comprising:
inverse perspective mapping image generating means, configured to generate an inverse perspective mapping image according to at least one of a disparity map and a gray-scale image;
real width distance map generating means, configured to generate a real width distance map according to the disparity map; and
specific plane detecting means, configured to detect the specific plane based on the matching of the inverse perspective mapping image and the real width distance map,
wherein the inverse perspective mapping image is used to represent an object in the disparity map on an actual plane, related to the specific plane, in the world coordinate system, and the real width distance map is used to describe the relationship of an object in the disparity map in terms of width and distance.
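Claim 9 restates the method as three cooperating components: an IPM map generator, a real width distance map generator, and a plane detector that matches the two. A minimal structural sketch follows, with purely illustrative class and method names.

```python
class PlaneDetectionSystem:
    """Composition of the three devices named in claim 9 (illustrative only)."""

    def __init__(self, ipm_generator, rwd_generator, plane_detector):
        self.ipm_generator = ipm_generator    # builds the inverse perspective mapping image
        self.rwd_generator = rwd_generator    # builds the real width distance map
        self.plane_detector = plane_detector  # matches the two maps

    def detect(self, disparity_map, gray_image=None):
        # Generate the IPM map from the disparity map and/or the gray-scale image.
        ipm_map = self.ipm_generator.generate(disparity_map, gray_image)
        # Generate the real width distance map from the disparity map.
        rwd_map = self.rwd_generator.generate(disparity_map)
        # Detect the specific plane by matching the two maps.
        return self.plane_detector.detect(ipm_map, rwd_map)
```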
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310261784.9A CN104252706B (en) | 2013-06-27 | 2013-06-27 | Method and system for detecting specific plane |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104252706A CN104252706A (en) | 2014-12-31 |
CN104252706B true CN104252706B (en) | 2017-04-12 |
Family
ID=52187574
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310261784.9A Expired - Fee Related CN104252706B (en) | 2013-06-27 | 2013-06-27 | Method and system for detecting specific plane |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104252706B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102710789B1 (en) * | 2016-12-09 | 2024-09-26 | 현대자동차주식회사 | An apparatus and method for providing visualization information of a rear vehicle |
CN107248137B (en) * | 2017-04-27 | 2021-01-15 | 努比亚技术有限公司 | Method for realizing image processing and mobile terminal |
CN109215044B (en) * | 2017-06-30 | 2020-12-15 | 京东方科技集团股份有限公司 | Image processing method and system, storage medium, and mobile system |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2012038485A1 (en) * | 2010-09-22 | 2012-03-29 | Henesis S.R.L. | Pantograph monitoring system and method |
CN103177236A (en) * | 2011-12-22 | 2013-06-26 | 株式会社理光 | Method and device for detecting road regions and method and device for detecting separation lines |
CN102722705A (en) * | 2012-06-12 | 2012-10-10 | 武汉大学 | Method for detecting multi-lane line on basis of random sample consensus (RANSAC) algorithm |
CN102881016A (en) * | 2012-09-19 | 2013-01-16 | 中科院微电子研究所昆山分所 | Vehicle 360-degree surrounding reconstruction method based on internet of vehicles |
Non-Patent Citations (7)
Title |
---|
An Integrated Obstacle Detection Framework for Intelligent Cruise Control on Motorways; Stefan Bohrer et al.; Intelligent Vehicles '95 Symposium, Proceedings of the IEEE; 1995-09-26; 276-281 *
GOLD: A Parallel Real-Time Stereo Vision System for Generic Obstacle and Lane Detection; Massimo Bertozzi et al.; IEEE Transactions on Image Processing; 1998-01-31; Vol. 7, No. 1; 62-81 *
Implementation of Inverse Perspective Mapping Algorithm for the Development of an Automatic Lane Tracking System; Anuar Mikdad Muad et al.; TENCON 2004, 2004 IEEE Region 10 Conference; 2004-11-24; 207-210 *
Stereo inverse perspective mapping: theory and applications; Massimo Bertozzi et al.; Image and Vision Computing; 1998-06-30; Vol. 16, No. 8; 585-590 *
Stereo Vision-Based Obstacle Detection Using Dense Disparity Map; Chung-Hee Lee; International Conference on Graphic and Image Processing (ICGIP 2011); 2011-10-01; 170-177 *
A Survey of Machine-Vision-Based Methods for Recognizing Road Boundaries and Lane Markings Ahead of Intelligent Vehicles; 余天洪 et al.; 公路交通科技 (Journal of Highway and Transportation Research and Development); 2006-01-30; Vol. 23, No. 1; 139-158 *
A Survey of Obstacle Detection Research Methods for Intelligent Vehicles; 王荣本 et al.; 公路交通科技 (Journal of Highway and Transportation Research and Development); 2007-11-15; Vol. 24, No. 11; 110-124 *
Also Published As
Publication number | Publication date |
---|---|
CN104252706A (en) | 2014-12-31 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| | C06 | Publication | |
| | PB01 | Publication | |
| | C10 | Entry into substantive examination | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |
| | CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20170412 |