US20060013438A1 - Obstacle detection apparatus and a method therefor - Google Patents
Obstacle detection apparatus and a method therefor Download PDFInfo
- Publication number
- US20060013438A1 (application US 11/178,274, also referenced as US 17827405 A)
- Authority
- US
- United States
- Prior art keywords
- image
- point
- region
- obstacle
- road surface
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W40/00—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
- B60W40/02—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to ambient conditions
- B60W40/04—Traffic conditions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/74—Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
Definitions
- the present invention relates to an obstacle detection apparatus, particularly to an obstacle detection apparatus for detecting an obstacle on a road around a vehicle, such as a preceding vehicle, a parked vehicle or a pedestrian.
- a system which uses video cameras is capable of detecting not only obstacles but also lane markings on the road; this is one advantage of camera-based systems. Another big advantage is that a comparatively low-cost system can be realized by using general-purpose devices such as cameras.
- the stereo vision is based on the principle of triangulation.
- the three-dimensional position of an object can be obtained if the correspondence of the projected images of the object on the left and the right cameras is provided.
- the points P and P′ are not a set of correct corresponding points. In this case, the partial images do not match well.
- This technique is called a plane projection stereo method and disclosed in Japanese Patent Laid-Open No. 2001-76128, for example.
- the plane projection stereo method has the advantages that calibration is easy and a corresponding point search is unnecessary. Although this technique can separate the road area from obstacle areas, it has the problem that it cannot determine their precise position and distance.
- An aspect of the present invention provides an apparatus for detecting an object on a road surface comprising: a stereo set of video cameras mounted on a vehicle to produce right and left images of a road surface; an image storage unit configured to store the right and left images; a parameter computation unit configured to compute a parameter representing a road planarity constraint based on the images stored in the image storage unit; a corresponding point computation unit configured to compute the correspondence between a first point on one image of the right and left images on the road surface and a second point on the other image of the right and left images, which corresponds to the first point, based on the parameter; an image transformation unit configured to produce a transformed image from the one image using the correspondence; and a detector to detect an object having a dimension larger than a given value in a substantially vertical direction with respect to the road surface, using the correspondence and the transformed image.
- Another aspect of the present invention provides a method of detecting an object on a road surface comprising: acquiring right and left images by a set of stereo video cameras mounted on a vehicle; storing the right and left images in a storage unit; obtaining a parameter representing a road planarity constraint based on the right and left images stored in the storage unit; computing the correspondence between a first point set on one image of the right and left images on the road surface and a second point on the other image of the right and left images, which corresponds to the first point, based on the parameter; generating a transformed image from the one image using the correspondence; and detecting as an obstacle an object having a height larger than a given value with respect to the road surface, using the correspondence and the transformed image.
- FIG. 1 shows a perspective view of a car mounted with two video cameras and an obstacle detection apparatus according to a first embodiment of the present invention.
- FIG. 2 is a diagram showing an image captured with a left side video camera.
- FIG. 3 is a diagram showing an image captured with a right side video camera.
- FIG. 4 is a diagram showing a transformed image of the right image.
- FIG. 5 is a block circuit diagram of the obstacle detection apparatus of the first embodiment of the present invention.
- FIG. 6 is a diagram showing a search region to be searched with the obstacle detection apparatus.
- FIG. 7 is a diagram for explaining obstacle detection according to the first embodiment of the present invention.
- FIG. 8 is a flowchart for explaining obstacle detection according to the first embodiment of the present invention.
- FIG. 9 is a diagram representing correspondence relation of a grounding position of an obstacle with a road surface and right and left images.
- FIG. 10 is a diagram representing position relation of a road surface and an obstacle to right and left cameras.
- FIG. 11 is a block diagram of an obstacle detection apparatus of a second embodiment of the present invention.
- FIG. 12 is a diagram for explaining match between a target image and a reference image.
- FIG. 13 is a diagram for explaining a road surface transformed image and an obstacle surface transformed image.
- FIG. 14 is a diagram for explaining relation between a road surface, an obstacle and a boundary line of the road surface and the obstacle.
- FIG. 15 is a diagram for explaining segmentation of an image into partial images and matching result of a partial image.
- FIG. 16 is a flowchart of boundary line function optimization process.
- FIG. 17 is a flowchart of a subroutine 1 in the grounding line function optimization process.
- FIG. 18 is a diagram for explaining occlusion due to an obstacle.
- FIG. 19 is a block diagram of a processor to execute the present invention by software.
- An obstacle detection apparatus of the first embodiment of the present invention uses left and right video cameras 11 L and 11 R mounted on a car at right and left front positions thereof as shown in FIG. 1 . These video cameras 11 L and 11 R are modeled according to the following assumptions:
- An area comparatively far from the cameras is set to an object area.
- the optical axes of the right and left cameras are approximately parallel with each other and toward an approximately horizontal direction, and the vertical axis of an imaging surface is toward an approximately vertical direction.
- Projection points (ul, vl) (ur, vr) of points existing on the road surface to the right and left images are associated with each other by the affine transformation based on the equation (1′) as shown by the following equation.
- $\begin{pmatrix} u_r \\ v_r \end{pmatrix} = \begin{pmatrix} a_t & b_t \\ c_t & d_t \end{pmatrix} \begin{pmatrix} u_l \\ v_l \end{pmatrix} + \begin{pmatrix} e_t \\ f_t \end{pmatrix}$ (1″)
- the projection points of a point existing on the road surface to the right and left images are associated with each other by affine transformation similarly.
- the parameters of this affine transformation can be computed using the correspondence of at least four feature points on the road surface.
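To make the parameter computation concrete, here is a minimal sketch (not taken from the patent; the function name and parameter ordering are assumptions) that estimates the six affine parameters from four or more road-surface correspondences by linear least squares:

```python
import numpy as np

def estimate_affine(left_pts, right_pts):
    """Estimate the six affine parameters (a, b, c, d, e, f) mapping
    left-image road points (ul, vl) to right-image points (ur, vr):
        ur = a*ul + b*vl + e,   vr = c*ul + d*vl + f
    Requires at least four correspondences; solved by least squares."""
    left_pts = np.asarray(left_pts, dtype=float)
    right_pts = np.asarray(right_pts, dtype=float)
    n = len(left_pts)
    # Build the 2n x 6 design matrix: one pair of rows per correspondence.
    A = np.zeros((2 * n, 6))
    A[0::2, 0:2] = left_pts      # (ul, vl) -> coefficients of a, b
    A[0::2, 4] = 1.0             # coefficient of e
    A[1::2, 2:4] = left_pts      # (ul, vl) -> coefficients of c, d
    A[1::2, 5] = 1.0             # coefficient of f
    b = right_pts.reshape(-1)    # interleaved (ur0, vr0, ur1, vr1, ...)
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params                # (a, b, c, d, e, f)
```

With exactly four points the system is already overdetermined (8 equations, 6 unknowns), so least squares also absorbs small feature-localization noise.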
- the hypothesis test is performed based on assumption that a point of interest is a road area.
- this method is not very reliable for several reasons. Firstly, a mirrored image of an obstacle in a reflective surface such as wet road surface in the rain seemingly has a negative height thus fails the hypothesis test. Secondly, specular reflection sometimes causes significant difference between the left and the right images which affects the accuracy of the hypothesis test.
- projection points in the right and left images on the road surface correspond to each other according to a relational expression such as the equation (3). Further, it is determined whether there is an obstacle having a height more than a given height at a point on the road by assuming that obstacles stand nearly perpendicular to the road surface. An example of this determination method will be described hereinafter.
- FIG. 2 shows an example of the left image captured by the left video camera
- FIG. 3 shows an example of the right image captured by the right video camera
- FIG. 4 shows a transformed image obtained by subjecting the image of FIG. 2 to affine transformation.
- a point P 1 of FIG. 2 is assumed to be included in a road region
- a corresponding point of FIG. 3 is set to Pr.
- an area Al including the point P1 and spreading upward from the point P1 as shown in FIG. 2 is considered; if this area Al is a road area, it should match the area Ar2 of FIG. 4.
- if the area Al is an obstacle grounded at the point P1, it should match the area Ar1 of FIG. 3. Accordingly, it is possible to determine whether the point P1 belongs to a road area or an obstacle by comparing the region Al with both regions Ar1 and Ar2.
- the present embodiment is based on the above determination method, and identifies an obstacle on a road surface and a road area by using two video cameras mounted on a car as shown in FIG. 1 .
- it is assumed to detect an obstacle existing on a road plane such as a pedestrian, a preceding car and a parked car, using two right and left video cameras 11 L and 11 R mounted on a car as shown in FIG. 1 .
- FIG. 5 shows a schematic configuration of an obstacle detection apparatus of the present embodiment, which comprises an image input unit 12 , an image storage unit 13 , a parameter computation unit 14 , a corresponding point computation unit 15 , an image transformation unit 16 and a detection unit 17 .
- This obstacle detection apparatus computes a relational equation (referred to as a road planarity constraint) established between the projection positions of a road surface point on the right and left images, to identify an obstacle existing on the road surface and a road area.
- the image storage unit 13 stores images input by the right and left video cameras 11 L and 11 R of the image input unit in an image memory.
- the parameter computation unit 14 computes the parameter of the road planarity constraint on the basis of two images captured by the right and left video cameras 11 L and 11 R, respectively, and stored in the image storage unit 13 , that is, the images shown in FIGS. 2 and 3 .
- a concrete computation of the parameter is done as follows.
- the parameter computation unit 14 computes a road planarity constraint of the road surface with a vanishing point and two lane marking lines obtained by the feature extractor 3 while the car is stationary. Suppose a point (X, Y, Z) in a three-dimensional space is projected to a point (u, v).
- $h = (h_{11}, h_{12}, \ldots, t_3)^T$ denotes the parameters concerning the posture of the camera, its position, the focal distance and the center of the image. Since a uniform scalar change of the parameters does not change the relationship, $h_{32}$ is set to unity for simplicity.
- $u = \dfrac{h_{11} X + h_{12} Y + t_1}{h_{31} X + Y + t_3}, \qquad v = \dfrac{h_{21} X + h_{22} Y + t_2}{h_{31} X + Y + t_3}$ (4)
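Equation (4) can be evaluated directly; the sketch below is illustrative only (not the patent's code), and the parameter tuple layout is an assumption:

```python
def project_road_point(X, Y, h):
    """Project a road-plane point (X, Y) to image coordinates (u, v)
    following equation (4), with h32 fixed to unity.
    h = (h11, h12, h21, h22, h31, t1, t2, t3)  -- assumed ordering."""
    h11, h12, h21, h22, h31, t1, t2, t3 = h
    w = h31 * X + Y + t3                      # common denominator (h32 = 1)
    return (h11 * X + h12 * Y + t1) / w, (h21 * X + h22 * Y + t2) / w
```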
- a camera model is considered under the following premise here.
- $\varepsilon$ is a Y-direction deviation of the coordinate origin with respect to the median of the view points of the right and left cameras as shown in FIG. 5
- $t_3 = \varepsilon + \delta t_3$.
- the equation (4) can be simplified as the following equation: $u \approx \dfrac{h_{11} X + h_{12} Y + t_1}{Y + \varepsilon}, \qquad v \approx \dfrac{h_{21} X + h_{22} Y + t_2}{Y + \varepsilon}$ (6)
- the matrix of the right-hand member is assumed to be M.
- $t = (u_0, v_0)^T$
- $(h_{12}, h_{22})^T = t$.
- $X = (X/Y_c, 1/Y_c)^T$
- projection points of the point P on the road surface to right and left images are ul and ur
- $u_l \simeq t_l + M_l X$
- $u_r \simeq t_r + M_r X$ (8)
- the equation (19) is a road planarity constraint to an inclined surface.
- the corresponding point computation unit 15 computes the position of the corresponding point of the input point on the other image, and outputs the result.
- the image transformation unit 16 transforms the right image as shown in FIG. 2 so that the region satisfying the road planarity constraint in the right image matches the left image as shown in FIG. 3 , using the correspondence relation between the right and left images obtained by the corresponding point computation unit 15 , to make a transformed image as shown in FIG. 4 and store it in the image storage unit 13 .
- the detector 17 sets an obstacle search region as shown in, for example, FIG. 6 to one image (for example, the left image), and determines whether points in the search region are obstacles or not.
- the detector 17 detects an obstacle using the left image, right image and transformed image shown in FIG. 7 which are stored in the image storage unit 13 .
- a vertical strip region Al is set on the left image as shown in FIG. 7 , and every hypothesized grounding position of an obstacle in the region Al is tested to determine whether or not there is an obstacle at the grounding position.
- a point P 1 in the left image of FIG. 7 is a grounding position of an obstacle
- the points on the right image and transformed image of FIG. 7 which correspond to the point P 1 of the left image are Pr 1 and Prt 1 , respectively.
- the part of the region Al above the point P 1 corresponds to the region Ar 1 of the right image shown in FIG. 7
- the part of the region Al below the point P 1 corresponds to a region below the point Prt 1 of the region Art in the transformed image of FIG. 7 .
- the match between the image of the region Al and the corresponding regions in the right image and the transformed image is computed by normalized correlation or any other method that measures the goodness of match between two partial images.
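Normalized correlation, mentioned above as one possible match measure, can be sketched as follows (an illustrative implementation, not the patent's own code):

```python
import numpy as np

def normalized_correlation(patch_a, patch_b):
    """Goodness of match between two equal-sized image patches, computed
    as zero-mean normalized cross-correlation, in the range [-1, 1]."""
    a = np.asarray(patch_a, dtype=float).ravel()
    b = np.asarray(patch_b, dtype=float).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    if denom == 0.0:             # a flat patch: correlation is undefined
        return 0.0
    return float((a * b).sum() / denom)
```

Because the mean and the scale of each patch are normalized away, this measure tolerates the brightness differences between cameras discussed later in the text.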
- the points of the right image and transformed image of FIG. 7 which correspond to the point P 2 of the left image are Pr 2 and Prt 2 , respectively.
- the region above the point P 2 on the image of the region Al corresponds to the right image region Ar 2 of FIG. 7
- the region below the point P 2 on the image of the region Al corresponds to the region below the point Prt 2 on the region Art of the transformed image of FIG. 7 .
- the match between the image of the region Al in the left image of FIG. 7 and the corresponding regions in the right image and the transformed image is computed by normalized correlation or any other method that measures the goodness of match between two partial images.
- the points of the right image and transformed image of FIG. 7 which correspond to the point P 3 of the left image are points Pr 3 and Prt 3 , respectively.
- the region above the point P 3 on the image of the region Al corresponds to the region Ar 3 of FIG. 7
- the region below the point P 3 of the image of the region Al corresponds to the region below the point Prt 3 of the region Art in the transformed image.
- the match between the image of the region Al in the left image of FIG. 7 and the corresponding regions in the right image and the transformed image is computed by normalized correlation or any other method that measures the goodness of match between two partial images.
- the image signal (luminance signal) of the region Ar 2 of the right image is added to the image signal (luminance signal) of the region below the point Prt 2 on the region Art of the transformed image, and the addition result is compared with the signal of the region Al of the left image to compute the match of the images.
- the image signal (luminance signal) of the region Ar 3 of the right image is added to the image signal (luminance signal) of the region below the point Prt 3 on the region Art of the transformed image, and the addition result is compared with the signal of the region Al of the left image to compute the match of the images.
- a graph showing matching of images as shown in FIG. 7 is formed.
- the point which provides the highest match in the graph is taken as the grounding position of the obstacle in the region Al. If this procedure is repeated while shifting the vertical strip region Al in the horizontal direction, stable and accurate grounding positions of obstacles can be obtained over the entire image.
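The peak search over the match graph and the horizontal shifting of the strip can be sketched as below; the peak-prominence threshold is an illustrative assumption, not specified by the patent:

```python
import numpy as np

def find_grounding_row(match_scores):
    """Given match scores for every hypothesized grounding row of one
    vertical strip (higher = better), return the row with the peak score,
    or None when no peak stands out.  The prominence criterion
    (peak must exceed mean + one standard deviation) is illustrative."""
    scores = np.asarray(match_scores, dtype=float)
    peak = int(scores.argmax())
    if scores[peak] <= scores.mean() + scores.std():
        return None
    return peak

def scan_strips(score_matrix):
    """Repeat the peak search while shifting the strip horizontally:
    one grounding row (or None) per column of the score matrix."""
    return [find_grounding_row(col) for col in np.asarray(score_matrix, float).T]
```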
- FIG. 8 shows the entire flow of obstacle detection according to the embodiment of the present invention.
- when right and left images are input from the video cameras, they are stored in a memory (S 1 , S 2 ).
- a parameter of road planarity constraint is computed based on the stored right and left images (S 3 ).
- the road planarity constraint parameter is applied to a point of the left image, and the position of the point of the right image corresponding to the point of the left image is computed (S 4 ).
- the right image is subjected to affine transformation to match with the left image using correspondence relation between a set point of the left image and the computed point of the right image.
- the transformed image is stored in the memory (S 5 , S 6 ).
- the stored image is read (S 7 ), and the strip region Al is set on the left image with respect to the point P 1 of the left image as shown in FIG. 7 (S 8 ).
- a region Ar (or Ar 1 ) is set to the right image with respect to the point Pr (or Pr 1 ) corresponding to the point P 1 (S 9 ).
- a region Art is set to the transformed image with respect to the point Prt (or Prt 1 ) corresponding to the point P 1 (S 10 ).
- the match of the images is computed with respect to the region Al of the left image, the region Ar of the right image and the region Art of the transformed image (S 11 ).
- the point P 1 of the left image is updated to the point P 2 in the vertical direction (S 12 ), and it is determined whether the update has been done n times (S 13 ). When this determination is NO, the process returns to step S 7 , and the steps S 7 to S 13 are repeated with respect to the point P 2 .
- When the determination of step S 13 is YES, obstacle detection is determined from the matching result of the images (S 14 ). At this point, when there is a peak in the waveform showing the goodness of match of the images, that is, when obstacle detection is determined, the process is finished. However, if there is no peak in the waveform, the point P 1 of the left image is shifted in the horizontal direction and reset (S 15 ), and the process from step S 7 is done again.
- an obstacle detection signal is output.
- the obstacle detection signal is transmitted to a safety driving support apparatus or an automotive vehicle for supporting safety driving of a car.
- the present invention can realize a process shown in FIG. 8 by software as well as hardware.
- the present invention can be applied not only to a car traveling on an ordinary road or a highway but also to an automotive vehicle or automotive inspection vehicle which moves along a yard road, through a tunnel or along an indoor path.
- one reason that general stereovision is not very reliable is that it tries to estimate depth for every pixel in an image. Since triangulation requires a point correspondence for each measurement, general stereovision is a problem of estimating N parameters from N data, where N is the number of pixels in an image. It is very difficult to obtain a statistically stable estimate of the parameters in this problem setting.
- the present embodiment uses not only road planarity constraint used in a plane projection stereo, but also obstacle planarity constraint, which gives correspondence between the left and right projections of a point which has a certain depth.
- the obstacle planarity constraint is given by the following equation (20) similar to the road planarity constraint.
- $v' = \dfrac{g_{21} u + g_{22} v + g_{23}}{g_{31} u + g_{32} v + g_{33}}$ (20)
- the corresponding point P′ on the image 2 of the point P is given by the road planarity constraint equation of the equation (1).
- the point on the obstacle O has the same depth as the point P due to the assumption that the obstacle stands perpendicular to the road surface. Accordingly, the parameter of the obstacle surface constraint equation (20) can be determined by the depth of the point P.
- the region A′ in the reference image which corresponds to the partial region above the point P in a rectangular region A in the target image can be obtained by the obstacle surface constraint.
- the region A′′ corresponding to the partial region in the reference image can be obtained by the road planarity constraint equation (1).
- a single grounding point parameter uniquely determines the region in the reference image which corresponds to the region A in the target image.
- suppose the width of the target image and the height of the image below the horizon are W and H respectively, and the image below the horizon in the target image is divided into W columns which have 1 pixel in width and H pixels in height. If one grounding point parameter is given to each column, the correspondence relation from the target image to the reference image is uniquely determined. The match between the two images is measured based on this correspondence relation, and the obstacle can be detected by obtaining the series of grounding point parameters (the boundary line between obstacles and the road surface) that maximizes the match.
- This is a problem of estimating W parameters from a pair of images, each of which has W by H pixels. Since the number of data is much larger than the number of parameters to be estimated, a statistically stable estimate can be obtained. Because this is the optimization of a one-dimensional function referred to as the grounding line, an optimization method of good efficiency such as Dynamic Programming can be applied to this embodiment.
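A Dynamic Programming optimization of the grounding line, as suggested above, can be sketched as follows; the smoothness penalty on jumps between neighboring columns is an illustrative assumption, not a detail taken from the patent:

```python
import numpy as np

def optimize_grounding_line(match, smooth_penalty=1.0):
    """Dynamic-programming sketch of the grounding-line optimization.
    match[r, c] is the image-match score when the boundary in column c
    lies at row r (H candidate rows, W columns).  Returns, per column,
    the row sequence maximizing total match minus a penalty proportional
    to the jump between neighboring columns."""
    match = np.asarray(match, dtype=float)
    H, W = match.shape
    rows = np.arange(H)
    cost = match[:, 0].copy()                 # best score ending at row r in column 0
    back = np.zeros((H, W), dtype=int)
    for c in range(1, W):
        # trans[r_prev, r]: carry cost forward, penalizing |r - r_prev|
        trans = cost[:, None] - smooth_penalty * np.abs(rows[:, None] - rows[None, :])
        back[:, c] = trans.argmax(axis=0)
        cost = trans.max(axis=0) + match[:, c]
    # Backtrack the optimal boundary line from the best final row.
    line = [int(cost.argmax())]
    for c in range(W - 1, 0, -1):
        line.append(int(back[line[-1], c]))
    return line[::-1]
```

The W-column structure makes each DP stage depend only on the previous column, so the run time is O(W·H²), improvable to near O(W·H) with a distance-transform trick.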
- the second embodiment of the present invention will be described referring to drawings.
- the present embodiment assumes the situation of detecting an obstacle existing on the road surface, for example a pedestrian, a preceding car or a parked vehicle, using a stereo camera unit having right and left cameras mounted on a car similarly to FIG. 1 .
- the cameras are arranged so that the region in the three-dimensional space from which an obstacle is to be detected is included in the visual field of all cameras.
- the images captured with the right and left cameras respectively are stored in the storage device of a computer.
- the optical axes of the two cameras are approximately parallel with each other. Further the plane including two optical axes is substantially parallel with the road surface.
- the cameras are arranged so that the position deviation between the two cameras in the direction of the camera optical axes is minute in comparison with the depth of the obstacle.
- the above arrangement of the cameras largely decreases the arithmetic operation quantity required for transformation of the image and computation of the correspondence relation between coordinates. Accordingly, if there is no external constraint on the arrangement of the cameras, the above camera arrangement is preferable. However, it should be noted that the present embodiment is not limited to the above camera arrangement. It is preferable that the cameras are identical to each other in internal parameters such as focal distance or size of the image plane, but this is not always a necessary condition.
- FIG. 11 shows a block diagram of an obstacle detection apparatus of the present embodiment.
- Right and left cameras 21 R and 21 L are connected to an image storage unit 22 storing image signals output from the cameras.
- An image storage unit 22 is connected to an image matching computation unit 23 which computes matching between a target image and a reference image.
- the output port of the image matching computation unit 23 is connected to a boundary line function optimizer 24 and a correspondence computation unit 25 .
- the correspondence computation unit 25 computes correspondence between the target image and the reference image from the grounding position of the obstacle with the road surface in the road surface region of the target image.
- This correspondence computation unit 25 includes a road region corresponding point computation module 25 - 1 computing correspondence concerning the road surface region of target image, and an obstacle region corresponding point computation module 25 - 2 computing correspondence concerning the obstacle region of the target image.
- the road region corresponding point computation module 25 - 1 computes at first a parameter of the road planarity constraint equation (1) by means of techniques described in Japanese Patent Laid-Open No. 2001-76128, the entire contents of which are incorporated herein by reference, and computes a coordinate (u′, v′) of the point of the reference image which corresponds to a coordinate (u, v) of a point in the target image, using the road planarity constraint equation (1).
- Obstacle region corresponding point computation module 25 - 2 computes at first a parameter of the obstacle surface constraint equation (20) with respect to a designated depth d or a grounding point P of the road surface with the obstacle, and computes a coordinate (u′, v′) of the point of the reference image which corresponds to a coordinate (u, v) of a point in the target image, using the obstacle constraint equation (20).
- the parameter of the obstacle surface constraint equation (20) is computed as follows.
- $n^T m = d$ (22)
- Equation (21) can be transformed to the following equation (23) by substituting the equation (22) into the equation (21).
- $m' = \left( R + \dfrac{t\,n^T}{d} \right) m$ (23)
- Equation (20) is provided from the equations (23), (24) and (25). If calibration of the two cameras 21 L and 21 R has been done, the rotation matrix R, the translation vector t and the normal direction n are known. Accordingly, if the depth parameter d is given, the parameters of the equation (20) are uniquely determined.
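Forming the constraint from R, t, n and d per equation (23) reduces to building a plane-induced homography; a minimal sketch (illustrative, not the patent's code):

```python
import numpy as np

def plane_homography(R, t, n, d):
    """Inter-camera homography for points on the plane n^T m = d,
    following m' = (R + t n^T / d) m (equation (23)).
    R: 3x3 rotation between cameras, t: translation vector,
    n: plane normal, d: plane distance (depth of the obstacle surface)."""
    R = np.asarray(R, dtype=float)
    t = np.asarray(t, dtype=float).reshape(3, 1)
    n = np.asarray(n, dtype=float).reshape(1, 3)
    return R + (t @ n) / d
```

As d grows large the homography tends toward R alone, consistent with the intuition that a very distant plane induces an almost pure rotation between the two views.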
- when a coordinate (u, v) of the projection point, on one image, of the grounding point P of the obstacle on the road surface is given as input instead of the depth parameter d,
- the coordinate (u′, v′) of the projection point of the point P on the other image is computed using the road planarity constraint equation (1), and the depth d can then be obtained by triangulation using this correspondence.
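Under the nearly parallel-axis camera arrangement assumed in this embodiment, the triangulated depth reduces to the standard disparity relation Z = f·B/(u − u′); a hedged sketch, where the focal-length and baseline parameter names are assumptions:

```python
def depth_from_disparity(u_left, u_right, focal_px, baseline):
    """Depth by triangulation for a parallel-axis stereo pair:
    Z = f * B / (u_left - u_right), with focal_px the focal length in
    pixels and baseline the distance between the camera centers."""
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("a finite-depth point must have positive disparity")
    return focal_px * baseline / disparity
```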
- the obstacle surface is approximately parallel with the image plane, and a pattern on the obstacle surface is projected onto the image plane of each of the right and left images subject only to the same scale change.
- the coordinates (u, v) and (u′, v′) of a point P on an obstacle in the right and left images correspond to each other by the following equation, using a two-dimensional rotation matrix Q, which accounts for the angle around the optical axes between the right and left camera coordinate systems, and a translation vector s: $(u', v')^T = Q\,(u, v)^T + s$. This is referred to as a simplified obstacle planarity constraint.
- a line parallel to the image planes of the cameras is drawn on the road surface beforehand, and an angle between the line segments of the line, which are projected on the right and left images is computed.
- This angle is taken as the rotation angle of the rotation matrix Q, assuming that the relative position between the cameras does not change while the car is traveling. In this case, since the parameter of the rotation matrix Q is constant, only the translation vector s of the simplified obstacle planarity constraint changes according to the position of the obstacle surface.
- the translation vector s can be computed by giving a set of coordinates (u, v) and (u′, v′) of points concerning the point P on the obstacle surface and projected on the right and left images.
- the coordinates (u, v) and (u′, v′) can be provided by the road planarity constraint.
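Given Q and one road-constraint correspondence at the grounding point, the translation vector s follows by simple rearrangement of the simplified constraint; an illustrative sketch (the function name is an assumption):

```python
import numpy as np

def translation_from_correspondence(Q, uv, uv_prime):
    """Simplified obstacle planarity constraint (u', v')^T = Q (u, v)^T + s:
    with Q fixed, a single corresponding pair at the grounding point
    determines s = (u', v')^T - Q (u, v)^T."""
    Q = np.asarray(Q, dtype=float)
    return np.asarray(uv_prime, dtype=float) - Q @ np.asarray(uv, dtype=float)
```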
- When the image match computation unit 23 receives a target region image obtained by clipping a part of the target image, such as the region A shown in FIG. 12 , and the coordinate of a grounding point of the obstacle on the road surface in the target image as a parameter, it computes the match between the target image and the images of the regions of the reference image which correspond to it.
- the region A′′ of FIG. 12 corresponding to it in the reference image is computed with the road region corresponding point computation module 25 - 1 .
- the match between the regions is computed based on the correspondence and is output as road region matching value.
- the region A′ of FIG. 12 corresponding to it in the reference image is computed with the obstacle region corresponding point computation module 25 - 2 .
- the match between the regions is computed based on the correspondence and is output as obstacle region matching value.
- the matching value may be obtained by comparing, for example, the brightness values of the corresponding points, outputting 1 when the brightness difference is less than a threshold, and 0 otherwise.
- the matching value may also be obtained by clipping small regions near the corresponding points, calculating the normalized correlation between the regions, and carrying out threshold processing similar to the above.
- since the brightness value is affected by various parameters, such as the relative position between a camera and a light source or the aperture value of the camera, the brightness value may differ between the cameras even if a correct pair of corresponding points is detected. In such a case, it is preferable to use means for comparing the shapes of local images, such as normalized correlation of a small region, rather than comparing brightness values directly.
- the road region corresponding point computation module 25 - 1 produces a road surface transformed image in which the road surface region of the reference image matches with road surface region of the target image.
- the obstacle region corresponding point computation module 25 - 2 produces an obstacle surface transformed image in which the obstacle region having a constant depth in the reference image matches with the obstacle region having the same depth as the target image by translation.
- the road region corresponding point computation module 25 - 1 computes the match between the road surface region and the road surface transformed image obtained by transforming the reference image.
- the obstacle region corresponding point computation module 25 - 2 computes the match between the obstacle region and the obstacle surface transformed image obtained by transforming the reference image.
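Both transformed images are planar warps of the reference image. As an illustrative sketch (the 3×3 homography matrix would come from the calibration implied by the planarity constraints; the function name and nearest-neighbour sampling are assumptions of this sketch), such a warp can be applied with an inverse mapping:

```python
import numpy as np

def warp_by_homography(img, H):
    """Inverse-warp a grayscale image by the 3x3 homography H.
    Nearest-neighbour sampling; pixels mapping outside the source become 0."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Homogeneous coordinates of every output pixel.
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    # Map output pixels back into the source image.
    src = np.linalg.inv(H) @ pts
    sx = np.round(src[0] / src[2]).astype(int)
    sy = np.round(src[1] / src[2]).astype(int)
    out = np.zeros_like(img)
    ok = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out[ys.ravel()[ok], xs.ravel()[ok]] = img[sy[ok], sx[ok]]
    return out
```

With the road-plane homography, the warped reference image lines up with the target image wherever the scene really is the road surface, which is what makes the region-wise match computation above meaningful.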
- the same road planarity constraint is used for the whole image. Therefore, the road surface image transformation needs to be done only once.
- the obstacle planarity constraint differs according to the grounding position of the obstacle on the road surface. Therefore, the obstacle surface transformation needs to be done several times.
- since this transformation is done only to correct the deformation of the image when the matching value is computed, it is not necessary to generate a transformed image for every grounding position; a few obstacle surface transformed images are sufficient.
- the grounding position of the obstacle differs only by the quantity of parallel displacement. Accordingly, the obstacle surface transformed image is generated only for one properly chosen grounding position, and the obstacle surface transformed image for another grounding position is created by correcting the already created obstacle surface transformed image by the difference in the quantity of parallel displacement. In this case, the obstacle surface transformation needs to be done only once.
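The reuse described above can be sketched as follows, assuming rectified side-by-side cameras so that the parallel displacement between grounding positions is a purely horizontal disparity (the function names and the horizontal-shift assumption are illustrative, not from the patent):

```python
import numpy as np

def shift_horizontally(img, dx):
    """Parallel displacement of an image by dx columns (zero fill)."""
    out = np.zeros_like(img)
    if dx > 0:
        out[:, dx:] = img[:, :-dx]
    elif dx < 0:
        out[:, :dx] = img[:, -dx:]
    else:
        out[:] = img
    return out

def obstacle_transform(base_warped, base_disparity, new_disparity):
    """Derive the obstacle surface transformed image for a new grounding
    position from one precomputed warp: between grounding positions only
    the quantity of parallel displacement (the disparity) differs."""
    return shift_horizontally(base_warped, new_disparity - base_disparity)
```

Only `base_warped` requires an actual image transformation; every other grounding position costs just a cheap shift, which is the source of the "only once" claim above.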
- when the boundary line function optimization module 24 sets a boundary line between the obstacle and the road surface in the road surface region of the target image, as shown in FIG. 14 , it computes the boundary line function that maximizes the match between the target image and the reference image computed by the image match computation unit 23 , based on the correspondence between the target image and the reference image calculated by the correspondence computation unit 25 .
- the computation of a boundary line function using Dynamic Programming in the present embodiment is described below with reference to the flowcharts of FIGS. 16 and 17 .
- an image of a left camera 21 L is assumed to be a target image
- an image of right camera 21 R is assumed to be a reference image.
- the matching value f i (v i ) of the road region of the region A i and the matching value g i (v i ) of the obstacle region thereof are computed by the correspondence computation unit 25 and the image match computation unit 23 .
- subroutine 1 is carried out to compute C i (v i ), which is the maximum matching value for the regions from A 1 to A i when the grounding position in the region A i is v i (step S 23 ).
- subroutine 1 provides an ordinate v i-1 of the vertical position, as shown in the flow of FIG. 17 (step S 23 - 1 ), and computes the image match C i-1 (v i-1 ) + c i (v i , v i-1 ) when the grounding line passes through v i-1 and reaches v i (S 23 - 2 ).
- This computed result is stored in a storage unit.
- such a process is repeated while varying v i-1 ; the maximum value of the image match C i-1 (v i-1 ) + c i (v i , v i-1 ) and the value of the path v i-1 at that time are obtained from the result and returned as the return value (S 23 - 3 ).
- in subroutine 1, assuming that the matching value of the region A i is c i (v i ,v i-1 ) when the grounding position in the region A i is v i and the grounding position in the region A i-1 is v i-1 , the maximum value C i (v i ) of the matching value from the region A 1 to the region A i when the grounding position in the region A i is v i is calculated.
- C i (v i ) is calculated recursively as C i (v i ) = max over v i-1 of [ C i-1 (v i-1 ) + c i (v i , v i-1 ) ].
- v W * = arg max vW C W (v W ) is selected (step S 25 ).
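The recursion over C i and the final selection of v W * can be sketched as a small dynamic program. The concrete form of c i used here, the region matching values f and g plus a smoothness penalty on |v i − v i-1 |, is an assumption of this sketch, not the patent's exact definition:

```python
import numpy as np

def optimize_boundary(f, g, penalty=0.0):
    """DP over regions A_1..A_W: C_i(v_i) = max_{v_{i-1}} [C_{i-1}(v_{i-1})
    + c_i(v_i, v_{i-1})], then backtrack from v_W* = argmax C_W(v_W).
    f[i, v], g[i, v]: road / obstacle region matching values of region A_i
    with grounding position v (both W x H arrays)."""
    W, H = f.shape
    C = np.empty((W, H))
    path = np.zeros((W, H), dtype=int)
    C[0] = f[0] + g[0]
    for i in range(1, W):
        for v in range(H):
            # c_i(v, v_prev) modeled as f+g minus a smoothness penalty (assumed).
            prev = C[i - 1] - penalty * np.abs(np.arange(H) - v)
            best = int(np.argmax(prev))       # stored path value v_{i-1}
            C[i, v] = prev[best] + f[i, v] + g[i, v]
            path[i, v] = best
    v = int(np.argmax(C[-1]))                 # v_W* = argmax C_W(v_W) (step S25)
    boundary = [v]
    for i in range(W - 1, 0, -1):             # backtrack the stored paths
        v = int(path[i, v])
        boundary.append(v)
    return boundary[::-1]
```

The inner loop is exactly the subroutine 1 step above: for a fixed v i it scans all v i-1 , keeps the maximum accumulated match and the path value attaining it, and the final argmax plus backtracking recovers the whole grounding line.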
- when an obstacle exists on the right side of the region of interest, such as the region B of FIG. 18 ,
- the part of the obstacle farther than the point P is not viewed from the right camera because of occlusion.
- the condition where such occlusion occurs is v i − v i-1 > Δ, where Δ is a constant which is determined based on the road planarity constraint equation.
- the optimization method using Dynamic Programming is described hereinbefore.
- another optimization method, such as a variational method, can also be used to calculate the boundary line function.
- the obstacle detection apparatus described in the present embodiment can accurately and robustly detect the grounding position of the obstacle on the road surface by computing the boundary line function that maximizes the match between the target image and the reference image input from the cameras.
- a processor comprising a CPU 31 , an HDD 32 and a memory 33 , as shown in FIG. 19 , is used.
- a program to execute the embodiment and the image information provided by the cameras are stored in the HDD 32 .
- the program read from the HDD 32 is stored in the memory 33 , and the CPU 31 executes obstacle detection while reading the image information from the HDD 32 according to the program stored in the memory 33 .
- according to the present invention, it is possible to detect obstacles on the road surface with high reliability and little computation by using images acquired with multiple cameras mounted on a car.
- according to the present invention, it is possible to detect the position of an obstacle existing on a road surface precisely and with high reliability. Since the complicated calibration required in general stereo vision can be largely simplified, as in a plane projection stereo method, the cost required for installation of the apparatus can be greatly reduced.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Mathematical Physics (AREA)
- Mechanical Engineering (AREA)
- Transportation (AREA)
- Automation & Control Theory (AREA)
- Multimedia (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
- Measurement Of Optical Distance (AREA)
- Traffic Control Systems (AREA)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/000,025 US7660434B2 (en) | 2004-07-13 | 2007-12-07 | Obstacle detection apparatus and a method therefor |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2004-205857 | 2004-07-13 | ||
JP2004205857 | 2004-07-13 | ||
JP2005-122547 | 2005-04-20 | ||
JP2005122547A JP4406381B2 (ja) | 2004-07-13 | 2005-04-20 | 障害物検出装置及び方法 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/000,025 Division US7660434B2 (en) | 2004-07-13 | 2007-12-07 | Obstacle detection apparatus and a method therefor |
Publications (1)
Publication Number | Publication Date |
---|---|
US20060013438A1 true US20060013438A1 (en) | 2006-01-19 |
Family
ID=35599464
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/178,274 Abandoned US20060013438A1 (en) | 2004-07-13 | 2005-07-12 | Obstacle detection apparatus and a method therefor |
US12/000,025 Active 2025-09-05 US7660434B2 (en) | 2004-07-13 | 2007-12-07 | Obstacle detection apparatus and a method therefor |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/000,025 Active 2025-09-05 US7660434B2 (en) | 2004-07-13 | 2007-12-07 | Obstacle detection apparatus and a method therefor |
Country Status (2)
Country | Link |
---|---|
US (2) | US20060013438A1 (zh) |
JP (1) | JP4406381B2 (zh) |
Cited By (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090041337A1 (en) * | 2007-08-07 | 2009-02-12 | Kabushiki Kaisha Toshiba | Image processing apparatus and method |
EP2063404A1 (en) * | 2007-11-23 | 2009-05-27 | Traficon | A detector for detecting traffic participants. |
US20090214081A1 (en) * | 2008-02-25 | 2009-08-27 | Kabushiki Kaisha Toshiba | Apparatus and method for detecting object |
EP2180426A1 (fr) * | 2008-10-24 | 2010-04-28 | Valeo Vision | Procédé de détection d'un objet cible pour véhicule automobile |
US20100318914A1 (en) * | 2009-06-16 | 2010-12-16 | Microsoft Corporation | Viewer-centric user interface for stereoscopic cinema |
DE102009031650A1 (de) | 2009-07-03 | 2011-01-05 | Volkswagen Ag | Verfahren zur Erweiterung eines Kamerasystems, Kamerasystem, Fahrerassistenzsysem und entsprechendes Fahrzeug |
US20110091096A1 (en) * | 2008-05-02 | 2011-04-21 | Auckland Uniservices Limited | Real-Time Stereo Image Matching System |
US20120075428A1 (en) * | 2010-09-24 | 2012-03-29 | Kabushiki Kaisha Toshiba | Image processing apparatus |
US20140104393A1 (en) * | 2011-06-06 | 2014-04-17 | Panasonic Corporation | Calibration device and calibration method |
US20140218481A1 (en) * | 2011-09-07 | 2014-08-07 | Continental Teves Ag & Co. Ohg | Method for Determining Whether a Vehicle can Pass Through an Object by Means of a 3-D Camera |
US20150098623A1 (en) * | 2013-10-09 | 2015-04-09 | Fujitsu Limited | Image processing apparatus and method |
US9122936B2 (en) | 2012-11-13 | 2015-09-01 | Kabushiki Kaisha Toshiba | Detecting device, detection method, and computer program product |
US20160171892A1 (en) * | 2012-02-24 | 2016-06-16 | Magna Electronics Inc. | Driver assistance system with path clearance determination |
US9547805B1 (en) * | 2013-01-22 | 2017-01-17 | The Boeing Company | Systems and methods for identifying roads in images |
CN106444837A (zh) * | 2016-10-17 | 2017-02-22 | 北京理工大学 | 一种无人机避障方法及系统 |
CN106503653A (zh) * | 2016-10-21 | 2017-03-15 | 深圳地平线机器人科技有限公司 | 区域标注方法、装置和电子设备 |
CN108399398A (zh) * | 2018-03-22 | 2018-08-14 | 武汉云衡智能科技有限公司 | 一种基于深度学习的无人驾驶汽车障碍物识别检测方法 |
CN109074668A (zh) * | 2018-08-02 | 2018-12-21 | 深圳前海达闼云端智能科技有限公司 | 路径导航方法、相关装置及计算机可读存储介质 |
US20190220997A1 (en) * | 2018-01-16 | 2019-07-18 | Aisin Seiki Kabushiki Kaisha | Self-position estimation apparatus |
EP3534299A1 (en) * | 2018-02-28 | 2019-09-04 | 2236008 Ontario, Inc. | Rapid ground-plane discrimination in stereoscopic images |
CN111091049A (zh) * | 2019-11-01 | 2020-05-01 | 东南大学 | 一种基于反向特征匹配的路面障碍物检测方法 |
US10678259B1 (en) * | 2012-09-13 | 2020-06-09 | Waymo Llc | Use of a reference image to detect a road obstacle |
US10868974B2 (en) * | 2010-12-01 | 2020-12-15 | Magna Electronics Inc. | Method for determining alignment of vehicular cameras |
US20210366155A1 (en) * | 2020-05-20 | 2021-11-25 | Beijing Baidu Netcom Science And Technology Co., Ltd. . | Method and Apparatus for Detecting Obstacle |
US11294053B2 (en) * | 2019-02-08 | 2022-04-05 | Aisin Seiki Kabushiki Kaisha | Object detection device |
US20220114813A1 (en) * | 2020-12-25 | 2022-04-14 | Apollo Intelligent Connectivity (Beijing) Technology Co., Ltd. | Detecting obstacle |
US12132986B2 (en) * | 2021-12-12 | 2024-10-29 | Avanti R&D, Inc. | Computer vision system used in vehicles |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4426535B2 (ja) * | 2006-01-17 | 2010-03-03 | 本田技研工業株式会社 | 車両の周辺監視装置 |
JP4900377B2 (ja) * | 2008-12-16 | 2012-03-21 | 株式会社デンソー | 画像処理装置 |
JP5190712B2 (ja) * | 2009-03-24 | 2013-04-24 | アイシン精機株式会社 | 障害物検出装置 |
AU2010200875A1 (en) * | 2010-03-09 | 2011-09-22 | The University Of Sydney | Sensor data processing |
JP5472928B2 (ja) * | 2010-12-10 | 2014-04-16 | 株式会社東芝 | 対象物検出装置及び方法 |
US9191650B2 (en) * | 2011-06-20 | 2015-11-17 | National Chiao Tung University | Video object localization method using multiple cameras |
JP5623362B2 (ja) * | 2011-09-28 | 2014-11-12 | 本田技研工業株式会社 | 段差部認識装置 |
KR101896715B1 (ko) * | 2012-10-31 | 2018-09-07 | 현대자동차주식회사 | 주변차량 위치 추적 장치 및 방법 |
CN108629227B (zh) * | 2017-03-15 | 2021-04-06 | 纵目科技(上海)股份有限公司 | 在图像中确定车辆左右边界的方法及系统 |
WO2018230342A1 (ja) * | 2017-06-12 | 2018-12-20 | 日立オートモティブシステムズ株式会社 | 車両運転支援装置 |
CN111971682B (zh) * | 2018-04-16 | 2024-07-05 | 三菱电机株式会社 | 路面检测装置、图像显示装置、障碍物检测装置、路面检测方法、图像显示方法以及障碍物检测方法 |
EP3974779A4 (en) * | 2019-07-19 | 2023-01-11 | Siemens Ltd., China | ROBOT HAND-EYE CALIBRATION METHOD AND APPARATUS, COMPUTER DEVICE, MEDIA AND PRODUCT |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5555175A (en) * | 1993-11-10 | 1996-09-10 | Eurocopter France | Method and device for assistance with the piloting of an aircraft |
US5719954A (en) * | 1994-06-07 | 1998-02-17 | Matsushita Electric Industrial Co., Ltd. | Stereo matching method and disparity measuring method |
US5937079A (en) * | 1996-09-05 | 1999-08-10 | Daimler-Benz Ag | Method for stereo image object detection |
US20010019356A1 (en) * | 2000-02-29 | 2001-09-06 | Nobuyuki Takeda | Obstacle detection apparatus and method |
US6385334B1 (en) * | 1998-03-12 | 2002-05-07 | Fuji Jukogyo Kabushiki Kaisha | System and method for adjusting stereo camera |
US20020191837A1 (en) * | 2001-05-23 | 2002-12-19 | Kabushiki Kaisha Toshiba | System and method for detecting obstacle |
US20030141965A1 (en) * | 2002-01-25 | 2003-07-31 | Altra Technologies Incorporated | Trailer based collision warning system and method |
US20030185421A1 (en) * | 2002-03-28 | 2003-10-02 | Kabushiki Kaisha Toshiba | Image processing apparatus and method |
US20040096082A1 (en) * | 2002-08-28 | 2004-05-20 | Hiroaki Nakai | Obstacle detection device and method therefor |
US20040252864A1 (en) * | 2003-06-13 | 2004-12-16 | Sarnoff Corporation | Method and apparatus for ground detection and removal in vision systems |
US6963661B1 (en) * | 1999-09-09 | 2005-11-08 | Kabushiki Kaisha Toshiba | Obstacle detection system and method therefor |
US7260243B2 (en) * | 2002-08-30 | 2007-08-21 | Fuji Jukogyo Kabushiki Kaisha | Intruding-object detection apparatus |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000293693A (ja) | 1999-03-31 | 2000-10-20 | Toshiba Corp | 障害物検出方法および装置 |
JP4256992B2 (ja) | 1999-09-09 | 2009-04-22 | 株式会社東芝 | 障害物検出装置 |
JP3868876B2 (ja) * | 2002-09-25 | 2007-01-17 | 株式会社東芝 | 障害物検出装置及び方法 |
- 2005
- 2005-04-20 JP JP2005122547A patent/JP4406381B2/ja not_active Expired - Fee Related
- 2005-07-12 US US11/178,274 patent/US20060013438A1/en not_active Abandoned
- 2007
- 2007-12-07 US US12/000,025 patent/US7660434B2/en active Active
Patent Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5555175A (en) * | 1993-11-10 | 1996-09-10 | Eurocopter France | Method and device for assistance with the piloting of an aircraft |
US5719954A (en) * | 1994-06-07 | 1998-02-17 | Matsushita Electric Industrial Co., Ltd. | Stereo matching method and disparity measuring method |
US5937079A (en) * | 1996-09-05 | 1999-08-10 | Daimler-Benz Ag | Method for stereo image object detection |
US6385334B1 (en) * | 1998-03-12 | 2002-05-07 | Fuji Jukogyo Kabushiki Kaisha | System and method for adjusting stereo camera |
US6963661B1 (en) * | 1999-09-09 | 2005-11-08 | Kabushiki Kaisha Toshiba | Obstacle detection system and method therefor |
US20010019356A1 (en) * | 2000-02-29 | 2001-09-06 | Nobuyuki Takeda | Obstacle detection apparatus and method |
US20020191837A1 (en) * | 2001-05-23 | 2002-12-19 | Kabushiki Kaisha Toshiba | System and method for detecting obstacle |
US20030141965A1 (en) * | 2002-01-25 | 2003-07-31 | Altra Technologies Incorporated | Trailer based collision warning system and method |
US20030185421A1 (en) * | 2002-03-28 | 2003-10-02 | Kabushiki Kaisha Toshiba | Image processing apparatus and method |
US20040096082A1 (en) * | 2002-08-28 | 2004-05-20 | Hiroaki Nakai | Obstacle detection device and method therefor |
US6906620B2 (en) * | 2002-08-28 | 2005-06-14 | Kabushiki Kaisha Toshiba | Obstacle detection device and method therefor |
US7260243B2 (en) * | 2002-08-30 | 2007-08-21 | Fuji Jukogyo Kabushiki Kaisha | Intruding-object detection apparatus |
US20040252864A1 (en) * | 2003-06-13 | 2004-12-16 | Sarnoff Corporation | Method and apparatus for ground detection and removal in vision systems |
Cited By (44)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090041337A1 (en) * | 2007-08-07 | 2009-02-12 | Kabushiki Kaisha Toshiba | Image processing apparatus and method |
EP2063404A1 (en) * | 2007-11-23 | 2009-05-27 | Traficon | A detector for detecting traffic participants. |
US20090214081A1 (en) * | 2008-02-25 | 2009-08-27 | Kabushiki Kaisha Toshiba | Apparatus and method for detecting object |
US8094884B2 (en) | 2008-02-25 | 2012-01-10 | Kabushiki Kaisha Toshiba | Apparatus and method for detecting object |
US20110091096A1 (en) * | 2008-05-02 | 2011-04-21 | Auckland Uniservices Limited | Real-Time Stereo Image Matching System |
EP2180426A1 (fr) * | 2008-10-24 | 2010-04-28 | Valeo Vision | Procédé de détection d'un objet cible pour véhicule automobile |
FR2937775A1 (fr) * | 2008-10-24 | 2010-04-30 | Valeo Vision Sas | Procede de detection d'un objet cible pour vehicule automobile |
US9275680B2 (en) | 2009-06-16 | 2016-03-01 | Microsoft Technology Licensing, Llc | Viewer-centric user interface for stereoscopic cinema |
US20100318914A1 (en) * | 2009-06-16 | 2010-12-16 | Microsoft Corporation | Viewer-centric user interface for stereoscopic cinema |
WO2010148154A3 (en) * | 2009-06-16 | 2011-03-03 | Microsoft Corporation | Viewer-centric user interface for stereoscopic cinema |
DE102009031650A1 (de) | 2009-07-03 | 2011-01-05 | Volkswagen Ag | Verfahren zur Erweiterung eines Kamerasystems, Kamerasystem, Fahrerassistenzsysem und entsprechendes Fahrzeug |
DE102009031650B4 (de) | 2009-07-03 | 2024-05-29 | Volkswagen Ag | Verfahren zur Erweiterung eines Kamerasystems, Kamerasystem, Fahrerassistenzsysem und entsprechendes Fahrzeug |
US20120075428A1 (en) * | 2010-09-24 | 2012-03-29 | Kabushiki Kaisha Toshiba | Image processing apparatus |
US10810762B2 (en) * | 2010-09-24 | 2020-10-20 | Kabushiki Kaisha Toshiba | Image processing apparatus |
US11553140B2 (en) | 2010-12-01 | 2023-01-10 | Magna Electronics Inc. | Vehicular vision system with multiple cameras |
US10868974B2 (en) * | 2010-12-01 | 2020-12-15 | Magna Electronics Inc. | Method for determining alignment of vehicular cameras |
US20140104393A1 (en) * | 2011-06-06 | 2014-04-17 | Panasonic Corporation | Calibration device and calibration method |
US9424645B2 (en) * | 2011-06-06 | 2016-08-23 | Panasonic Intellectual Property Management Co., Ltd. | Calibration device and calibration method for a stereo camera without placing physical markers |
US9495603B2 (en) * | 2011-09-07 | 2016-11-15 | Conti Temic Microelectronic Gmbh | Method for determining whether a vehicle can pass through an object by means of a 3-D camera |
US20140218481A1 (en) * | 2011-09-07 | 2014-08-07 | Continental Teves Ag & Co. Ohg | Method for Determining Whether a Vehicle can Pass Through an Object by Means of a 3-D Camera |
US20160171892A1 (en) * | 2012-02-24 | 2016-06-16 | Magna Electronics Inc. | Driver assistance system with path clearance determination |
US10147323B2 (en) * | 2012-02-24 | 2018-12-04 | Magna Electronics Inc. | Driver assistance system with path clearance determination |
US11079768B2 (en) * | 2012-09-13 | 2021-08-03 | Waymo Llc | Use of a reference image to detect a road obstacle |
US10678259B1 (en) * | 2012-09-13 | 2020-06-09 | Waymo Llc | Use of a reference image to detect a road obstacle |
US9122936B2 (en) | 2012-11-13 | 2015-09-01 | Kabushiki Kaisha Toshiba | Detecting device, detection method, and computer program product |
US9547805B1 (en) * | 2013-01-22 | 2017-01-17 | The Boeing Company | Systems and methods for identifying roads in images |
US20150098623A1 (en) * | 2013-10-09 | 2015-04-09 | Fujitsu Limited | Image processing apparatus and method |
CN106444837A (zh) * | 2016-10-17 | 2017-02-22 | 北京理工大学 | 一种无人机避障方法及系统 |
CN106503653A (zh) * | 2016-10-21 | 2017-03-15 | 深圳地平线机器人科技有限公司 | 区域标注方法、装置和电子设备 |
US20190220997A1 (en) * | 2018-01-16 | 2019-07-18 | Aisin Seiki Kabushiki Kaisha | Self-position estimation apparatus |
US10949996B2 (en) * | 2018-01-16 | 2021-03-16 | Aisin Seiki Kabushiki Kaisha | Self-position estimation apparatus |
EP3534299A1 (en) * | 2018-02-28 | 2019-09-04 | 2236008 Ontario, Inc. | Rapid ground-plane discrimination in stereoscopic images |
CN110211172A (zh) * | 2018-02-28 | 2019-09-06 | 2236008安大略有限公司 | 立体图像中的快速地平面区分 |
US11601635B2 (en) * | 2018-02-28 | 2023-03-07 | Blackberry Limited | Rapid ground-plane discrimination in stereoscopic images |
US20220103801A1 (en) * | 2018-02-28 | 2022-03-31 | Blackberry Limited | Rapid ground-plane discrimination in stereoscopic images |
CN108399398A (zh) * | 2018-03-22 | 2018-08-14 | 武汉云衡智能科技有限公司 | 一种基于深度学习的无人驾驶汽车障碍物识别检测方法 |
CN109074668A (zh) * | 2018-08-02 | 2018-12-21 | 深圳前海达闼云端智能科技有限公司 | 路径导航方法、相关装置及计算机可读存储介质 |
US11294053B2 (en) * | 2019-02-08 | 2022-04-05 | Aisin Seiki Kabushiki Kaisha | Object detection device |
CN111091049A (zh) * | 2019-11-01 | 2020-05-01 | 东南大学 | 一种基于反向特征匹配的路面障碍物检测方法 |
US20210366155A1 (en) * | 2020-05-20 | 2021-11-25 | Beijing Baidu Netcom Science And Technology Co., Ltd. . | Method and Apparatus for Detecting Obstacle |
US11688099B2 (en) * | 2020-05-20 | 2023-06-27 | Apollo Intelligent Connectivity (Beijing) Technology Co., Ltd. | Method and apparatus for detecting obstacle |
US20220114813A1 (en) * | 2020-12-25 | 2022-04-14 | Apollo Intelligent Connectivity (Beijing) Technology Co., Ltd. | Detecting obstacle |
US12125287B2 (en) * | 2020-12-25 | 2024-10-22 | Apollo Intelligent Connectivity (Beijing) Technology Co., Ltd. | Detecting obstacle |
US12132986B2 (en) * | 2021-12-12 | 2024-10-29 | Avanti R&D, Inc. | Computer vision system used in vehicles |
Also Published As
Publication number | Publication date |
---|---|
JP4406381B2 (ja) | 2010-01-27 |
JP2006053890A (ja) | 2006-02-23 |
US7660434B2 (en) | 2010-02-09 |
US20080285798A1 (en) | 2008-11-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7660434B2 (en) | Obstacle detection apparatus and a method therefor | |
US7151996B2 (en) | System and method for generating a model of the path of a roadway from an image recorded by a camera | |
US6990253B2 (en) | System and method for detecting obstacle | |
US8885049B2 (en) | Method and device for determining calibration parameters of a camera | |
US8259998B2 (en) | Image processing device for vehicle | |
US8180100B2 (en) | Plane detector and detecting method | |
JP5588812B2 (ja) | 画像処理装置及びそれを用いた撮像装置 | |
US20020134151A1 (en) | Apparatus and method for measuring distances | |
US8331653B2 (en) | Object detector | |
CN108692719B (zh) | 物体检测装置 | |
Wedel et al. | Realtime depth estimation and obstacle detection from monocular video | |
EP1727089A2 (en) | System and method for estimating ego-motion of a moving vehicle using successive images recorded along the vehicle's path of motion | |
JPH10187974A (ja) | 物流計測装置 | |
US20240070916A1 (en) | Vehicle and Control Method Thereof | |
Florez | Contributions by vision systems to multi-sensor object localization and tracking for intelligent vehicles |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: KABUSHIKI KAISHA TOSHIBA, JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:KUBOTA, SUSUMU;REEL/FRAME:016995/0320 Effective date: 20050722 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |