
CN114782447A - Road surface detection method, device, vehicle, storage medium and chip - Google Patents

Road surface detection method, device, vehicle, storage medium and chip

Info

Publication number
CN114782447A
Authority
CN
China
Prior art keywords
image
target
road surface
detection
camera parameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210712200.4A
Other languages
Chinese (zh)
Other versions
CN114782447B (en)
Inventor
冷汉超
俞昆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiaomi Automobile Technology Co Ltd
Original Assignee
Xiaomi Automobile Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiaomi Automobile Technology Co Ltd filed Critical Xiaomi Automobile Technology Co Ltd
Priority to CN202210712200.4A priority Critical patent/CN114782447B/en
Publication of CN114782447A publication Critical patent/CN114782447A/en
Application granted granted Critical
Publication of CN114782447B publication Critical patent/CN114782447B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/269 Analysis of motion using gradient-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20112 Image segmentation details
    • G06T2207/20164 Salient point detection; Corner detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)
  • Image Processing (AREA)

Abstract

The disclosure relates to the field of automatic driving, and in particular to a road surface detection method, a road surface detection device, a vehicle, a storage medium and a chip. The road surface detection method comprises: obtaining a first image of a target detection road surface at a first moment and a second image at a second moment, together with a first camera parameter corresponding to the first image and a second camera parameter corresponding to the second image; performing ground position alignment processing on the first image according to the first camera parameter and the second camera parameter to obtain an aligned target image; determining a residual optical flow map corresponding to the first image according to the second image and the target image; and determining detection result information of the target detection road surface according to the residual optical flow map. Because the detection result information is derived from the residual optical flow map rather than from labeled object categories, unlabeled obstacles can be detected effectively and the road surface condition can be detected at a fine granularity, so that the recognition rate of road surface obstacles is effectively improved.

Description

Road surface detection method, device, vehicle, storage medium and chip
Technical Field
The present disclosure relates to the field of automatic driving technologies, and in particular, to a road surface detection method and apparatus, a vehicle, a storage medium, and a chip.
Background
With the increasing demand for autonomous driving, fine-grained detection of road surface conditions is strongly required. In the related art, road surface detection is mostly performed with a neural network model. Training such a model usually requires a large amount of labeled data, and at detection time only the object categories labeled in the training data can be detected; objects that were not labeled in the training data are therefore prone to being missed.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides a road surface detection method, apparatus, vehicle, storage medium, and chip.
According to a first aspect of an embodiment of the present disclosure, there is provided a road surface detection method including:
acquiring a first image of a target detection road surface at a first moment and a second image of the target detection road surface at a second moment, and a first camera parameter corresponding to the first image and a second camera parameter corresponding to the second image;
performing ground position alignment processing on the first image according to the first camera parameter and the second camera parameter to obtain an aligned target image;
acquiring a residual optical flow map according to the second image and the target image;
and determining the detection result information of the target detection road surface according to the residual optical flow map.
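By way of illustration, the four steps above can be sketched as a single Python pipeline. The sketch below is only an outline under assumed conventions; the function names and the idea of passing the sub-steps as callables are illustrative and are not taken from the present filing.

```python
import cv2
import numpy as np

def detect_road_surface(img1, img2, cam1, cam2,
                        ground_homography, residual_flow, surface_results):
    """Outline of the four-step method. img1/img2 are the first and second
    images of the target detection road surface; cam1/cam2 hold the
    corresponding camera parameters. The three callables stand in for the
    sub-steps detailed later in the description (illustrative names)."""
    # Steps 1-2: ground position alignment of the first image.
    H = ground_homography(cam1, cam2)
    target = cv2.warpPerspective(img1, H, (img2.shape[1], img2.shape[0]))

    # Step 3: residual optical flow between the second image and the target image.
    flow = residual_flow(img2, target)

    # Step 4: detection result information from the residual optical flow map.
    return surface_results(flow, H, cam1, cam2)
```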
Optionally, the performing ground position alignment processing on the first image according to the first camera parameter and the second camera parameter to obtain an aligned target image includes:
determining a target homography matrix according to the first camera parameters and the second camera parameters;
and carrying out homography transformation on the first image according to the target homography matrix to obtain the target image.
Optionally, before the determining of the detection result information of the target detection road surface according to the residual optical flow map, the method further includes:
determining the target offset of the camera along the designated direction from the first moment to the second moment according to the first camera parameter and the second camera parameter;
correspondingly, the determining the detection result information of the target detection road surface according to the residual optical flow map includes:
determining a target ratio according to the target homography matrix, the target offset and the residual optical flow map, wherein the target ratio is the ratio of the road surface height at each pixel in the first image to the corresponding depth of that pixel;
determining a dense height map corresponding to the first image according to the target ratio corresponding to the first image;
and determining the detection result information in the first image according to the dense height map.
Optionally, the determining a dense height map corresponding to the first image according to the target ratio corresponding to the first image includes:
determining a target depth map corresponding to the first image according to the target ratio and the first camera parameter;
and determining the road surface height in each pixel in the first image according to the target depth map and the target ratio corresponding to each pixel.
Optionally, the determining the detection result information in the first image according to the dense height map includes:
determining a plurality of target pixels of which the road height is greater than a first threshold and smaller than a second threshold according to the road height corresponding to each pixel in the dense height map, wherein the first threshold is smaller than the second threshold;
and clustering the target pixels to obtain concave-convex areas on the road surface in the first image.
Optionally, the detection result information includes a concave-convex height of a concave-convex area in the target detection road surface, and the determination of the concave-convex condition information in the first image from the dense height map further includes:
acquiring the maximum road surface height in each concave-convex area;
taking the maximum road surface height in the concave-convex area as the concave-convex height of the concave-convex area.
Optionally, the obtaining a residual optical flow map according to the second image and the target image includes:
and taking the second image and the target image as the input of a preset optical flow estimation model to obtain the residual optical flow map output by the preset optical flow estimation model.
According to a second aspect of the embodiments of the present disclosure, there is provided a road surface detection device including:
the acquisition module is configured to acquire a first image of a target detection road surface at a first moment and a second image of the target detection road surface at a second moment, and a first camera parameter corresponding to the first image and a second camera parameter corresponding to the second image;
the alignment module is configured to perform ground position alignment processing on the first image according to the first camera parameter and the second camera parameter to obtain an aligned target image;
a first determination module configured to obtain a residual optical flow map from the second image and the target image;
a second determination module configured to determine detection result information of the target detection road surface according to the residual optical flow map.
Optionally, the alignment module is configured to:
determining a target homography matrix according to the first camera parameters and the second camera parameters;
and performing homography transformation on the first image according to the target homography matrix to obtain the target image.
Optionally, the apparatus further comprises: a third determination module configured to determine a target offset of the camera in a specified direction from the first time to the second time according to the first camera parameter and the second camera parameter;
accordingly, the second determination module is configured to:
determining a target ratio according to the target homography matrix, the target offset and the residual optical flow map, wherein the target ratio is the ratio of the road surface height at each pixel in the first image to the corresponding depth of that pixel;
determining a dense height map corresponding to the first image according to the target ratio corresponding to the first image;
and determining the detection result information in the first image according to the dense height map.
Optionally, the second determining module is configured to:
determining a target depth map corresponding to the first image according to the target ratio and the first camera parameter;
and determining the road surface height in each pixel in the first image according to the target depth map and the target ratio corresponding to each pixel.
Optionally, the detection result information includes a concave-convex region, and the second determining module is configured to:
determining a plurality of target pixels of which the road surface height is larger than a first threshold value and smaller than a second threshold value according to the road surface height corresponding to each pixel in the dense height map, wherein the first threshold value is smaller than the second threshold value;
and clustering the target pixels to obtain concave-convex areas on the road surface in the first image.
Optionally, the detection result information includes a concave-convex height of a concave-convex area in the target detection road surface, and the second determination module is further configured to:
acquiring the maximum road surface height in each concave-convex area;
taking the maximum road surface height in the concave-convex area as the concave-convex height of the concave-convex area.
Optionally, the first determining module is configured to:
and taking the second image and the target image as the input of a preset optical flow estimation model to obtain the residual optical flow map output by the preset optical flow estimation model.
According to a third aspect of the embodiments of the present disclosure, there is provided a vehicle including:
a first processor;
a memory for storing processor-executable instructions;
wherein the first processor is configured to:
acquiring a first image of a target detection road surface at a first moment and a second image of the target detection road surface at a second moment, and a first camera parameter corresponding to the first image and a second camera parameter corresponding to the second image;
performing ground position alignment processing on the first image according to the first camera parameter and the second camera parameter to obtain an aligned target image;
acquiring a residual optical flow map according to the second image and the target image;
and determining the detection result information of the target detection road surface according to the residual optical flow map.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the method of the first aspect described above.
According to a fifth aspect of embodiments of the present disclosure, there is provided a chip comprising a second processor and an interface; the second processor is for reading instructions to perform the method of the first aspect above.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
the method comprises the steps that a first image of a road surface at a first moment and a second image of the road surface at a second moment can be detected through obtaining a target, and a first camera parameter corresponding to the first image and a second camera parameter corresponding to the second image; performing ground position alignment processing on the first image according to the first camera parameter and the second camera parameter to obtain an aligned target image; acquiring a residual light flow diagram according to the second image and the target image; the detection result information of the target detection road surface is determined according to the residual light flow diagram, so that the detection result information of the target detection road surface is determined according to the residual light flow diagram, effective detection can be realized for unmarked obstacles, fine granularity detection can also be realized for road surface conditions, and the identification rate of the road surface obstacles can be effectively improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow chart illustrating a method of road surface detection according to an exemplary embodiment;
FIG. 2 is a flow chart of a method of detecting a road surface according to the embodiment shown in FIG. 1;
FIG. 3 is a schematic diagram of a dense height map shown in an exemplary embodiment of the present disclosure;
fig. 4 is a block diagram illustrating a road surface detection device according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below do not represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the disclosure, as detailed in the appended claims.
It should be noted that all actions of acquiring signals, information or data in the present application are performed under the premise of complying with the corresponding data protection regulation policy of the country of the location and obtaining the authorization given by the owner of the corresponding device.
Fig. 1 is a flowchart illustrating a road surface detection method according to an exemplary embodiment, which may include the following steps, as shown in fig. 1.
In step 101, a first image of the target detection road surface at a first time and a second image at a second time are obtained, together with a first camera parameter corresponding to the first image and a second camera parameter corresponding to the second image.
The first time and the second time may be two adjacent sampling times, the first time being the earlier sampling time and the second time the later one. The first camera parameters are the camera intrinsic and extrinsic parameters at the time the first image is captured, and the second camera parameters are the camera intrinsic and extrinsic parameters at the time the second image is captured. The intrinsic parameters describe attributes such as the camera focal length and the camera center, and the extrinsic parameters describe the camera pose as a series of rotation and translation operations, generally comprising a rotation matrix and a translation vector.
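For concreteness, the per-frame camera parameters described above can be represented as a small data structure. The layout and the example numbers below are illustrative assumptions, not values taken from the filing.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class CameraParams:
    """Per-frame camera parameters (illustrative layout)."""
    K: np.ndarray  # 3x3 intrinsic matrix: [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]
    R: np.ndarray  # 3x3 rotation matrix (extrinsic part of the camera pose)
    t: np.ndarray  # 3-vector translation (extrinsic part of the camera pose)

# Hypothetical example values, for illustration only.
cam1 = CameraParams(
    K=np.array([[1000.0, 0.0, 640.0],
                [0.0, 1000.0, 360.0],
                [0.0, 0.0, 1.0]]),
    R=np.eye(3),
    t=np.zeros(3),
)
```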
In step 102, the ground position alignment processing is performed on the first image according to the first camera parameter and the second camera parameter to obtain an aligned target image.
In this step, a target homography matrix may be determined according to the first camera parameter and the second camera parameter; and then carrying out homography transformation on the first image according to the target homography matrix to obtain the target image.
It should be noted that the target homography matrix can be calculated from the first camera parameter and the second camera parameter as the plane-induced ground homography:

H = K (R - T nᵀ / h_cam) K⁻¹    (Equation 1)

In Equation 1, H is the target homography matrix, R is the rotation matrix of the camera pose from the first time to the second time, K is the camera intrinsic matrix, T is the translation vector of the camera pose from the first time to the second time, n is the normal vector of the ground surface, and h_cam is the camera height above the ground.
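The ground alignment step can be illustrated with the sketch below. It assumes the plane-induced homography form written in Equation 1; the sign convention for the ground normal, the camera frame in which it is expressed, and the example numbers are assumptions for illustration rather than details taken from the filing.

```python
import numpy as np
import cv2

def ground_homography(K, R_rel, t_rel, n_ground, cam_height):
    """Plane-induced homography mapping ground pixels of the first image
    into the second image (standard form, sign conventions assumed)."""
    H = K @ (R_rel - np.outer(t_rel, n_ground) / cam_height) @ np.linalg.inv(K)
    return H / H[2, 2]  # normalize so that H[2, 2] == 1

def align_ground(img1, K, R_rel, t_rel, n_ground, cam_height, out_size):
    """Warp the first image so that its ground plane is aligned with the
    second image, producing the 'target image' of step 102."""
    H = ground_homography(K, R_rel, t_rel, n_ground, cam_height)
    return cv2.warpPerspective(img1, H, out_size), H

# Hypothetical example: camera 1.5 m above a flat road, y-axis pointing down.
# K = np.array([[1000, 0, 640], [0, 1000, 360], [0, 0, 1]], dtype=float)
# target_img, H = align_ground(img1, K, R_rel, t_rel,
#                              n_ground=np.array([0.0, 1.0, 0.0]),
#                              cam_height=1.5, out_size=(1280, 720))
```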
In step 103, a residual optical flow map is obtained from the second image and the target image.
In this step, the second image and the target image may be used as inputs of a preset optical flow estimation model to obtain the residual optical flow map output by the preset optical flow estimation model.
The preset optical flow estimation model may be any existing optical flow estimation model. For example, it may be an optical flow estimation model based on the PWC-Net algorithm (which combines pyramidal processing, warping, and a cost volume), or an optical flow estimation model based on the RAFT (Recurrent All-Pairs Field Transforms) algorithm.
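As one concrete possibility, the sketch below uses the pretrained RAFT model shipped with torchvision as the preset optical flow estimation model. The filing does not specify a particular implementation, so the torchvision API usage and the preprocessing shown here are assumptions.

```python
import torch
from torchvision.models.optical_flow import raft_large, Raft_Large_Weights

def estimate_residual_flow(second_img, target_img):
    """Residual optical flow between the second image and the aligned target
    image, using torchvision's pretrained RAFT (one possible choice).

    second_img, target_img: uint8 RGB numpy arrays of identical size.
    Returns a (2, H, W) flow tensor, i.e. the residual optical flow map.
    (RAFT expects dimensions divisible by 8; resize beforehand if needed.)
    """
    weights = Raft_Large_Weights.DEFAULT
    model = raft_large(weights=weights).eval()
    preprocess = weights.transforms()

    # Convert HxWx3 uint8 arrays to 1x3xHxW tensors expected by the model.
    to_tensor = lambda im: torch.from_numpy(im).permute(2, 0, 1).unsqueeze(0)
    img1, img2 = preprocess(to_tensor(second_img), to_tensor(target_img))

    with torch.no_grad():
        flow_predictions = model(img1, img2)  # list of iterative refinements
    return flow_predictions[-1][0]            # final, most refined estimate
```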
In step 104, the detection result information of the target detection road surface is determined according to the residual optical flow map.
The detection result information may include positions of the recesses and the protrusions, a depth of the recesses, and a height of the protrusions, among others.
According to the above technical solution, the detection result information of the target detection road surface is determined according to the residual optical flow map, so that unlabeled obstacles can be detected effectively and the road surface condition can be detected at a fine granularity, and the recognition rate of road surface obstacles can be effectively improved.
FIG. 2 is a flow chart of a method of detecting a road surface according to the embodiment shown in FIG. 1; as shown in fig. 2, the road surface detection method may further include step 1041;
in step 1041, a target offset of the camera in a designated direction from the first time to the second time is determined according to the first camera parameter and the second camera parameter.
The specified direction may be the Z-axis, X-axis or Y-axis direction of the world coordinate system. The camera extrinsic parameters in the first camera parameters include the camera position along the Z-axis, X-axis and Y-axis directions at the first time, and the camera extrinsic parameters in the second camera parameters include the camera position along the Z-axis, X-axis and Y-axis directions at the second time. The target offset of the camera along the specified direction is obtained from the camera extrinsic parameters at the first time and at the second time.
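A minimal sketch of this step is given below. It assumes that the extrinsic parameters directly provide the camera position in world coordinates, which is an assumption about the parameterization rather than a statement from the filing.

```python
import numpy as np

def target_offset(t_world_1, t_world_2, axis=2):
    """Offset of the camera along a specified world axis between the first
    and second times (axis=2 selects the Z-axis under the convention assumed
    here).

    t_world_1, t_world_2: camera positions in world coordinates (3-vectors),
    taken from the extrinsic parts of the first and second camera parameters.
    """
    return float(t_world_2[axis] - t_world_1[axis])

# Example: the camera moved 0.8 m along the world Z-axis between the frames.
# t_z = target_offset(np.array([0.0, 1.5, 10.0]), np.array([0.0, 1.5, 10.8]))
```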
The determination of the detection result information of the target detection road surface according to the residual optical flow map, shown as step 104 in fig. 1, can be implemented by the following steps 1042 to 1044:
In step 1042, a target ratio is determined according to the target homography matrix, the target offset and the residual optical flow map.
The target ratio is the ratio of the road surface height at each pixel in the first image to the corresponding depth of that pixel.
It should be noted that the target ratio is calculated by Equation 2. For a given pixel of the first image, Equation 2 determines the target ratio γ from: the residual optical flow μ at that pixel; the height h_cam of the camera above the ground; the offset t_z of the camera along the Z-axis from the first time to the second time (i.e. the target offset); the homogeneous pixel coordinate p = (x, y, 1)ᵀ, where x = 0, 1, 2, ..., width₁ and y = 0, 1, 2, ..., height₁, width₁ being the width of the first image and height₁ its height; and the third component H₃ of the target homography matrix, i.e. its third row (h₃₁, h₃₂, h₃₃) when the target homography matrix H has entries h₁₁ through h₃₃.
In step 1043, a dense height map corresponding to the first image is determined according to the target ratio corresponding to the first image.
This dense height map can be obtained by the following steps shown in S1 and S2.
And S1, determining a target depth map corresponding to the first image according to the target ratio and the first camera parameter.
The depth value corresponding to each pixel can be obtained by Equation 3, which yields the target depth map corresponding to the first image. In Equation 3, Z is the depth value corresponding to each pixel in the first image; n is the normal vector of the ground surface; h_cam is the camera height; K is the camera intrinsic matrix; and p = (x, y, 1)ᵀ is the homogeneous pixel coordinate of the corresponding pixel in the second image, where x = 0, 1, 2, ..., width₂ and y = 0, 1, 2, ..., height₂, width₂ being the width of the second image and height₂ its height.
And S2, determining the road surface height in each pixel in the first image according to the target depth map and the target ratio corresponding to each pixel.
It should be noted that the road surface height at each pixel can be obtained by the following formula:

h_p = γ · Z    (Equation 4)

where h_p is the road surface height corresponding to each pixel in the first image, Z is the depth value corresponding to that pixel, and γ is the target ratio. The dense height map is obtained by computing the road surface height at each pixel in the first image and representing these heights as an image, as shown in fig. 3; fig. 3 is a schematic diagram of a dense height map shown in an exemplary embodiment of the present disclosure, in which the lower diagram is the dense height map of the upper diagram.
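Sub-steps S1 and S2 can be sketched together as below. The closed-form depth Z = h_cam / (nᵀK⁻¹p + γ) is a derivation from the stated ground-plane geometry (a point at height γ·Z above a plane at distance h_cam with normal n), not a formula quoted from the filing, and the coordinate conventions are assumptions.

```python
import numpy as np

def dense_height_map(gamma, K, n_ground, cam_height):
    """Build the target depth map (S1) and the dense height map (S2) from the
    per-pixel target ratio gamma.

    gamma      : (H, W) array, road-surface height / depth for each pixel.
    K          : 3x3 camera intrinsic matrix.
    n_ground   : ground-plane unit normal in the camera frame (assumed).
    cam_height : camera height above the ground, in metres.
    """
    h, w = gamma.shape
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    pixels = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T  # 3 x (H*W)

    rays = np.linalg.inv(K) @ pixels                 # back-projected rays K^-1 p
    plane_term = (n_ground @ rays).reshape(h, w)     # n^T K^-1 p per pixel

    # S1: target depth map (derived closed form, see the note above).
    depth = cam_height / (plane_term + gamma)
    # S2: road-surface height per pixel (Equation 4: h_p = gamma * Z).
    height = gamma * depth
    return depth, height
```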
In step 1044, the detection result information in the first image is determined according to the dense height map.
In this step, the detection result information includes the positions of the concave-convex areas, that is, the positions of the protrusions and the recesses. A plurality of target pixels whose road surface height is greater than a first threshold and smaller than a second threshold may be determined according to the road surface height corresponding to each pixel in the dense height map, where the first threshold is smaller than the second threshold; the target pixels are then clustered to obtain the concave-convex areas on the road surface in the first image.
It should be noted that the clustering process may use a clustering algorithm in the prior art, and the road height and position coordinates of each target pixel are used as input, so that the clustering algorithm outputs one or more clusters, and the concave-convex area is obtained according to the coordinate positions of the target pixels in the clusters.
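A sketch of this thresholding-and-clustering step is given below. DBSCAN is used only as an example of a standard clustering algorithm, and the threshold values and clustering parameters are hypothetical; the filing does not fix a particular algorithm or parameter values.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def find_bump_regions(height_map, low=0.03, high=0.5, eps=5.0, min_samples=20):
    """Cluster pixels whose road-surface height lies in (low, high) metres
    into concave-convex areas (hypothetical thresholds and DBSCAN settings).
    Returns a list of regions with their pixel coordinates and peak height."""
    ys, xs = np.where((height_map > low) & (height_map < high))
    if len(xs) == 0:
        return []

    # Use road-surface height and position coordinates as clustering input.
    features = np.column_stack([xs, ys, height_map[ys, xs]])
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(features)

    regions = []
    for label in set(labels) - {-1}:                 # -1 marks DBSCAN noise
        mask = labels == label
        heights = height_map[ys[mask], xs[mask]]
        regions.append({
            "pixels": np.column_stack([xs[mask], ys[mask]]),
            "bump_height": float(heights.max()),     # maximum road-surface height
        })
    return regions
```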
According to the above technical solution, by acquiring a first image of the target detection road surface at a first moment and a second image at a second moment, together with a first camera parameter corresponding to the first image and a second camera parameter corresponding to the second image; performing ground position alignment processing on the first image according to the first camera parameter and the second camera parameter to obtain an aligned target image; acquiring a residual optical flow map according to the second image and the target image; and determining the detection result information of the target detection road surface according to the residual optical flow map, unlabeled obstacles can be detected effectively and the road surface condition can be detected at a fine granularity, so that the recognition rate of road surface obstacles can be effectively improved.
Optionally, the detection result information may further include concave-convex heights of concave-convex regions in the target detection road surface, and after the concave-convex regions are obtained, the maximum road surface height in each concave-convex region may also be obtained; the maximum road surface height in the concave-convex area is taken as the concave-convex height of the concave-convex area.
The maximum road surface height may be the highest raised road surface height, or the deepest depression depth.
The above technical solution can not only detect concave-convex areas at a finer granularity, but also effectively detect the concave-convex height of each concave-convex area in the road surface, thereby providing a reliable data basis for subsequent vehicle control.
FIG. 4 is a block diagram illustrating a road surface detecting device according to an exemplary embodiment; as shown in fig. 4, the road surface detection device may include:
an obtaining module 401 configured to obtain a first image of the target detection road surface at a first time and a second image at a second time, and a first camera parameter corresponding to the first image and a second camera parameter corresponding to the second image;
an alignment module 402 configured to perform ground position alignment processing on the first image according to the first camera parameter and the second camera parameter to obtain an aligned target image;
a first determining module 403 configured to obtain a residual optical flow map from the second image and the target image;
a second determining module 404 configured to determine detection result information of the target detection road surface according to the residual optical flow map.
According to the above technical solution, the detection result information of the target detection road surface is determined according to the residual optical flow map, so that unlabeled obstacles can be detected effectively and the road surface condition can be detected at a fine granularity, and the recognition rate of road surface obstacles can be effectively improved.
Optionally, the alignment module 402 is configured to:
determining a target homography matrix according to the first camera parameters and the second camera parameters;
and performing homography transformation on the first image according to the target homography matrix to obtain the target image.
Optionally, the apparatus further comprises: a third determination module configured to determine a target offset of the camera in a specified direction from the first time to the second time according to the first camera parameter and the second camera parameter;
accordingly, the second determination module 404 is configured to: determine a target ratio according to the target homography matrix, the target offset and the residual optical flow map, wherein the target ratio is the ratio of the road surface height at each pixel in the first image to the corresponding depth of that pixel;
determining a dense height map corresponding to the first image according to the target ratio corresponding to the first image;
and determining the detection result information in the first image according to the dense height map.
Optionally, the second determining module 404 is configured to:
determining a target depth map corresponding to the first image according to the target ratio and the first camera parameter;
and determining the road surface height in each pixel in the first image according to the target depth map and the target ratio corresponding to each pixel.
Optionally, the detection result information includes a concave-convex area, and the second determining module 404 is configured to:
determining a plurality of target pixels of which the road height is greater than a first threshold and less than a second threshold according to the road height corresponding to each pixel in the dense height map, wherein the first threshold is less than the second threshold;
and clustering the target pixels to obtain concave-convex areas on the road surface in the first image.
Optionally, the detection result information includes a concave-convex height of a concave-convex area in the target detection road surface, and the second determining module 404 is further configured to:
acquiring the maximum road surface height in each concave-convex area;
the maximum road surface height in the concave-convex area is taken as the concave-convex height of the concave-convex area.
Optionally, the first determining module 403 is configured to:
and taking the second image and the target image as the input of a preset optical flow estimation model to obtain the residual optical flow map output by the preset optical flow estimation model.
The above technical solution can not only detect concave-convex areas at a finer granularity, but also effectively detect the concave-convex height of each concave-convex area in the road surface, thereby providing a reliable data basis for subsequent vehicle control.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
In another exemplary embodiment of the present disclosure, a vehicle is provided, including:
a first processor;
a memory for storing processor-executable instructions;
wherein the first processor is configured to:
acquiring a first image of a target detection road surface at a first moment and a second image of the target detection road surface at a second moment, and a first camera parameter corresponding to the first image and a second camera parameter corresponding to the second image;
performing ground position alignment processing on the first image according to the first camera parameter and the second camera parameter to obtain an aligned target image;
acquiring a residual optical flow map according to the second image and the target image;
and determining the detection result information of the target detection road surface according to the residual optical flow map.
The present disclosure also provides a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the road surface detection method provided by the present disclosure.
In another exemplary embodiment, a computer program product is also provided, which comprises a computer program executable by a programmable apparatus, the computer program having code portions for performing the above-described road surface detection method when executed by the programmable apparatus, which computer program product may be a chip.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (11)

1. A road surface detection method is characterized by comprising the following steps:
acquiring a first image of a target detection road surface at a first moment and a second image of the target detection road surface at a second moment, and a first camera parameter corresponding to the first image and a second camera parameter corresponding to the second image;
performing ground position alignment processing on the first image according to the first camera parameter and the second camera parameter to obtain an aligned target image;
acquiring a residual optical flow map according to the second image and the target image;
and determining the detection result information of the target detection road surface according to the residual optical flow map.
2. The method for detecting a road surface according to claim 1, wherein the performing ground position alignment processing on the first image according to the first camera parameter and the second camera parameter to obtain an aligned target image includes:
determining a target homography matrix according to the first camera parameters and the second camera parameters;
and performing homography transformation on the first image according to the target homography matrix to obtain the target image.
3. The road surface detection method according to claim 2, characterized in that before the determining of the detection result information of the target detection road surface according to the residual optical flow map, the method further comprises:
determining the target offset of the camera along the designated direction from the first moment to the second moment according to the first camera parameter and the second camera parameter;
correspondingly, the determining the detection result information of the target detection road surface according to the residual optical flow map includes:
determining a target ratio according to the target homography matrix, the target offset and the residual optical flow map, wherein the target ratio is the ratio of the road surface height at each pixel in the first image to the corresponding depth of that pixel;
determining a dense height map corresponding to the first image according to the target ratio corresponding to the first image;
and determining the detection result information in the first image according to the dense height map.
4. The road surface detection method according to claim 3, wherein the determining a dense height map corresponding to the first image according to the target ratio corresponding to the first image includes:
determining a target depth map corresponding to the first image according to the target ratio and the first camera parameter;
and determining the road surface height in each pixel in the first image according to the target depth map and the target ratio corresponding to each pixel.
5. The road surface detection method according to claim 3, wherein the detection result information includes an uneven area, and the determining the detection result information in the first image from the dense height map includes:
determining a plurality of target pixels of which the road surface height is larger than a first threshold value and smaller than a second threshold value according to the road surface height corresponding to each pixel in the dense height map, wherein the first threshold value is smaller than the second threshold value;
and clustering the target pixels to obtain concave-convex areas on the road surface in the first image.
6. The road surface detection method according to claim 5, characterized in that the detection result information includes an irregularity height of an irregularity region in the target detection road surface, and the determination of the irregularity condition information in the first image from the dense height map further includes:
acquiring the maximum road surface height in each concave-convex area;
taking the maximum road surface height in the concave-convex area as the concave-convex height of the concave-convex area.
7. A road surface detection method according to any one of claims 1 to 6, characterised in that said obtaining a residual optical flow map from said second image and said target image comprises:
and taking the second image and the target image as the input of a preset optical flow estimation model to obtain the residual optical flow map output by the preset optical flow estimation model.
8. A road surface detection device characterized by comprising:
the acquisition module is configured to acquire a first image of a target detection road surface at a first moment and a second image of the target detection road surface at a second moment, and a first camera parameter corresponding to the first image and a second camera parameter corresponding to the second image;
the alignment module is configured to perform ground position alignment processing on the first image according to the first camera parameter and the second camera parameter to obtain an aligned target image;
a first determining module configured to obtain a residual optical flow map from the second image and the target image;
a second determination module configured to determine detection result information of the target detection road surface according to the residual optical flow map.
9. A vehicle, characterized by comprising:
a first processor;
a memory for storing processor-executable instructions;
wherein the first processor is configured to:
acquiring a first image of a target detection road surface at a first moment and a second image of the target detection road surface at a second moment, and a first camera parameter corresponding to the first image and a second camera parameter corresponding to the second image;
performing ground position alignment processing on the first image according to the first camera parameter and the second camera parameter to obtain an aligned target image;
acquiring a residual optical flow map according to the second image and the target image;
and determining the detection result information of the target detection road surface according to the residual optical flow map.
10. A computer-readable storage medium, on which computer program instructions are stored, which program instructions, when executed by a processor, carry out the steps of the method according to any one of claims 1 to 7.
11. A chip comprising a second processor and an interface; the second processor is to read instructions to perform the method of any one of claims 1-7.
CN202210712200.4A 2022-06-22 2022-06-22 Road surface detection method, device, vehicle, storage medium and chip Active CN114782447B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210712200.4A CN114782447B (en) 2022-06-22 2022-06-22 Road surface detection method, device, vehicle, storage medium and chip

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210712200.4A CN114782447B (en) 2022-06-22 2022-06-22 Road surface detection method, device, vehicle, storage medium and chip

Publications (2)

Publication Number Publication Date
CN114782447A true CN114782447A (en) 2022-07-22
CN114782447B CN114782447B (en) 2022-09-09

Family

ID=82422520

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210712200.4A Active CN114782447B (en) 2022-06-22 2022-06-22 Road surface detection method, device, vehicle, storage medium and chip

Country Status (1)

Country Link
CN (1) CN114782447B (en)

Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008033781A (en) * 2006-07-31 2008-02-14 Toyota Motor Corp Road surface gradient detection device and image display device
JP2009139324A (en) * 2007-12-10 2009-06-25 Mazda Motor Corp Travel road surface detecting apparatus for vehicle
CN102999759A (en) * 2012-11-07 2013-03-27 东南大学 Light stream based vehicle motion state estimating method
CN103236160A (en) * 2013-04-07 2013-08-07 水木路拓科技(北京)有限公司 Road network traffic condition monitoring system based on video image processing technology
US20130242102A1 (en) * 2011-04-13 2013-09-19 Nissan Motor Co., Ltd. Driving assistance device and method of detecting vehicle adjacent thereto
CN103366158A (en) * 2013-06-27 2013-10-23 东南大学 Three dimensional structure and color model-based monocular visual road face detection method
WO2019007258A1 (en) * 2017-07-07 2019-01-10 腾讯科技(深圳)有限公司 Method, apparatus and device for determining camera posture information, and storage medium
CN109685732A (en) * 2018-12-18 2019-04-26 重庆邮电大学 A kind of depth image high-precision restorative procedure captured based on boundary
WO2019156072A1 (en) * 2018-02-06 2019-08-15 株式会社デンソー Attitude estimating device
CN110235026A (en) * 2017-01-26 2019-09-13 御眼视觉技术有限公司 The automobile navigation of image and laser radar information based on alignment
WO2019174377A1 (en) * 2018-03-14 2019-09-19 大连理工大学 Monocular camera-based three-dimensional scene dense reconstruction method
CN111595334A (en) * 2020-04-30 2020-08-28 东南大学 Indoor autonomous positioning method based on tight coupling of visual point-line characteristics and IMU (inertial measurement Unit)
WO2021073656A1 (en) * 2019-10-16 2021-04-22 上海商汤临港智能科技有限公司 Method for automatically labeling image data and device
CN112700486A (en) * 2019-10-23 2021-04-23 阿里巴巴集团控股有限公司 Method and device for estimating depth of road lane line in image
CN112784671A (en) * 2019-11-08 2021-05-11 三菱电机株式会社 Obstacle detection device and obstacle detection method
CN113592940A (en) * 2021-07-28 2021-11-02 北京地平线信息技术有限公司 Method and device for determining position of target object based on image
CN113822260A (en) * 2021-11-24 2021-12-21 杭州蓝芯科技有限公司 Obstacle detection method and apparatus based on depth image, electronic device, and medium
CN113819890A (en) * 2021-06-04 2021-12-21 腾讯科技(深圳)有限公司 Distance measuring method, distance measuring device, electronic equipment and storage medium
CN113887400A (en) * 2021-09-29 2022-01-04 北京百度网讯科技有限公司 Obstacle detection method, model training method and device and automatic driving vehicle

Patent Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008033781A (en) * 2006-07-31 2008-02-14 Toyota Motor Corp Road surface gradient detection device and image display device
JP2009139324A (en) * 2007-12-10 2009-06-25 Mazda Motor Corp Travel road surface detecting apparatus for vehicle
US20130242102A1 (en) * 2011-04-13 2013-09-19 Nissan Motor Co., Ltd. Driving assistance device and method of detecting vehicle adjacent thereto
CN102999759A (en) * 2012-11-07 2013-03-27 东南大学 Light stream based vehicle motion state estimating method
CN103236160A (en) * 2013-04-07 2013-08-07 水木路拓科技(北京)有限公司 Road network traffic condition monitoring system based on video image processing technology
CN103366158A (en) * 2013-06-27 2013-10-23 东南大学 Three dimensional structure and color model-based monocular visual road face detection method
CN110235026A (en) * 2017-01-26 2019-09-13 御眼视觉技术有限公司 The automobile navigation of image and laser radar information based on alignment
WO2019007258A1 (en) * 2017-07-07 2019-01-10 腾讯科技(深圳)有限公司 Method, apparatus and device for determining camera posture information, and storage medium
WO2019156072A1 (en) * 2018-02-06 2019-08-15 株式会社デンソー Attitude estimating device
WO2019174377A1 (en) * 2018-03-14 2019-09-19 大连理工大学 Monocular camera-based three-dimensional scene dense reconstruction method
CN109685732A (en) * 2018-12-18 2019-04-26 重庆邮电大学 A kind of depth image high-precision restorative procedure captured based on boundary
WO2021073656A1 (en) * 2019-10-16 2021-04-22 上海商汤临港智能科技有限公司 Method for automatically labeling image data and device
CN112700486A (en) * 2019-10-23 2021-04-23 阿里巴巴集团控股有限公司 Method and device for estimating depth of road lane line in image
CN112784671A (en) * 2019-11-08 2021-05-11 三菱电机株式会社 Obstacle detection device and obstacle detection method
CN111595334A (en) * 2020-04-30 2020-08-28 东南大学 Indoor autonomous positioning method based on tight coupling of visual point-line characteristics and IMU (inertial measurement Unit)
CN113819890A (en) * 2021-06-04 2021-12-21 腾讯科技(深圳)有限公司 Distance measuring method, distance measuring device, electronic equipment and storage medium
CN113592940A (en) * 2021-07-28 2021-11-02 北京地平线信息技术有限公司 Method and device for determining position of target object based on image
CN113887400A (en) * 2021-09-29 2022-01-04 北京百度网讯科技有限公司 Obstacle detection method, model training method and device and automatic driving vehicle
CN113822260A (en) * 2021-11-24 2021-12-21 杭州蓝芯科技有限公司 Obstacle detection method and apparatus based on depth image, electronic device, and medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
吴国星 (Wu Guoxing) et al.: "Road vehicle detection based on multi-sensor fusion", Journal of Huazhong University of Science and Technology (Natural Science Edition) *

Also Published As

Publication number Publication date
CN114782447B (en) 2022-09-09

Similar Documents

Publication Publication Date Title
Romero-Ramirez et al. Speeded up detection of squared fiducial markers
CN108229475B (en) Vehicle tracking method, system, computer device and readable storage medium
CN108805016B (en) Head and shoulder area detection method and device
CN112598922B (en) Parking space detection method, device, equipment and storage medium
CN109583365A (en) Method for detecting lane lines is fitted based on imaging model constraint non-uniform B-spline curve
CN112101205B (en) Training method and device based on multi-task network
CN109961013A (en) Recognition methods, device, equipment and the computer readable storage medium of lane line
CN110992424B (en) Positioning method and system based on binocular vision
CN111967396A (en) Processing method, device and equipment for obstacle detection and storage medium
CN116994236A (en) Low-quality image license plate detection method based on deep neural network
CN112784639A (en) Intersection detection, neural network training and intelligent driving method, device and equipment
CN112184723B (en) Image processing method and device, electronic equipment and storage medium
CN114782447B (en) Road surface detection method, device, vehicle, storage medium and chip
US12110009B2 (en) Parking space detection method and system
CN115115704B (en) Method and device for determining vehicle pose information
CN117444450A (en) Welding seam welding method, electronic equipment and storage medium
CN113723432B (en) Intelligent identification and positioning tracking method and system based on deep learning
CN110864670A (en) Method and system for acquiring position of target obstacle
CN115240150A (en) Lane departure warning method, system, device and medium based on monocular camera
Zhang et al. Target tracking for mobile robot platforms via object matching and background anti-matching
CN112529943A (en) Object detection method, object detection device and intelligent equipment
CN118096838B (en) Fish track tracking method based on Kalman filter
CN113643374A (en) Multi-view camera calibration method, device, equipment and medium based on road characteristics
CN110363235A (en) A kind of high-definition picture matching process and system
CN118172423B (en) Sequential point cloud data pavement element labeling method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant