
WO2020133172A1 - 图像处理方法、设备及计算机可读存储介质 (Image processing method, device, and computer-readable storage medium) - Google Patents

图像处理方法、设备及计算机可读存储介质 (Image processing method, device, and computer-readable storage medium)

Info

Publication number
WO2020133172A1
WO2020133172A1 (PCT/CN2018/124726)
Authority
WO
WIPO (PCT)
Prior art keywords
image
matrix
head
rotation
target
Application number
PCT/CN2018/124726
Other languages
English (en)
French (fr)
Inventor
崔健 (Cui Jian)
Original Assignee
深圳市大疆创新科技有限公司 (SZ DJI Technology Co., Ltd.)
Application filed by 深圳市大疆创新科技有限公司
Priority to CN201880068957.6A (granted as CN111279354B)
Priority to PCT/CN2018/124726
Publication of WO2020133172A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/18 Image warping, e.g. rearranging pixels individually
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/582 Recognition of traffic signs
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road

Definitions

  • Embodiments of the present invention relate to the field of image processing technology, and in particular, to an image processing method, device, and computer-readable storage medium.
  • In fields such as autonomous driving and ADAS, lane line algorithms play an important role: their accuracy directly affects the performance and reliability of the system, and they are an important prerequisite for autonomous vehicle control.
  • A lane line algorithm operates on two levels: detection of the lane line, and positioning of the lane line, i.e., calculating the actual positional relationship between the lane line and the vehicle.
  • A traditional lane line detection algorithm collects a head-up image through a photographing device and uses the head-up image to detect the lane line; a traditional lane line positioning algorithm likewise collects a head-up image and uses it to locate the lane line.
  • When the head-up image is used to detect the lane line, the detection result is inaccurate: the size and shape of the lane line in the head-up image are subject to perspective projection, which makes near objects appear large and far objects small, so some distant road surface markers are distorted in shape and cannot be detected correctly.
  • When the head-up image is used to locate the lane line, the positioning result is inaccurate: the shape and size of road surface markers in the head-up image are coupled with the internal parameters of the photographing device and the positional relationship between the photographing device and the road surface, so the actual position of the lane line cannot be obtained directly from its position in the image.
  • The invention provides an image processing method, device, and computer-readable storage medium that can improve the accuracy of lane line detection and accurately locate the actual positional relationship between the lane line and the vehicle.
  • According to a first aspect, a driving assistance device is provided, including at least one photographing device, a processor, and a memory; the driving assistance device is provided on a vehicle and communicates with the vehicle; the memory is used to store computer instructions executable by the processor;
  • the photographing device is configured to collect a head-up image containing a target object and send the head-up image containing the target object to the processor;
  • the processor is configured to read computer instructions from the memory to implement: acquiring the head-up image containing the target object from the photographing device; determining a space plane corresponding to the target object; determining the relative pose of the space plane and the photographing device; and converting the head-up image into a top-down image according to the relative pose.
  • According to a second aspect, a vehicle equipped with a driving assistance system is provided, including at least one photographing device, a processor, and a memory.
  • The memory is used to store computer instructions executable by the processor.
  • The photographing device is used to collect a head-up image containing a target object and send the head-up image containing the target object to the processor;
  • the processor is configured to read computer instructions from the memory to implement the same steps: acquiring the head-up image, determining the space plane corresponding to the target object, determining the relative pose of the space plane and the photographing device, and converting the head-up image into a top-down image according to the relative pose.
  • According to a third aspect, an image processing method is provided, applied to a driving assistance system that includes at least one photographing device. The method includes: acquiring a head-up image containing a target object through the photographing device; determining a space plane corresponding to the target object; determining the relative pose of the space plane and the photographing device; and converting the head-up image into a top-down image according to the relative pose.
  • According to a fourth aspect, a computer-readable storage medium is provided, on which computer instructions are stored; when the computer instructions are executed, the above method is implemented.
  • Based on the above technical solutions, the accuracy of lane line detection can be improved, and the actual positional relationship between the lane line and the vehicle can be accurately located.
  • The head-up image can be converted into a top-down image, and the top-down image used to detect the lane line, improving the accuracy of the lane line detection result.
  • The head-up image can likewise be converted into a top-down image, and the top-down image used to locate the lane line, improving the accuracy of the lane line positioning result so that the actual position of the lane line is accurately known.
  • FIG. 1 is a schematic diagram of an image processing method in an embodiment;
  • FIG. 2 is a schematic diagram of an image processing method in another embodiment;
  • FIG. 3 is a schematic diagram of an image processing method in another embodiment;
  • FIG. 4A is a schematic diagram of a head-up image and a top-down image of an image processing method in an embodiment;
  • FIG. 4B is a schematic diagram of the relationship between the target object, the space plane, and the camera in an embodiment;
  • FIG. 5 is a block diagram of a driving assistance device in an embodiment.
  • Although the terms first, second, third, etc. may be used in the present invention to describe various information, the information should not be limited by these terms; the terms are only used to distinguish information of the same type from one another.
  • For example, without departing from the scope of the present invention, first information may also be referred to as second information and, similarly, second information may also be referred to as first information.
  • Depending on the context, the word "if" may be interpreted as "when," "upon," or "in response to determining."
  • An embodiment of the present invention proposes an image processing method, which can be applied to a driving assistance system, and the driving assistance system may include at least one photographing device.
  • The driving assistance system may be mounted on a mobile platform (such as an unmanned vehicle or an ordinary vehicle), or it may be carried on driving assistance equipment (such as an ADAS device) that is installed on a mobile platform (such as an unmanned vehicle or an ordinary vehicle).
  • The above are only examples of two application scenarios; the driving assistance system can also be carried on other vehicles, and this is not limited.
  • Referring to FIG. 1, the method may include:
  • Step 101: Obtain a head-up image containing a target object through the photographing device.
  • If the driving assistance system is mounted on a mobile platform, the at least one photographing device is installed on the mobile platform, and a head-up image in at least one of the directions in front of, behind, to the left of, or to the right of the mobile platform can be acquired through the photographing device; the head-up image contains the target object.
  • If the driving assistance system is carried on driving assistance equipment, the at least one photographing device is provided on the driving assistance equipment, and a head-up image in at least one of the directions in front of, behind, to the left of, or to the right of the driving assistance equipment can be acquired through the photographing device; the head-up image contains the target object.
  • Step 102: Determine a space plane corresponding to the target object.
  • If the driving assistance system is mounted on a mobile platform, the first posture information of the mobile platform (that is, its current posture information) may be acquired, and the space plane determined according to the first posture information.
  • If the driving assistance system is carried on driving assistance equipment, the second posture information of the driving assistance equipment (that is, its current posture information) may be acquired, and the space plane determined according to the second posture information.
  • In both cases, the space plane refers to the position plane of the target object (such as the road surface or the ground) in the world coordinate system, that is, the position of the space plane in the world coordinate system.
  • Step 103: Determine the relative pose of the space plane and the photographing device.
  • The relative pose refers to the pose of the photographing device relative to the space plane (such as the road surface or the ground), and can also be understood as the external parameters (that is, the positional relationship) of the photographing device relative to the space plane.
  • The relative pose may include, but is not limited to: the pitch angle of the photographing device relative to the space plane, the roll angle of the photographing device relative to the space plane, the yaw angle of the photographing device relative to the space plane, the height of the photographing device relative to the space plane, and the translation parameter of the photographing device relative to the space plane.
  • Step 104: Convert the head-up image into a top-down image according to the relative pose.
  • The projection matrix corresponding to the head-up image can be obtained according to the relative pose; for example, a target rotation matrix can be determined according to the relative pose, target rotation parameters obtained from the target rotation matrix, and the projection matrix corresponding to the head-up image obtained from the relative pose and the target rotation parameters. The head-up image can then be converted into a top-down image according to the projection matrix.
  • The relative pose includes the rotation angle of the photographing device about the pitch axis (that is, its pitch angle relative to the space plane), about the roll axis (its roll angle relative to the space plane), and about the yaw axis (its yaw angle relative to the space plane). Based on this, determining the target rotation matrix according to the relative pose may include, but is not limited to: determining a first rotation matrix according to the rotation angle about the pitch axis; determining a second rotation matrix according to the rotation angle about the roll axis; determining a third rotation matrix according to the rotation angle about the yaw axis; and determining the target rotation matrix according to the first, second, and third rotation matrices.
  • The target rotation matrix may include three column vectors. Obtaining the target rotation parameters according to the target rotation matrix may include, but is not limited to: determining the first column vector of the target rotation matrix as the first rotation parameter and the second column vector as the second rotation parameter, and determining the first and second rotation parameters as the target rotation parameters.
  • The relative pose also includes the translation parameter of the space plane and the photographing device (that is, the translation of the photographing device relative to the space plane). Obtaining the projection matrix according to the relative pose and the target rotation parameters may include, but is not limited to: obtaining the projection matrix according to the target rotation parameters, the normalization coefficient, the internal parameter matrix of the photographing device, and the translation parameter of the space plane and the photographing device.
  • Converting the head-up image into a top-down image according to the projection matrix may include, but is not limited to: for each first pixel in the head-up image, converting the position information of the first pixel into the position information of a second pixel in the top-down image according to the projection matrix; on this basis, the top-down image can be obtained according to the position information of each second pixel.
  • Converting the position information of the first pixel into the position information of the second pixel in the top-down image according to the projection matrix may include, but is not limited to: obtaining the inverse matrix corresponding to the projection matrix, and converting the position information of the first pixel into the position information of the second pixel according to the inverse matrix; that is, each first pixel corresponds to one second pixel.
  • After the conversion, if the target object is a lane line, the lane line can be detected based on the top-down image, and the lane line can also be positioned based on the top-down image.
  • In other words, lane line detection is performed based on the top-down image (not on the head-up image), improving the accuracy of lane line detection, and lane line positioning is performed based on the top-down image (not on the head-up image), improving the accuracy of lane line positioning.
  • Based on the above technical solution, the accuracy of lane line detection can be improved, and the actual positional relationship between the lane line and the vehicle can be accurately located: converting the head-up image into a top-down image and using it for detection improves the accuracy of the detection result, and using it for positioning improves the accuracy of the positioning result, so the actual position of the lane line is accurately known.
  • An embodiment of the present invention proposes an image processing method, which can be applied to a driving assistance system, and the driving assistance system may include at least one photographing device.
  • The driving assistance system can be mounted on a mobile platform (such as an unmanned vehicle or an ordinary vehicle); the driving assistance system can also be carried on other vehicles, and this is not limited.
  • Referring to FIG. 2, the method may include:
  • Step 201: Obtain a head-up image containing a target object through the photographing device.
  • A head-up image in at least one of the directions in front of, behind, to the left of, or to the right of the mobile platform may be acquired through the photographing device, and the head-up image contains the target object.
  • Step 202: Determine the space plane corresponding to the target object according to the first posture information of the mobile platform.
  • The first posture information of the mobile platform may be obtained, and the space plane determined according to the first posture information.
  • The space plane refers to the position plane of the target object (such as the road surface or the ground) in the world coordinate system, that is, the position of the space plane in the world coordinate system.
  • As to acquiring the first posture information: the mobile platform may include a posture sensor, which collects the first posture information of the mobile platform and provides it to the driving assistance system, so that the driving assistance system obtains the first posture information of the mobile platform.
  • The first posture information of the mobile platform can also be obtained in other ways, which is not limited.
  • The posture sensor is a high-performance three-dimensional motion attitude measurement system, which can include a three-axis gyroscope, a three-axis accelerometer (i.e., an IMU), a three-axis electronic compass, and other auxiliary motion sensors; through an embedded processor it outputs calibrated sensor data such as angular velocity, acceleration, and magnetic data, from which the posture information can be measured. The manner of acquiring the posture information is not restricted.
  • As to determining the space plane corresponding to the target object according to the first posture information: after the first posture information of the mobile platform is obtained, the space plane can be determined according to it in the conventional manner, which is not repeated here.
  • Step 203: Determine the relative pose of the space plane and the photographing device.
  • The relative pose refers to the pose of the photographing device relative to the space plane, and can also be understood as the external parameters (i.e., the positional relationship) of the photographing device relative to the space plane.
  • The relative pose may include, but is not limited to: the pitch angle, the roll angle, and the yaw angle of the photographing device relative to the space plane, the height of the photographing device relative to the space plane, and the translation of the photographing device relative to the space plane.
  • Step 204: Acquire the projection matrix corresponding to the head-up image according to the relative pose.
  • A target rotation matrix may be determined according to the relative pose;
  • target rotation parameters may be obtained according to the target rotation matrix;
  • the projection matrix corresponding to the head-up image may be obtained according to the relative pose and the target rotation parameters.
  • Step 205: Convert the head-up image into a top-down image according to the projection matrix.
  • For each first pixel in the head-up image, the position information of the first pixel is converted into the position information of a second pixel in the top-down image according to the projection matrix; on this basis, the top-down image can be obtained according to the position information of each second pixel.
  • This conversion may include, but is not limited to: obtaining the inverse matrix corresponding to the projection matrix, and converting the position information of the first pixel into the position information of the second pixel according to the inverse matrix; that is, each first pixel corresponds to one second pixel.
  • An embodiment of the present invention proposes an image processing method, which can be applied to a driving assistance system, and the driving assistance system may include at least one photographing device.
  • The driving assistance system can also be carried on driving assistance equipment (such as an ADAS device), and the driving assistance equipment is installed on a mobile platform (such as an unmanned vehicle or an ordinary vehicle); this is only an example application scenario of the present invention.
  • The driving assistance system can also be mounted on other vehicles, and there is no restriction on this.
  • Referring to FIG. 3, the method may include:
  • Step 301: Obtain a head-up image containing a target object through the photographing device.
  • A head-up image in at least one of the directions in front of, behind, to the left of, or to the right of the driving assistance equipment may be acquired through the photographing device, and the head-up image contains the target object.
  • Step 302: Determine the space plane corresponding to the target object according to the second posture information of the driving assistance equipment.
  • The space plane refers to the position plane of the target object, that is, the road surface or the ground, in the world coordinate system.
  • The second posture information of the driving assistance equipment may be acquired, and the space plane determined according to the second posture information.
  • The driving assistance equipment may include a posture sensor, which collects the second posture information of the driving assistance equipment and provides it to the driving assistance system, so that the driving assistance system obtains the second posture information of the driving assistance equipment.
  • Alternatively, the mobile platform may include a posture sensor, which collects the first posture information of the mobile platform and provides it to the driving assistance system.
  • The driving assistance system may then use the first posture information of the mobile platform as the second posture information of the driving assistance equipment.
  • The second posture information can also be obtained in other ways, which is not limited.
  • Step 303: Determine the relative pose of the space plane and the photographing device.
  • The relative pose refers to the pose of the photographing device relative to the space plane, and can also be understood as the external parameters (i.e., the positional relationship) of the photographing device relative to the space plane.
  • The relative pose may include, but is not limited to: the pitch angle, the roll angle, and the yaw angle of the photographing device relative to the space plane, the height of the photographing device relative to the space plane, and the translation of the photographing device relative to the space plane.
  • Step 304: Obtain the projection matrix corresponding to the head-up image according to the relative pose.
  • A target rotation matrix may be determined according to the relative pose;
  • target rotation parameters may be obtained according to the target rotation matrix;
  • the projection matrix corresponding to the head-up image may be obtained according to the relative pose and the target rotation parameters.
  • Step 305: Convert the head-up image into a top-down image according to the projection matrix.
  • For each first pixel in the head-up image, the position information of the first pixel is converted into the position information of a second pixel in the top-down image according to the projection matrix; on this basis, the top-down image can be obtained according to the position information of each second pixel.
  • This conversion may include, but is not limited to: obtaining the inverse matrix corresponding to the projection matrix, and converting the position information of the first pixel into the position information of the second pixel according to the inverse matrix; that is, each first pixel corresponds to one second pixel.
  • Embodiment 4 is described below, taking the mobile platform to be a vehicle and the photographing device to be a camera.
  • A traditional lane line algorithm can collect a head-up image through the camera and use the head-up image to detect and locate the lane line.
  • In FIG. 4A, the image on the left is a schematic diagram of a head-up image: the road surface arrows and the lane lines are distorted, and their shapes depend on the position of the vehicle, so lane line detection and positioning cannot be performed correctly based on the left head-up image of FIG. 4A.
  • In this embodiment, by contrast, the head-up image is converted into a top-down image, and the top-down image is used to detect and locate the lane line.
  • In FIG. 4A, the image on the right is a schematic diagram of a top-down image: the road surface marker arrows and the lane lines are restored to their true scale.
  • The positions of points on the road surface correspond directly to their real positions, and the positional relationship between any point and the vehicle can be obtained directly, which meets the requirements of ADAS and autonomous driving functions. Obviously, lane line detection and positioning can be performed correctly based on the top-down image on the right of FIG. 4A.
  • Moreover, the accuracy of road surface marker recognition can be improved, and a method for locating road surface markers (including lane lines) is provided, thereby assisting positioning.
  • Converting the head-up image into a top-down image can be implemented based on the geometry of computer vision, i.e., based on a homography.
  • Treating the head-up image as an image of the space plane, the shape of the top-down image depends on the true shape of the space plane in the head-up image, the internal parameters of the camera, and the external parameters of the camera (that is, the position of the camera relative to the space plane). Therefore, pixels in the head-up image can be mapped directly to the top-down image according to the internal and external parameters of the camera, so that the top-down image corresponds to the true scale of the space plane, improving the accuracy of lane line recognition and providing an accurate means of locating lane lines.
  • FIG. 4B is a schematic diagram of the relationship between the target object, the space plane, and the camera.
  • The space plane is a plane including the target object, and the plane in which the camera lies may differ from the space plane.
  • The target object may be a road (road surface or ground) containing lane lines as shown in the figure, and the space plane may be the plane in which the target object, i.e., the road surface, lies.
  • The picture actually captured by the camera is shown in the lower right corner of FIG. 4B, corresponding to the head-up image on the left of FIG. 4A.
  • The homography can be expressed by the formula below, where (u, v) is a pixel in the head-up image, i.e., a pixel of the space plane; s is the normalization coefficient; M is the camera intrinsic matrix; [r_1 r_2 r_3 t] are the external parameters of the camera with respect to the space plane, i.e., their positional relationship, with r_1, r_2, and r_3 being 3x1 column vectors that form the rotation matrix, and t a 3x1 column vector representing the translation from the camera to the object plane (the rotation matrix formed by r_1, r_2, r_3 together with the translation t constitutes the external parameters of the camera with respect to the space plane); (X, Y) is a pixel in the top-down image, i.e., a pixel in the image coordinate system:

$$ s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = M \,[\, r_1 \ \ r_2 \ \ r_3 \ \ t \,] \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} $$

  • A pixel of the top-down image may in general be (X, Y, Z); but since the target object lies in a plane, Z is 0, and the product of r_3 and Z is 0. After rewriting the homography formula, r_3 and Z can therefore be eliminated, finally giving:

$$ s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = M \,[\, r_1 \ \ r_2 \ \ t \,] \begin{bmatrix} X \\ Y \\ 1 \end{bmatrix} $$
  • In this scenario, the image processing method in the embodiment of the present invention may include:
  • Step a1: Obtain a head-up image containing a target object through the camera.
  • Each pixel in the head-up image is called a first pixel, and each first pixel can be written as (u, v) above.
  • Step a2: Determine the space plane corresponding to the target object.
  • The space plane refers to the position plane, in the world coordinate system, of the target object, i.e., of the road surface or ground on which it lies.
  • Step a3: Determine the relative pose of the space plane and the camera.
  • The relative pose can be the external parameters of the camera with respect to the space plane (i.e., their positional relationship), such as the pitch angle of the camera relative to the space plane, the roll angle of the camera relative to the space plane, the yaw angle of the camera relative to the space plane, the height of the camera relative to the space plane, and the translation parameter of the camera relative to the space plane.
  • Step a4: Determine the target rotation matrix according to the relative pose.
  • From the relative pose, the pitch angle of the camera relative to the space plane, the roll angle of the camera relative to the space plane, and the yaw angle of the camera relative to the space plane can be determined.
  • The first rotation matrix R_x can be determined from the camera's rotation angle about the pitch axis, the second rotation matrix R_y from its rotation angle about the roll axis, and the third rotation matrix R_z from its rotation angle about the yaw axis, each according to the formulas given in the description.
  • After the first, second, and third rotation matrices are obtained, the target rotation matrix R can be determined from them.
  • Step a5: Obtain the target rotation parameters according to the target rotation matrix.
  • The first column vector of the target rotation matrix R can be determined as the first rotation parameter,
  • the second column vector of the target rotation matrix R can be determined as the second rotation parameter,
  • and the first rotation parameter and the second rotation parameter are determined as the target rotation parameters.
  • The first rotation parameter is r_1 in the formula above,
  • r_1 is a 3x1 column vector,
  • and the second rotation parameter is r_2 in the formula above; r_2 is a 3x1 column vector.
  • Step a6: Obtain the projection matrix from the target rotation parameters r_1 and r_2, the normalization coefficient, the camera's intrinsic matrix, and the translation parameter t.
  • The projection matrix is H in the formula above,
  • the normalization coefficient is s in the formula above,
  • and with these quantities known the projection matrix H can be determined from H = sM[r_1 r_2 t].
  • In the intrinsic matrix M, f_x and f_y represent the focal length of the camera, and c_x and c_y represent the position at which the optical axis of the camera lens passes through the imaging sensor; f_x, f_y, c_x, and c_y are all known values, and this is not restricted.
  • Step a7: The head-up image can be converted into a top-down image according to the projection matrix.
  • For each first pixel (u, v) in the head-up image, its position information can be converted according to the projection matrix H into the position information of a second pixel (X, Y) in the top-down image, and the top-down image is obtained according to the position information of each second pixel (X, Y); that is, the second pixels make up the top-down image.
  • Based on the same concept as the above method, an embodiment of the present invention also provides a driving assistance device 50 that includes at least one photographing device 51, a processor 52, and a memory 53; the driving assistance device 50 is provided on a vehicle and communicates with the vehicle; the memory 53 is used to store computer instructions executable by the processor;
  • the photographing device 51 is configured to collect a head-up image including a target object and send the head-up image including the target object to the processor 52;
  • the processor 52 is configured to read computer instructions from the memory 53 to implement: acquiring the head-up image containing the target object from the photographing device 51; determining a space plane corresponding to the target object; determining the relative pose of the space plane and the photographing device; and converting the head-up image into a top-down image according to the relative pose.
  • The photographing device 51 is configured to acquire the head-up image in at least one of the directions in front of, behind, to the left of, or to the right of the driving assistance device.
  • When determining the space plane corresponding to the target object, the processor 52 is specifically configured to: acquire the second posture information of the driving assistance device; and determine the space plane according to the second posture information.
  • When converting the head-up image into a top-down image according to the relative pose, the processor 52 is specifically configured to: obtain a projection matrix corresponding to the head-up image according to the relative pose; and convert the head-up image into the top-down image according to the projection matrix.
  • When obtaining the projection matrix corresponding to the head-up image according to the relative pose, the processor 52 is specifically configured to: determine a target rotation matrix according to the relative pose; obtain target rotation parameters according to the target rotation matrix; and obtain the projection matrix according to the relative pose and the target rotation parameters.
  • The relative pose includes the rotation angle of the photographing device about the pitch axis, the rotation angle about the roll axis, and the rotation angle about the yaw axis. When determining the target rotation matrix according to the relative pose, the processor 52 is specifically configured to: determine a first rotation matrix according to the rotation angle of the photographing device about the pitch axis; determine a second rotation matrix according to the rotation angle about the roll axis; determine a third rotation matrix according to the rotation angle about the yaw axis; and determine the target rotation matrix according to the first, second, and third rotation matrices.
  • When obtaining the target rotation parameters according to the target rotation matrix, the processor 52 is specifically configured to: determine the first column vector of the target rotation matrix as the first rotation parameter; determine the second column vector as the second rotation parameter; and determine the first and second rotation parameters as the target rotation parameters.
  • The relative pose also includes the translation parameter of the space plane and the photographing device. When obtaining the projection matrix according to the relative pose and the target rotation parameters, the processor 52 is specifically configured to: obtain the projection matrix according to the target rotation parameters, the normalization coefficient, the internal parameter matrix of the photographing device, and the translation parameter of the space plane and the photographing device.
  • When converting the head-up image into a top-down image according to the projection matrix, the processor 52 is specifically configured to: for each first pixel in the head-up image, convert the position information of the first pixel into the position information of a second pixel in the top-down image according to the projection matrix; and obtain the top-down image according to the position information of each second pixel.
  • When converting the position information of the first pixel into the position information of the second pixel in the top-down image according to the projection matrix, the processor 52 is specifically configured to: obtain the inverse matrix corresponding to the projection matrix, and convert the position information of the first pixel into the position information of the second pixel according to the inverse matrix.
  • Based on the same concept as the above method, an embodiment of the present invention also provides a vehicle equipped with a driving assistance system. The vehicle includes at least one photographing device, a processor, and a memory; the memory is used to store computer instructions executable by the processor; the photographing device is used to collect a head-up image containing a target object and send the head-up image containing the target object to the processor;
  • the processor is configured to read computer instructions from the memory to implement: acquiring the head-up image containing the target object from the photographing device; determining a space plane corresponding to the target object; determining the relative pose of the space plane and the photographing device; and converting the head-up image into a top-down image according to the relative pose.
  • The photographing device is configured to acquire the head-up image in at least one of the directions in front of, behind, to the left of, or to the right of the vehicle.
  • When determining the space plane corresponding to the target object, the processor is specifically configured to: acquire first posture information of the vehicle; and determine the space plane according to the first posture information.
  • When converting the head-up image into a top-down image according to the relative pose, the processor is specifically configured to: obtain a projection matrix corresponding to the head-up image according to the relative pose; and convert the head-up image into the top-down image according to the projection matrix.
  • When obtaining the projection matrix corresponding to the head-up image according to the relative pose, the processor is specifically configured to: determine a target rotation matrix according to the relative pose; obtain target rotation parameters according to the target rotation matrix; and obtain the projection matrix according to the relative pose and the target rotation parameters.
  • The relative pose includes the rotation angle of the photographing device about the pitch axis, the rotation angle about the roll axis, and the rotation angle about the yaw axis. When determining the target rotation matrix according to the relative pose, the processor is specifically configured to: determine a first rotation matrix according to the rotation angle of the photographing device about the pitch axis; determine a second rotation matrix according to the rotation angle about the roll axis; determine a third rotation matrix according to the rotation angle about the yaw axis; and determine the target rotation matrix according to the first, second, and third rotation matrices.
  • When obtaining the target rotation parameters according to the target rotation matrix, the processor is specifically configured to: determine the first column vector of the target rotation matrix as the first rotation parameter; determine the second column vector as the second rotation parameter; and determine the first and second rotation parameters as the target rotation parameters.
  • The relative pose also includes the translation parameter of the space plane and the photographing device. When obtaining the projection matrix according to the relative pose and the target rotation parameters, the processor is specifically configured to: obtain the projection matrix according to the target rotation parameters, the normalization coefficient, the internal parameter matrix of the photographing device, and the translation parameter of the space plane and the photographing device.
  • When converting the head-up image into a top-down image according to the projection matrix, the processor is specifically configured to: for each first pixel in the head-up image, convert the position information of the first pixel into the position information of a second pixel in the top-down image according to the projection matrix; and obtain the top-down image according to the position information of each second pixel.
  • When converting the position information of the first pixel into the position information of the second pixel in the top-down image according to the projection matrix, the processor is specifically configured to: obtain the inverse matrix corresponding to the projection matrix, and convert the position information of the first pixel into the position information of the second pixel according to the inverse matrix.
  • An embodiment of the present invention further provides a computer-readable storage medium, on which computer instructions are stored, and when the computer instructions are executed, the above image processing method is implemented.
  • The system, device, module, or unit described in the above embodiments may be implemented by a computer chip or an entity, or by a product having a certain function.
  • A typical implementation device is a computer, which may take the specific form of a personal computer, a laptop computer, a cellular phone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email transceiver, a game console, a tablet computer, a wearable device, or any combination of these devices.
  • The embodiments of the present invention may be provided as methods, systems, or computer program products. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the embodiments of the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
  • Each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, may be implemented by computer program instructions.
  • These computer program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by that processor produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • These computer program instructions can also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus,
  • and the instruction apparatus implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • These computer program instructions can also be loaded onto a computer or other programmable data processing device, so that a series of operational steps are performed on it to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Traffic Control Systems (AREA)

Abstract

An image processing method, a device, and a computer-readable storage medium. The method includes: acquiring, through a photographing device, a head-up image containing a target object; determining a space plane corresponding to the target object; determining the relative pose of the space plane and the photographing device; and converting the head-up image into a top-down image according to the relative pose. Applying the embodiments of the present invention can improve the accuracy of lane line detection and accurately locate the actual positional relationship between the lane line and the vehicle.

Description

Image processing method, device, and computer-readable storage medium

Technical Field

The embodiments of the present invention relate to the field of image processing technology, and in particular to an image processing method, a device, and a computer-readable storage medium.
Background

In fields such as autonomous driving and ADAS (Advanced Driver Assistance Systems), lane line algorithms play an important role; their accuracy directly affects the performance and reliability of the system, and they are an important prerequisite for autonomous vehicle control.

A lane line algorithm operates on two levels: detection of the lane line, and positioning of the lane line, i.e., calculating the actual positional relationship between the lane line and the vehicle. A traditional lane line detection algorithm collects a head-up image through a photographing device and uses the head-up image to detect the lane line. A traditional lane line positioning algorithm collects a head-up image through a photographing device and uses the head-up image to locate the lane line.

When the head-up image is used to detect the lane line, the detection result is inaccurate: the size and shape of the lane line in the head-up image are subject to perspective projection, producing a "near large, far small" effect, so some distant road surface markers are distorted in shape and cannot be detected correctly. When the head-up image is used to locate the lane line, the positioning result is inaccurate: the shape and size of road surface markers in the head-up image are coupled with the internal parameters of the photographing device and the positional relationship between the photographing device and the road surface, so the actual position of the lane line cannot be obtained directly from its position in the image.
Summary

The present invention provides an image processing method, a device, and a computer-readable storage medium, which can improve the accuracy of lane line detection and accurately locate the actual positional relationship between the lane line and the vehicle.

In a first aspect, the present invention provides a driving assistance device, which includes at least one photographing device, a processor, and a memory; the driving assistance device is provided on a vehicle and communicates with the vehicle; the memory is used to store computer instructions executable by the processor;

the photographing device is used to collect a head-up image containing a target object and send the head-up image containing the target object to the processor;

the processor is used to read computer instructions from the memory to implement:

acquiring the head-up image containing the target object from the photographing device;

determining a space plane corresponding to the target object;

determining the relative pose of the space plane and the photographing device;

converting the head-up image into a top-down image according to the relative pose.
In a second aspect, an embodiment of the present invention provides a vehicle equipped with a driving assistance system. The vehicle includes at least one photographing device, a processor, and a memory; the memory is used to store computer instructions executable by the processor; the photographing device is used to collect a head-up image containing a target object and send the head-up image containing the target object to the processor;

the processor is used to read computer instructions from the memory to implement:

acquiring the head-up image containing the target object from the photographing device;

determining a space plane corresponding to the target object;

determining the relative pose of the space plane and the photographing device;

converting the head-up image into a top-down image according to the relative pose.
In a third aspect, an embodiment of the present invention provides an image processing method applied to a driving assistance system, where the driving assistance system includes at least one photographing device, and the method includes:

acquiring a head-up image containing a target object through the photographing device;

determining a space plane corresponding to the target object;

determining the relative pose of the space plane and the photographing device;

converting the head-up image into a top-down image according to the relative pose.

In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium on which computer instructions are stored; when the computer instructions are executed, the above method is implemented.
Based on the above technical solutions, in the embodiments of the present invention the accuracy of lane line detection can be improved, and the actual positional relationship between the lane line and the vehicle can be accurately located. Specifically, the head-up image can be converted into a top-down image, and the top-down image used for lane line detection, improving the accuracy of the detection result; the top-down image can likewise be used for lane line positioning, improving the accuracy of the positioning result so that the actual position of the lane line is accurately known.
Brief Description of the Drawings

To explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments recorded in the present invention, and those of ordinary skill in the art can obtain other drawings from them.

FIG. 1 is a schematic diagram of an image processing method in an embodiment;

FIG. 2 is a schematic diagram of an image processing method in another embodiment;

FIG. 3 is a schematic diagram of an image processing method in another embodiment;

FIG. 4A is a schematic diagram of a head-up image and a top-down image of an image processing method in an embodiment;

FIG. 4B is a schematic diagram of the relationship between the target object, the space plane, and the camera in an embodiment;

FIG. 5 is a block diagram of a driving assistance device in an embodiment.
Detailed Description

The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work fall within the protection scope of the present invention. In addition, the following embodiments and the features in them may be combined with each other where no conflict arises.

The terminology used in the present invention is for the purpose of describing particular embodiments only and is not intended to limit the invention. The singular forms "a," "said," and "the" used in the present invention and the claims are also intended to include the plural forms, unless the context clearly indicates otherwise. It should be understood that the term "and/or" used herein refers to any and all possible combinations of one or more of the associated listed items.

Although the terms first, second, third, etc. may be used in the present invention to describe various information, the information should not be limited by these terms; these terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of the present invention, first information may also be called second information and, similarly, second information may also be called first information. Depending on the context, the word "if" may be interpreted as "when," "upon," or "in response to determining."
Embodiment 1:

An embodiment of the present invention proposes an image processing method that can be applied to a driving assistance system, where the driving assistance system may include at least one photographing device. The driving assistance system may be mounted on a mobile platform (such as an unmanned vehicle or an ordinary vehicle), or it may be carried on driving assistance equipment (such as an ADAS device) that is installed on a mobile platform (such as an unmanned vehicle or an ordinary vehicle). Of course, these are only two example application scenarios; the driving assistance system may also be carried on other vehicles, which is not limited.

Referring to FIG. 1, which is a schematic flowchart of the image processing method, the method may include:

Step 101: Obtain a head-up image containing a target object through the photographing device.

Specifically, if the driving assistance system is mounted on a mobile platform, the at least one photographing device is installed on the mobile platform, and a head-up image in at least one of the directions in front of, behind, to the left of, or to the right of the mobile platform can be acquired through the photographing device; the head-up image contains the target object.

If the driving assistance system is carried on driving assistance equipment, the at least one photographing device is provided on the driving assistance equipment, and a head-up image in at least one of the directions in front of, behind, to the left of, or to the right of the driving assistance equipment can be acquired through the photographing device; the head-up image contains the target object.
Step 102: Determine a space plane corresponding to the target object.

Specifically, if the driving assistance system is mounted on a mobile platform, the first posture information of the mobile platform (i.e., the current posture information of the mobile platform) may be acquired, and the space plane determined according to the first posture information. The space plane refers to the position plane of the target object (such as the road surface or the ground) in the world coordinate system, that is, the position of the space plane in the world coordinate system.

If the driving assistance system is carried on driving assistance equipment, the second posture information of the driving assistance equipment (i.e., its current posture information) may be acquired, and the space plane determined according to the second posture information. The space plane again refers to the position plane of the target object (such as the road surface or the ground) in the world coordinate system.

Step 103: Determine the relative pose of the space plane and the photographing device.

In one example, the relative pose refers to the pose of the photographing device relative to the space plane (such as the road surface or the ground), and can also be understood as the external parameters (i.e., the positional relationship) of the photographing device relative to the space plane. For example, the relative pose may include, but is not limited to: the pitch angle of the photographing device relative to the space plane, the roll angle of the photographing device relative to the space plane, the yaw angle of the photographing device relative to the space plane, the height of the photographing device relative to the space plane, and the translation parameter of the photographing device relative to the space plane.
Step 104: Convert the head-up image into a top-down image according to the relative pose.

Specifically, the projection matrix corresponding to the head-up image can be obtained according to the relative pose; for example, a target rotation matrix can be determined according to the relative pose, target rotation parameters obtained from the target rotation matrix, and the projection matrix corresponding to the head-up image obtained from the relative pose and the target rotation parameters. The head-up image can then be converted into a top-down image according to the projection matrix.

The relative pose includes the rotation angle of the photographing device about the pitch axis (i.e., the pitch angle of the photographing device relative to the space plane), about the roll axis (i.e., its roll angle relative to the space plane), and about the yaw axis (i.e., its yaw angle relative to the space plane). Based on this, determining the target rotation matrix according to the relative pose may include, but is not limited to: determining a first rotation matrix according to the rotation angle of the photographing device about the pitch axis; determining a second rotation matrix according to the rotation angle about the roll axis; determining a third rotation matrix according to the rotation angle about the yaw axis; and determining the target rotation matrix according to the first, second, and third rotation matrices.

The target rotation matrix may include three column vectors. Obtaining the target rotation parameters according to the target rotation matrix may include, but is not limited to: determining the first column vector of the target rotation matrix as the first rotation parameter and the second column vector as the second rotation parameter, and determining the first and second rotation parameters as the target rotation parameters.

The relative pose also includes the translation parameter of the space plane and the photographing device (i.e., the translation of the photographing device relative to the space plane). Obtaining the projection matrix according to the relative pose and the target rotation parameters may include, but is not limited to: obtaining the projection matrix according to the target rotation parameters, the normalization coefficient, the internal parameter matrix of the photographing device, and the translation parameter of the space plane and the photographing device.

In the above embodiment, converting the head-up image into a top-down image according to the projection matrix may include, but is not limited to: for each first pixel in the head-up image, converting the position information of the first pixel into the position information of a second pixel in the top-down image according to the projection matrix; on this basis, the top-down image can be obtained from the position information of each second pixel.

Converting the position information of the first pixel into the position information of the second pixel in the top-down image according to the projection matrix may include, but is not limited to: obtaining the inverse matrix corresponding to the projection matrix, and converting the position information of the first pixel into the position information of the second pixel according to the inverse matrix; that is, each first pixel corresponds to one second pixel.
In one example, after the head-up image is converted into a top-down image according to the relative pose, if the target object is a lane line, the lane line can be detected based on the top-down image.

In one example, after the head-up image is converted into a top-down image according to the relative pose, if the target object is a lane line, the lane line can be positioned based on the top-down image.

In summary, lane line detection can be performed based on the top-down image (rather than on the head-up image), improving the accuracy of lane line detection; and/or lane line positioning can be performed based on the top-down image (rather than on the head-up image), improving the accuracy of lane line positioning.

Based on the above technical solution, in the embodiment of the present invention the accuracy of lane line detection can be improved, and the actual positional relationship between the lane line and the vehicle can be accurately located. Specifically, the head-up image can be converted into a top-down image, and the top-down image used for lane line detection, improving the accuracy of the detection result; the top-down image can likewise be used for lane line positioning, improving the accuracy of the positioning result so that the actual position of the lane line is accurately known.
Embodiment 2:

An embodiment of the present invention proposes an image processing method that can be applied to a driving assistance system, where the driving assistance system may include at least one photographing device. The driving assistance system may be mounted on a mobile platform (such as an unmanned vehicle or an ordinary vehicle). Of course, this is only an example application scenario of the present invention; the driving assistance system may also be carried on other vehicles, which is not limited.

Referring to FIG. 2, which is a schematic flowchart of the image processing method, the method may include:

Step 201: Obtain a head-up image containing a target object through the photographing device.

Specifically, a head-up image in at least one of the directions in front of, behind, to the left of, or to the right of the mobile platform can be acquired through the photographing device, and the head-up image contains the target object.
Step 202: Determine the space plane corresponding to the target object according to the first posture information of the mobile platform.

Specifically, the first posture information of the mobile platform may be acquired, and the space plane determined according to the first posture information. The space plane refers to the position plane of the target object (such as the road surface or the ground) in the world coordinate system, that is, the position of the space plane in the world coordinate system.

In one example, as to the process of acquiring the first posture information of the mobile platform: the mobile platform may include a posture sensor, which collects the first posture information of the mobile platform and provides it to the driving assistance system, so that the driving assistance system obtains the first posture information of the mobile platform. Of course, the first posture information can also be obtained in other ways, which is not limited.

The posture sensor is a high-performance three-dimensional motion attitude measurement system, which may include a three-axis gyroscope, a three-axis accelerometer (i.e., an IMU), a three-axis electronic compass, and other auxiliary motion sensors. Through an embedded processor it outputs calibrated sensor data such as angular velocity, acceleration, and magnetic data, from which the posture information can be measured; the manner of acquiring the posture information is not restricted.
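As an illustration only (not part of the patent text), the sketch below shows one common way the output of such a posture sensor could be turned into the pitch, roll, and yaw angles used in the following steps; the quaternion convention, axis assignment, and rotation order are assumptions:

```python
import numpy as np

def quaternion_to_pitch_roll_yaw(w, x, y, z):
    """Convert a unit quaternion reported by a posture sensor into
    (pitch, roll, yaw) in radians, assuming a z-y-x rotation order and
    the convention used in this description that pitch is the rotation
    about the x axis, roll about the y axis, and yaw about the z axis."""
    pitch = np.arctan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y))
    # Clamp to guard against small numerical overshoots outside [-1, 1].
    roll = np.arcsin(np.clip(2 * (w * y - z * x), -1.0, 1.0))
    yaw = np.arctan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z))
    return pitch, roll, yaw
```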
In one example, as to the process of determining the space plane corresponding to the target object according to the first posture information: after the first posture information of the mobile platform is obtained, the space plane can be determined according to it in the conventional manner, which is not repeated here.

Step 203: Determine the relative pose of the space plane and the photographing device.

In one example, the relative pose refers to the pose of the photographing device relative to the space plane, and can also be understood as the external parameters (i.e., the positional relationship) of the photographing device relative to the space plane. For example, the relative pose may include, but is not limited to: the pitch angle, the roll angle, and the yaw angle of the photographing device relative to the space plane, the height of the photographing device relative to the space plane, and the translation of the photographing device relative to the space plane.

Step 204: Acquire the projection matrix corresponding to the head-up image according to the relative pose.

Specifically, a target rotation matrix may be determined according to the relative pose, target rotation parameters obtained from the target rotation matrix, and the projection matrix corresponding to the head-up image obtained from the relative pose and the target rotation parameters. The process of obtaining the projection matrix is described in detail in Embodiment 4 below.

Step 205: Convert the head-up image into a top-down image according to the projection matrix.

Specifically, for each first pixel in the head-up image, the position information of the first pixel is converted into the position information of a second pixel in the top-down image according to the projection matrix; on this basis, the top-down image can be obtained from the position information of each second pixel.

Converting the position information of the first pixel into the position information of the second pixel according to the projection matrix may include, but is not limited to: obtaining the inverse matrix corresponding to the projection matrix, and converting the position information of the first pixel into the position information of the second pixel according to the inverse matrix; that is, each first pixel corresponds to one second pixel.
Embodiment 3:

An embodiment of the present invention proposes an image processing method that can be applied to a driving assistance system, where the driving assistance system may include at least one photographing device. The driving assistance system may also be carried on driving assistance equipment (such as an ADAS device), with the driving assistance equipment installed on a mobile platform (such as an unmanned vehicle or an ordinary vehicle). Of course, this is only an example application scenario of the present invention; the driving assistance system may also be carried on other vehicles, which is not limited.

Referring to FIG. 3, which is a schematic flowchart of the image processing method, the method may include:

Step 301: Obtain a head-up image containing a target object through the photographing device.

Specifically, a head-up image in at least one of the directions in front of, behind, to the left of, or to the right of the driving assistance equipment can be acquired through the photographing device, and the head-up image contains the target object.

Step 302: Determine the space plane corresponding to the target object according to the second posture information of the driving assistance equipment. The space plane refers to the position plane of the target object, i.e., the road surface or the ground, in the world coordinate system.

Specifically, the second posture information of the driving assistance equipment may be acquired, and the space plane determined according to it. The driving assistance equipment may include a posture sensor, which collects the second posture information of the driving assistance equipment and provides it to the driving assistance system, so that the driving assistance system obtains the second posture information. Alternatively, the mobile platform may include a posture sensor, which collects the first posture information of the mobile platform and provides it to the driving assistance system; the driving assistance system may then use the first posture information of the mobile platform as the second posture information of the driving assistance equipment. Of course, the second posture information can also be obtained in other ways, which is not limited.
Step 303: Determine the relative pose of the space plane and the photographing device.

In one example, the relative pose refers to the pose of the photographing device relative to the space plane, and can also be understood as the external parameters (i.e., the positional relationship) of the photographing device relative to the space plane. For example, the relative pose may include, but is not limited to: the pitch angle, the roll angle, and the yaw angle of the photographing device relative to the space plane, the height of the photographing device relative to the space plane, and the translation of the photographing device relative to the space plane.

Step 304: Acquire the projection matrix corresponding to the head-up image according to the relative pose.

Specifically, a target rotation matrix may be determined according to the relative pose, target rotation parameters obtained from the target rotation matrix, and the projection matrix corresponding to the head-up image obtained from the relative pose and the target rotation parameters. The process of obtaining the projection matrix is described in detail in Embodiment 4 below.

Step 305: Convert the head-up image into a top-down image according to the projection matrix.

Specifically, for each first pixel in the head-up image, the position information of the first pixel is converted into the position information of a second pixel in the top-down image according to the projection matrix; on this basis, the top-down image can be obtained from the position information of each second pixel.

Converting the position information of the first pixel into the position information of the second pixel according to the projection matrix may include, but is not limited to: obtaining the inverse matrix corresponding to the projection matrix, and converting the position information of the first pixel into the position information of the second pixel according to the inverse matrix; that is, each first pixel corresponds to one second pixel.
Embodiment 4: The following description takes the mobile platform to be a vehicle and the photographing device to be a camera.

A traditional lane line algorithm can collect a head-up image through the camera and use the head-up image to detect and locate the lane line. Referring to FIG. 4A, the image on the left is a schematic head-up image: the road surface marker arrows and the lane lines are distorted, and their shapes depend on the position of the vehicle; obviously, lane line detection and positioning cannot be performed correctly based on the head-up image on the left of FIG. 4A. Unlike that approach, in this embodiment the head-up image is converted into a top-down image, and the top-down image is used to detect and locate the lane line. Referring again to FIG. 4A, the image on the right is a schematic top-down image: the road surface marker arrows and the lane lines are restored to their true scale, the positions of points on the road surface correspond directly to their real positions, and the positional relationship between any point and the vehicle can be obtained directly, which meets the requirements of ADAS and autonomous driving functions. Obviously, lane line detection and positioning can be performed correctly based on the top-down image on the right of FIG. 4A.

Moreover, by converting the head-up image into a top-down image, the accuracy of road surface marker recognition can be improved, and a method of locating road surface markers (including lane lines) is provided, thereby assisting positioning.

In one example, the conversion of the head-up image into a top-down image can be implemented based on the geometry of computer vision, i.e., based on a homography. Specifically, assume the head-up image is an image of the space plane and the top-down image is the image plane; the shape of the top-down image then depends on the true shape of the space plane in the head-up image, the internal parameters of the camera, and the external parameters of the camera (i.e., the positional relationship of the camera relative to the space plane). Therefore, pixels in the head-up image can be mapped directly to the top-down image according to the internal and external parameters of the camera, so that the top-down image corresponds to the true scale of the space plane, improving the accuracy of lane line recognition and providing an accurate means of locating lane lines.

Referring to FIG. 4B, which is a schematic diagram of the relationship between the target object, the space plane, and the camera: the space plane is the plane containing the target object, and the plane in which the camera lies may differ from the space plane. For example, the target object may be a road (road surface or ground) containing lane lines as shown in the figure, and the space plane may be the plane in which the target object, i.e., the road surface, lies. The picture actually captured by the camera is shown in the lower right corner of FIG. 4B, corresponding to the head-up image on the left of FIG. 4A.
In one example, the homography can be expressed by the following formula, where (u, v) is a pixel in the head-up image, i.e., a pixel of the space plane; s is the normalization coefficient; M is the camera intrinsic matrix; [r_1 r_2 r_3 t] are the external parameters of the camera with respect to the space plane, i.e., their positional relationship, with r_1, r_2, and r_3 being 3x1 column vectors that form the rotation matrix, and t a 3x1 column vector representing the translation from the camera to the object plane (the rotation matrix formed by r_1, r_2, r_3 together with the translation t constitutes the external parameters of the camera with respect to the space plane); (X, Y) is a pixel in the top-down image, i.e., a pixel in the image coordinate system.

$$ s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = M \,[\, r_1 \ \ r_2 \ \ r_3 \ \ t \,] \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} $$
In the above formula, a pixel of the top-down image may be written (X, Y, Z); however, since the target object lies in a plane, Z is 0, and the product of r_3 and Z is 0. That is, after rewriting the homography formula, r_3 and Z can be eliminated, finally giving the following formula.

$$ s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = M \,[\, r_1 \ \ r_2 \ \ t \,] \begin{bmatrix} X \\ Y \\ 1 \end{bmatrix} $$
In the above formula, let H = sM[r_1 r_2 t]; the formula can then be written as:

$$ \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = H \begin{bmatrix} X \\ Y \\ 1 \end{bmatrix} $$
Further, multiplying both sides by the inverse of H gives:

$$ \begin{bmatrix} X \\ Y \\ 1 \end{bmatrix} = H^{-1} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} $$

As can be seen from the above formula, when H and (u, v) are known, (X, Y) can be obtained.
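To make the relation concrete, here is a minimal sketch (illustrative only, not part of the patent text) of mapping a single head-up pixel (u, v) to its top-down position (X, Y) with numpy, assuming the 3x3 matrix H has been built as described below:

```python
import numpy as np

def headup_pixel_to_topdown(H, u, v):
    """Apply [X, Y, 1]^T = H^{-1} [u, v, 1]^T and dehomogenize."""
    p = np.linalg.inv(H) @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]  # (X, Y)
```

Note the division by the third component: in homogeneous coordinates H^{-1}[u, v, 1]^T is only proportional to [X, Y, 1]^T, so the result must be normalized.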
In the above application scenario, the image processing method in the embodiment of the present invention may include:

Step a1: Obtain a head-up image containing a target object through the camera. Each pixel in the head-up image is called a first pixel, and each first pixel can be written as (u, v) above.

Step a2: Determine the space plane corresponding to the target object. The space plane refers to the position plane, in the world coordinate system, of the target object, i.e., of the road surface or ground on which it lies.

Step a3: Determine the relative pose of the space plane and the camera.

The relative pose can be the external parameters of the camera with respect to the space plane (i.e., their positional relationship), such as the pitch angle of the camera relative to the space plane, the roll angle of the camera relative to the space plane, the yaw angle of the camera relative to the space plane, the height of the camera relative to the space plane, and the translation parameter of the camera relative to the space plane, i.e., t in the above formulas.
Step a4: Determine the target rotation matrix according to the relative pose.

For example, from the relative pose, the pitch angle of the camera relative to the space plane, the roll angle of the camera relative to the space plane, and the yaw angle of the camera relative to the space plane can be determined. Further, the first rotation matrix R_x can be determined from the camera's rotation angle about the pitch axis, the second rotation matrix R_y from its rotation angle about the roll axis, and the third rotation matrix R_z from its rotation angle about the yaw axis, according to the following formulas:

$$ R_x = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos(\mathrm{pitch}) & -\sin(\mathrm{pitch}) \\ 0 & \sin(\mathrm{pitch}) & \cos(\mathrm{pitch}) \end{bmatrix} $$

$$ R_y = \begin{bmatrix} \cos(\mathrm{roll}) & 0 & \sin(\mathrm{roll}) \\ 0 & 1 & 0 \\ -\sin(\mathrm{roll}) & 0 & \cos(\mathrm{roll}) \end{bmatrix} $$

$$ R_z = \begin{bmatrix} \cos(\mathrm{yaw}) & -\sin(\mathrm{yaw}) & 0 \\ \sin(\mathrm{yaw}) & \cos(\mathrm{yaw}) & 0 \\ 0 & 0 & 1 \end{bmatrix} $$
After the first rotation matrix, the second rotation matrix, and the third rotation matrix are obtained, the target rotation matrix R can be determined from them according to the following formula.

$$ R = R_z \, R_y \, R_x $$
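As an illustration (not part of the patent text), the following sketch builds R_x, R_y, and R_z and composes them into R as written above; the composition order R = R_z R_y R_x is an assumption, since the original formula image is not reproduced here and other conventions are possible:

```python
import numpy as np

def target_rotation_matrix(pitch, roll, yaw):
    """Build the target rotation matrix R from the camera's rotation
    angles (in radians) about the pitch, roll, and yaw axes."""
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])  # pitch axis
    Ry = np.array([[cr, 0, sr], [0, 1, 0], [-sr, 0, cr]])  # roll axis
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])  # yaw axis
    return Rz @ Ry @ Rx  # assumed composition order
```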
Step a5: Obtain the target rotation parameters according to the target rotation matrix.

For example, the first column vector of the target rotation matrix R can be determined as the first rotation parameter, the second column vector of R as the second rotation parameter, and the first and second rotation parameters together as the target rotation parameters. The first rotation parameter is r_1 in the above formulas and the second rotation parameter is r_2; each is a 3x1 column vector.
Step a6: Obtain the projection matrix from the target rotation parameters r_1 and r_2, the normalization coefficient, the camera intrinsic matrix, and the translation parameter t; the projection matrix is H in the above formulas.

The normalization coefficient is s in the above formulas, and the camera intrinsic matrix is M; referring to the formula H = sM[r_1 r_2 t], when the target rotation parameters r_1 and r_2, the normalization coefficient s, the camera intrinsic matrix M, and the translation parameter t are all known, the projection matrix H can be determined.

In the above formulas, the camera intrinsic matrix M can be

$$ M = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} $$

In this intrinsic matrix M, f_x and f_y represent the focal length of the camera, and c_x and c_y represent the position at which the optical axis of the camera lens passes through the imaging sensor; f_x, f_y, c_x, and c_y are all known values, and this is not restricted.
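For concreteness, a minimal sketch (illustrative only; the function and its arguments are not from the patent) of assembling H = sM[r_1 r_2 t] from the quantities above:

```python
import numpy as np

def projection_matrix(R, t, fx, fy, cx, cy, s=1.0):
    """Assemble H = s * M @ [r1 r2 t] from the target rotation matrix R,
    the translation parameter t (3-vector), the camera intrinsics, and
    the normalization coefficient s."""
    M = np.array([[fx, 0.0, cx],
                  [0.0, fy, cy],
                  [0.0, 0.0, 1.0]])
    r1, r2 = R[:, 0], R[:, 1]  # the target rotation parameters
    ext = np.column_stack([r1, r2, np.asarray(t, dtype=float)])
    return s * M @ ext  # 3x3 projection matrix H
```

Because homogeneous coordinates are defined up to scale, the choice of s only rescales H; the per-pixel normalization shown earlier removes it.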
Step a7: The head-up image can be converted into a top-down image according to the projection matrix.

Specifically, for each first pixel (u, v) in the head-up image, the position information of the first pixel can be converted according to the projection matrix H into the position information of a second pixel (X, Y) in the top-down image, and the top-down image is obtained according to the position information of each second pixel (X, Y); that is, the second pixels make up the top-down image. For example, based on the inverse matrix of the projection matrix H, the position information of the first pixel (u, v) can be converted into the position information of the second pixel (X, Y) using the formula above, which is not repeated here.
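A whole-image version of step a7 could look like the following sketch (illustrative only; the output size is an assumption). To avoid holes in the output it iterates over top-down pixels and samples the head-up image through the forward relation [u, v, 1]^T = H[X, Y, 1]^T, which is equivalent to mapping every head-up pixel through the inverse matrix of H:

```python
import numpy as np

def headup_to_topdown(headup, H, out_h=600, out_w=400):
    """Warp a head-up image (h x w x 3 array) into a top-down image
    using the projection matrix H of this embodiment."""
    in_h, in_w = headup.shape[:2]
    topdown = np.zeros((out_h, out_w, 3), dtype=headup.dtype)
    # Homogeneous grid of all top-down coordinates (X, Y, 1).
    X, Y = np.meshgrid(np.arange(out_w), np.arange(out_h))
    pts = np.stack([X.ravel(), Y.ravel(), np.ones(X.size)])
    uvw = H @ pts  # [u, v, w]^T = H [X, Y, 1]^T for every (X, Y)
    u = np.round(uvw[0] / uvw[2]).astype(int)
    v = np.round(uvw[1] / uvw[2]).astype(int)
    ok = (u >= 0) & (u < in_w) & (v >= 0) & (v < in_h)
    topdown[Y.ravel()[ok], X.ravel()[ok]] = headup[v[ok], u[ok]]
    return topdown
```

Nearest-neighbor sampling is used for brevity; bilinear interpolation (or an off-the-shelf perspective-warp routine) would give smoother results.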
Embodiment 5:

Based on the same concept as the above method, and referring to FIG. 5, an embodiment of the present invention further provides a driving assistance device 50, which includes at least one photographing device 51, a processor 52, and a memory 53. The driving assistance device 50 is provided on a vehicle and communicates with the vehicle; the memory 53 is used to store computer instructions executable by the processor;

the photographing device 51 is used to collect a head-up image containing a target object and send the head-up image containing the target object to the processor 52;

the processor 52 is used to read computer instructions from the memory 53 to implement:

acquiring the head-up image containing the target object from the photographing device 51;

determining a space plane corresponding to the target object;

determining the relative pose of the space plane and the photographing device;

converting the head-up image into a top-down image according to the relative pose.
The photographing device 51 is used to acquire the head-up image in at least one of the directions in front of, behind, to the left of, or to the right of the driving assistance device.

When determining the space plane corresponding to the target object, the processor 52 is specifically configured to:

acquire the second posture information of the driving assistance device;

determine the space plane according to the second posture information.
When converting the head-up image into a top-down image according to the relative pose, the processor 52 is specifically configured to: obtain the projection matrix corresponding to the head-up image according to the relative pose;

convert the head-up image into the top-down image according to the projection matrix.

When obtaining the projection matrix corresponding to the head-up image according to the relative pose, the processor 52 is specifically configured to: determine a target rotation matrix according to the relative pose;

obtain target rotation parameters according to the target rotation matrix;

obtain the projection matrix according to the relative pose and the target rotation parameters.
The relative pose includes the rotation angle of the photographing device about the pitch axis, the rotation angle about the roll axis, and the rotation angle about the yaw axis. When determining the target rotation matrix according to the relative pose, the processor 52 is specifically configured to: determine a first rotation matrix according to the rotation angle of the photographing device about the pitch axis;

determine a second rotation matrix according to the rotation angle of the photographing device about the roll axis;

determine a third rotation matrix according to the rotation angle of the photographing device about the yaw axis;

determine the target rotation matrix according to the first rotation matrix, the second rotation matrix, and the third rotation matrix.
When obtaining the target rotation parameters according to the target rotation matrix, the processor 52 is specifically configured to:

determine the first column vector of the target rotation matrix as the first rotation parameter;

determine the second column vector of the target rotation matrix as the second rotation parameter;

determine the first rotation parameter and the second rotation parameter as the target rotation parameters.
The relative pose also includes the translation parameter of the space plane and the photographing device. When obtaining the projection matrix according to the relative pose and the target rotation parameters, the processor 52 is specifically configured to: obtain the projection matrix according to the target rotation parameters, the normalization coefficient, the internal parameter matrix of the photographing device, and the translation parameter of the space plane and the photographing device.
When converting the head-up image into a top-down image according to the projection matrix, the processor 52 is specifically configured to: for each first pixel in the head-up image, convert the position information of the first pixel into the position information of a second pixel in the top-down image according to the projection matrix;

obtain the top-down image according to the position information of each second pixel.

When converting the position information of the first pixel into the position information of the second pixel in the top-down image according to the projection matrix, the processor 52 is specifically configured to:

obtain the inverse matrix corresponding to the projection matrix, and convert the position information of the first pixel into the position information of the second pixel in the top-down image according to the inverse matrix.
Embodiment 6:

Based on the same concept as the above method, an embodiment of the present invention further provides a vehicle equipped with a driving assistance system. The vehicle includes at least one photographing device, a processor, and a memory; the memory is used to store computer instructions executable by the processor; the photographing device is used to collect a head-up image containing a target object and send the head-up image containing the target object to the processor;

the processor is used to read computer instructions from the memory to implement:

acquiring the head-up image containing the target object from the photographing device;

determining a space plane corresponding to the target object;

determining the relative pose of the space plane and the photographing device;

converting the head-up image into a top-down image according to the relative pose.
The photographing device is used to acquire the head-up image in at least one of the directions in front of, behind, to the left of, or to the right of the vehicle.

When determining the space plane corresponding to the target object, the processor is specifically configured to: acquire first posture information of the vehicle; and determine the space plane according to the first posture information.

When converting the head-up image into a top-down image according to the relative pose, the processor is specifically configured to: obtain the projection matrix corresponding to the head-up image according to the relative pose;

convert the head-up image into the top-down image according to the projection matrix.
When obtaining the projection matrix corresponding to the head-up image according to the relative pose, the processor is specifically configured to: determine a target rotation matrix according to the relative pose;

obtain target rotation parameters according to the target rotation matrix;

obtain the projection matrix according to the relative pose and the target rotation parameters.
The relative pose includes the rotation angle of the photographing device about the pitch axis, the rotation angle about the roll axis, and the rotation angle about the yaw axis. When determining the target rotation matrix according to the relative pose, the processor is specifically configured to: determine a first rotation matrix according to the rotation angle of the photographing device about the pitch axis;

determine a second rotation matrix according to the rotation angle of the photographing device about the roll axis;

determine a third rotation matrix according to the rotation angle of the photographing device about the yaw axis;

determine the target rotation matrix according to the first rotation matrix, the second rotation matrix, and the third rotation matrix.
When obtaining the target rotation parameters according to the target rotation matrix, the processor is specifically configured to:

determine the first column vector of the target rotation matrix as the first rotation parameter;

determine the second column vector of the target rotation matrix as the second rotation parameter;

determine the first rotation parameter and the second rotation parameter as the target rotation parameters.
The relative pose also includes the translation parameter of the space plane and the photographing device. When obtaining the projection matrix according to the relative pose and the target rotation parameters, the processor is specifically configured to:

obtain the projection matrix according to the target rotation parameters, the normalization coefficient, the internal parameter matrix of the photographing device, and the translation parameter of the space plane and the photographing device.
When converting the head-up image into a top-down image according to the projection matrix, the processor is specifically configured to: for each first pixel in the head-up image, convert the position information of the first pixel into the position information of a second pixel in the top-down image according to the projection matrix;

obtain the top-down image according to the position information of each second pixel.

When converting the position information of the first pixel into the position information of the second pixel in the top-down image according to the projection matrix, the processor is specifically configured to:

obtain the inverse matrix corresponding to the projection matrix, and convert the position information of the first pixel into the position information of the second pixel in the top-down image according to the inverse matrix.
Embodiment 7:

An embodiment of the present invention further provides a computer-readable storage medium on which computer instructions are stored; when the computer instructions are executed, the above image processing method is implemented.

The system, device, module, or unit described in the above embodiments may be implemented by a computer chip or an entity, or by a product having a certain function. A typical implementation device is a computer, which may take the specific form of a personal computer, a laptop computer, a cellular phone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email transceiver, a game console, a tablet computer, a wearable device, or any combination of these devices.
For convenience of description, the above device is described in terms of functions divided into various units. Of course, when implementing the present invention, the functions of the units may be realized in one or more pieces of software and/or hardware.

Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, a system, or a computer program product. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the embodiments of the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
The above are only embodiments of the present invention and are not intended to limit it. For those skilled in the art, the present invention may have various modifications and changes. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the scope of the claims of the present invention.

Claims (35)

  1. A driving assistance device, characterized in that the driving assistance device comprises at least one photographing device, a processor, and a memory; the driving assistance device is provided on a vehicle and communicates with the vehicle; the memory is used to store computer instructions executable by the processor;
    the photographing device is used to collect a head-up image containing a target object and send the head-up image containing the target object to the processor;
    the processor is used to read computer instructions from the memory to implement:
    acquiring the head-up image containing the target object from the photographing device;
    determining a space plane corresponding to the target object;
    determining the relative pose of the space plane and the photographing device;
    converting the head-up image into a top-down image according to the relative pose.
  2. The device according to claim 1, characterized in that
    the photographing device is used to acquire the head-up image in at least one of the directions in front of, behind, to the left of, or to the right of the driving assistance device.
  3. The device according to claim 1, characterized in that
    when determining the space plane corresponding to the target object, the processor is specifically configured to:
    acquire second posture information of the driving assistance device;
    determine the space plane according to the second posture information.
  4. The device according to claim 1, characterized in that, when converting the head-up image into a top-down image according to the relative pose, the processor is specifically configured to:
    obtain a projection matrix corresponding to the head-up image according to the relative pose;
    convert the head-up image into the top-down image according to the projection matrix.
  5. The device according to claim 4, characterized in that, when obtaining the projection matrix corresponding to the head-up image according to the relative pose, the processor is specifically configured to:
    determine a target rotation matrix according to the relative pose;
    obtain target rotation parameters according to the target rotation matrix;
    obtain the projection matrix according to the relative pose and the target rotation parameters.
  6. The device according to claim 5, characterized in that the relative pose includes the rotation angle of the photographing device about the pitch axis, the rotation angle about the roll axis, and the rotation angle about the yaw axis; when determining the target rotation matrix according to the relative pose, the processor is specifically configured to:
    determine a first rotation matrix according to the rotation angle of the photographing device about the pitch axis;
    determine a second rotation matrix according to the rotation angle of the photographing device about the roll axis;
    determine a third rotation matrix according to the rotation angle of the photographing device about the yaw axis;
    determine the target rotation matrix according to the first rotation matrix, the second rotation matrix, and the third rotation matrix.
  7. The device according to claim 5, characterized in that,
    when obtaining the target rotation parameters according to the target rotation matrix, the processor is specifically configured to:
    determine the first column vector of the target rotation matrix as a first rotation parameter;
    determine the second column vector of the target rotation matrix as a second rotation parameter;
    determine the first rotation parameter and the second rotation parameter as the target rotation parameters.
  8. The device according to claim 5, characterized in that the relative pose further includes a translation parameter of the space plane and the photographing device; when obtaining the projection matrix according to the relative pose and the target rotation parameters, the processor is specifically configured to:
    obtain the projection matrix according to the target rotation parameters, the normalization coefficient, the internal parameter matrix of the photographing device, and the translation parameter of the space plane and the photographing device.
  9. The device according to claim 4, characterized in that, when converting the head-up image into a top-down image according to the projection matrix, the processor is specifically configured to:
    for each first pixel in the head-up image, convert the position information of the first pixel into the position information of a second pixel in the top-down image according to the projection matrix;
    obtain the top-down image according to the position information of each second pixel.
  10. The device according to claim 9, characterized in that,
    when converting the position information of the first pixel into the position information of the second pixel in the top-down image according to the projection matrix, the processor is specifically configured to:
    obtain the inverse matrix corresponding to the projection matrix, and convert the position information of the first pixel into the position information of the second pixel in the top-down image according to the inverse matrix.
  11. A vehicle equipped with a driving assistance system, wherein the vehicle comprises at least one shooting device, a processor, and a memory; the memory is configured to store computer instructions executable by the processor; and the shooting device is configured to collect a head-up image containing a target object and send the head-up image containing the target object to the processor;
    the processor is configured to read computer instructions from the memory to implement:
    acquiring the head-up image containing the target object from the shooting device;
    determining a spatial plane corresponding to the target object;
    determining a relative pose between the spatial plane and the shooting device; and
    converting the head-up image into a top-view image according to the relative pose.
  12. The vehicle according to claim 11, wherein the shooting device is configured to acquire the head-up image in at least one direction among the front, rear, left, and right of the vehicle.
  13. The vehicle according to claim 11, wherein,
    when determining the spatial plane corresponding to the target object, the processor is specifically configured to:
    acquire first pose information of the vehicle; and
    determine the spatial plane according to the first pose information.
  14. The vehicle according to claim 11, wherein, when converting the head-up image into a top-view image according to the relative pose, the processor is specifically configured to:
    acquire a projection matrix corresponding to the head-up image according to the relative pose; and
    convert the head-up image into a top-view image according to the projection matrix.
  15. The vehicle according to claim 14, wherein, when acquiring the projection matrix corresponding to the head-up image according to the relative pose, the processor is specifically configured to:
    determine a target rotation matrix according to the relative pose;
    acquire target rotation parameters according to the target rotation matrix; and
    acquire the projection matrix according to the relative pose and the target rotation parameters.
  16. The vehicle according to claim 15, wherein the relative pose includes the rotation angle of the shooting device about the pitch axis, the rotation angle about the roll axis, and the rotation angle about the yaw axis; when determining the target rotation matrix according to the relative pose, the processor is specifically configured to:
    determine a first rotation matrix according to the rotation angle of the shooting device about the pitch axis;
    determine a second rotation matrix according to the rotation angle of the shooting device about the roll axis;
    determine a third rotation matrix according to the rotation angle of the shooting device about the yaw axis; and
    determine the target rotation matrix according to the first rotation matrix, the second rotation matrix, and the third rotation matrix.
  17. The vehicle according to claim 15, wherein,
    when acquiring the target rotation parameters according to the target rotation matrix, the processor is specifically configured to:
    determine the first column vector of the target rotation matrix as a first rotation parameter;
    determine the second column vector of the target rotation matrix as a second rotation parameter; and
    determine the first rotation parameter and the second rotation parameter as the target rotation parameters.
  18. The vehicle according to claim 15, wherein the relative pose further includes a translation parameter between the spatial plane and the shooting device; when acquiring the projection matrix according to the relative pose and the target rotation parameters, the processor is specifically configured to:
    acquire the projection matrix according to the target rotation parameters, a normalization coefficient, the intrinsic parameter matrix of the shooting device, and the translation parameter between the spatial plane and the shooting device.
  19. The vehicle according to claim 14, wherein, when converting the head-up image into a top-view image according to the projection matrix, the processor is specifically configured to:
    for each first pixel in the head-up image, convert the position information of the first pixel into position information of a second pixel in the top-view image according to the projection matrix; and
    obtain the top-view image according to the position information of each second pixel.
  20. The vehicle according to claim 19, wherein,
    when converting the position information of the first pixel into the position information of the second pixel in the top-view image according to the projection matrix, the processor is specifically configured to:
    acquire the inverse matrix corresponding to the projection matrix, and convert the position information of the first pixel into the position information of the second pixel in the top-view image according to the inverse matrix.
  21. An image processing method, wherein the method is applied to a driving assistance system, the driving assistance system comprises at least one shooting device, and the method comprises:
    acquiring, through the shooting device, a head-up image containing a target object;
    determining a spatial plane corresponding to the target object;
    determining a relative pose between the spatial plane and the shooting device; and
    converting the head-up image into a top-view image according to the relative pose.
  22. The method according to claim 21, wherein
    the driving assistance system is mounted on a mobile platform; and
    the at least one shooting device is disposed on the mobile platform and is configured to acquire the head-up image in at least one direction among the front, rear, left, and right of the mobile platform.
  23. The method according to claim 22, wherein
    the determining the spatial plane corresponding to the target object comprises:
    acquiring first pose information of the mobile platform; and
    determining the spatial plane according to the first pose information.
  24. The method according to claim 21, wherein
    the driving assistance system is mounted on a driving assistance device; and
    the at least one shooting device is disposed on the driving assistance device and is configured to acquire the head-up image in at least one direction among the front, rear, left, and right of the driving assistance device.
  25. The method according to claim 24, wherein
    the determining the spatial plane corresponding to the target object further comprises:
    acquiring second pose information of the driving assistance device; and
    determining the spatial plane according to the second pose information.
  26. The method according to claim 21, wherein
    the converting the head-up image into a top-view image according to the relative pose comprises:
    acquiring a projection matrix corresponding to the head-up image according to the relative pose; and
    converting the head-up image into a top-view image according to the projection matrix.
  27. The method according to claim 26, wherein
    the acquiring the projection matrix corresponding to the head-up image according to the relative pose comprises:
    determining a target rotation matrix according to the relative pose;
    acquiring target rotation parameters according to the target rotation matrix; and
    acquiring the projection matrix according to the relative pose and the target rotation parameters.
  28. The method according to claim 27, wherein the relative pose includes the rotation angle of the shooting device about the pitch axis, the rotation angle about the roll axis, and the rotation angle about the yaw axis; and
    the determining the target rotation matrix according to the relative pose comprises:
    determining a first rotation matrix according to the rotation angle of the shooting device about the pitch axis;
    determining a second rotation matrix according to the rotation angle of the shooting device about the roll axis;
    determining a third rotation matrix according to the rotation angle of the shooting device about the yaw axis; and
    determining the target rotation matrix according to the first rotation matrix, the second rotation matrix, and the third rotation matrix.
  29. The method according to claim 27, wherein
    the acquiring the target rotation parameters according to the target rotation matrix comprises:
    determining the first column vector of the target rotation matrix as a first rotation parameter;
    determining the second column vector of the target rotation matrix as a second rotation parameter; and
    determining the first rotation parameter and the second rotation parameter as the target rotation parameters.
  30. The method according to claim 27, wherein
    the relative pose further includes a translation parameter between the spatial plane and the shooting device; and
    the acquiring the projection matrix according to the relative pose and the target rotation parameters comprises:
    acquiring the projection matrix according to the target rotation parameters, a normalization coefficient, the intrinsic parameter matrix of the shooting device, and the translation parameter between the spatial plane and the shooting device.
  31. The method according to claim 26, wherein
    the converting the head-up image into a top-view image according to the projection matrix comprises:
    for each first pixel in the head-up image, converting the position information of the first pixel into position information of a second pixel in the top-view image according to the projection matrix; and
    obtaining the top-view image according to the position information of each second pixel.
  32. The method according to claim 31, wherein the converting the position information of the first pixel into the position information of the second pixel in the top-view image according to the projection matrix comprises:
    acquiring the inverse matrix corresponding to the projection matrix, and converting the position information of the first pixel into the position information of the second pixel in the top-view image according to the inverse matrix.
  33. The method according to claim 21, wherein,
    after the converting the head-up image into a top-view image according to the relative pose, the method further comprises:
    if the target object is a lane line, detecting the lane line according to the top-view image.
  34. The method according to claim 21, wherein,
    after the converting the head-up image into a top-view image according to the relative pose, the method further comprises:
    if the target object is a lane line, locating the lane line according to the top-view image.
  35. A computer-readable storage medium, wherein computer instructions are stored on the computer-readable storage medium; when the computer instructions are executed, the method according to any one of claims 21 to 34 is implemented.