
CN111046809B - Obstacle detection method, device, equipment and computer readable storage medium - Google Patents

Obstacle detection method, device, equipment and computer readable storage medium Download PDF

Info

Publication number
CN111046809B
CN111046809B CN201911293303.6A
Authority
CN
China
Prior art keywords
image frame
previous
current
current image
obstacle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911293303.6A
Other languages
Chinese (zh)
Other versions
CN111046809A (en)
Inventor
常嘉义
梁艳菊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kunshan Microelectronics Technology Research Institute
Original Assignee
Kunshan Microelectronics Technology Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kunshan Microelectronics Technology Research Institute filed Critical Kunshan Microelectronics Technology Research Institute
Priority to CN201911293303.6A priority Critical patent/CN111046809B/en
Publication of CN111046809A publication Critical patent/CN111046809A/en
Application granted granted Critical
Publication of CN111046809B publication Critical patent/CN111046809B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application discloses an obstacle detection method, which comprises: acquiring a current image frame through real-time image acquisition; processing the current image frame and the previous image frame with a preset matching algorithm to obtain the vehicle pose corresponding to the current image frame; mapping the previous image frame into the coordinate system corresponding to the current image frame in combination with the vehicle pose to obtain a transformed image; and calculating the pixel differences between the current image frame and the transformed image, and marking pixels whose pixel difference exceeds a standard threshold as obstacle areas so as to determine obstacles. The obstacle detection method has strong applicability and high accuracy, and can also greatly reduce the waste of labor and time costs. The application further discloses an obstacle detection apparatus, a device and a computer-readable storage medium, which provide the same beneficial effects.

Description

Obstacle detection method, device, equipment and computer readable storage medium
Technical Field
The present application relates to the field of internet technologies, and in particular, to a method for detecting an obstacle, and also relates to an apparatus, a device, and a computer readable storage medium for detecting an obstacle.
Background
During the running of an autonomous vehicle, intelligent collision avoidance needs to be carried out spontaneously; the premise of collision avoidance is that the obstacle ahead is detected and avoided, so that driving safety is ensured.
Existing obstacle detection methods generally use machine learning and deep learning, implemented as follows: a model is trained in advance on a large amount of sample data to obtain a corresponding trained model, and obstacle detection is then realized with that model. However, this implementation requires a large amount of sample data to be collected in advance, causing considerable waste of labor and time costs; in addition, such conventional obstacle detection methods can only detect specific types of obstacles, so their applicability and accuracy are low.
Therefore, how to provide an obstacle detection method with strong applicability and high accuracy, while effectively reducing the waste of labor and time costs, is a problem to be solved by those skilled in the art.
Disclosure of Invention
The aim of the application is to provide an obstacle detection method which has strong applicability and high accuracy and can greatly reduce the waste of labor and time costs; another object of the application is to provide an obstacle detection apparatus, a device and a computer-readable storage medium, which also have the above beneficial effects.
In order to solve the above technical problems, the present application provides an obstacle detection method, including:
acquiring a current image frame through real-time image acquisition;
processing the current image frame and the previous image frame through a preset matching algorithm to obtain a vehicle pose corresponding to the current image frame;
mapping the previous image frame to a coordinate system corresponding to the current image frame by combining the vehicle pose to obtain a transformation image;
and calculating pixel difference values of the current image frame and the transformation image, and marking pixels with the pixel difference values exceeding a standard threshold value as obstacle areas so as to determine obstacles.
Preferably, the processing the current image frame and the previous image frame by a preset matching algorithm to obtain a vehicle pose corresponding to the current image frame includes:
detecting the characteristic points of the previous image frame to obtain a previous characteristic point set;
detecting characteristic points of the current image frame to obtain a current characteristic point set;
and carrying out feature point matching on the previous feature point set and the current feature point set to obtain the vehicle pose corresponding to the current image frame.
Preferably, the performing feature point matching on the previous feature point set and the current feature point set to obtain a vehicle pose corresponding to the current image frame includes:
constructing a robust kernel function related to the vehicle pose corresponding to the current image frame, the previous feature point set and the current feature point set;
and performing function optimization on the robust kernel function to obtain the corresponding vehicle pose when the robust kernel function takes the minimum value.
Preferably, the mapping the previous image frame to the coordinate system corresponding to the current image frame in combination with the vehicle pose to obtain a transformed image includes:
acquiring a pixel point set of the previous image frame based on an image coordinate system of the previous image frame;
mapping the pixel point set to a vehicle coordinate system to obtain a vehicle coordinate set;
mapping the vehicle coordinate set to a camera coordinate system to obtain a camera coordinate set;
mapping the camera coordinate set to a world coordinate system to obtain a world coordinate set;
mapping the world coordinate set to an image coordinate system corresponding to the current image frame to obtain a transformation coordinate set;
the transformed image is obtained from the transformed coordinate set.
Preferably, the obstacle detection method further includes:
and when the obstacle area exceeds the safety area, sending an alarm instruction to alarm equipment.
In order to solve the technical problem, the present application further provides an obstacle detection device, including:
the image acquisition module is used for acquiring a current image frame through real-time image acquisition;
the pose calculating module is used for processing the current image frame and the previous image frame through a preset matching algorithm to obtain the vehicle pose corresponding to the current image frame;
the coordinate conversion module is used for combining the vehicle pose, mapping the previous image frame to a coordinate system corresponding to the current image frame, and obtaining a transformation image;
and the obstacle determining module is used for calculating pixel difference values of the current image frame and the transformation image, and marking pixels with the pixel difference values exceeding a standard threshold value as obstacle areas so as to determine obstacles.
Preferably, the pose calculating module includes:
the first detection unit is used for detecting the characteristic points of the previous image frame to obtain a previous characteristic point set;
the second detection unit is used for detecting the characteristic points of the current image frame to obtain a current characteristic point set;
and the characteristic point matching unit is used for matching the characteristic points of the previous characteristic point set and the current characteristic point set to obtain the vehicle pose corresponding to the current image frame.
Preferably, the obstacle detecting apparatus further includes:
and the alarm module is used for sending an alarm instruction to alarm equipment when the obstacle area exceeds the safety area.
In order to solve the above technical problem, the present application also provides an obstacle detection apparatus, including:
a memory for storing a computer program;
and a processor for implementing any one of the above obstacle detection methods when executing the computer program.
To solve the above technical problem, the present application further provides a computer readable storage medium, on which a computer program is stored, which when executed by a processor, implements the steps of any one of the above obstacle detection methods.
The obstacle detection method provided by the application comprises the steps of acquiring a current image frame through real-time image acquisition; processing the current image frame and the previous image frame through a preset matching algorithm to obtain a vehicle pose corresponding to the current image frame; mapping the previous image frame to a coordinate system corresponding to the current image frame by combining the vehicle pose to obtain a transformation image; and calculating pixel difference values of the current image frame and the transformation image, and marking pixels with the pixel difference values exceeding a standard threshold value as obstacle areas so as to determine obstacles.
Therefore, in the obstacle detection method provided by the application, image frames are continuously acquired through the camera, and a transformed image of the previous-moment image frame under the viewing angle of the current-moment image frame is obtained through feature point matching, coordinate conversion and the like; pixel difference calculation is then carried out between the transformed image and the current image frame, and pixels whose pixel difference exceeds the standard threshold correspond to obstacles, so that obstacle detection is completed. Meanwhile, the method can realize obstacle detection in various scenes, places no limitation on the types of obstacles, has strong applicability and high accuracy, and can effectively ensure driving safety.
The obstacle detection device, the equipment and the computer-readable storage medium provided by the application have the above beneficial effects, which are not described in detail here.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present application, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of an obstacle detection method according to the present application;
fig. 2 is a schematic structural diagram of an obstacle detecting apparatus according to the present application;
fig. 3 is a schematic structural diagram of an obstacle detecting apparatus according to the present application.
Detailed Description
The core of the application is to provide an obstacle detection method which has strong applicability and high accuracy, and can reduce the cost waste of manpower and time to a great extent; another core of the present application is to provide an obstacle detecting apparatus, a device, and a computer-readable storage medium, which also have the above-described advantageous effects.
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Referring to fig. 1, fig. 1 is a flow chart of an obstacle detection method provided by the present application, where the obstacle detection method may include:
s101: acquiring a current image frame through real-time image acquisition;
the method aims at realizing real-time image acquisition, and particularly can be realized through a camera installed on a vehicle, so that the main controller can detect the characteristic points of the vehicle when the vehicle runs.
In the application, a fisheye camera is adopted to realize obstacle detection. The angle of view of a fisheye camera can reach 180 degrees, giving a wide field of view for obstacle detection, so fewer fisheye cameras are needed to cover the surroundings of the vehicle, which effectively saves camera hardware cost and processor resources.
S102: processing the current image frame and the previous image frame through a preset matching algorithm to obtain a vehicle pose corresponding to the current image frame;
the method aims at realizing the calculation of the vehicle pose in the current image frame, and can be realized on the basis of a preset matching algorithm; the last image frame is the image frame detected and acquired at the last moment corresponding to the current moment. Therefore, the vehicle pose corresponding to the current image frame can be obtained by calculating two adjacent image frames through a preset matching algorithm. In addition, the type of the preset matching algorithm is not unique, and the preset matching algorithm is selected according to actual requirements, so that the application is not limited.
Preferably, the processing the current image frame and the previous image frame by the preset matching algorithm to obtain the vehicle pose corresponding to the current image frame may include: detecting the characteristic points of the previous image frame to obtain a previous characteristic point set; detecting characteristic points of the current image frame to obtain a current characteristic point set; and performing feature point matching on the previous feature point set and the current feature point set to obtain the vehicle pose corresponding to the current image frame.
This preferred embodiment provides a specific type of preset matching algorithm, namely a feature point matching algorithm. Specifically, feature point detection is performed on the current image frame and the previous image frame to obtain the corresponding current feature point set and previous feature point set. It should be noted that the feature points in the current feature point set and the previous feature point set correspond to each other, that is, their distribution positions in the different image frames are the same; of course, the application does not specifically limit the selected positions and number of the feature points on the image frames. Further, feature point matching is performed on the current feature point set and the previous feature point set to obtain the vehicle pose corresponding to the current image frame.
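Since the patent does not name a specific feature detector or matcher, the sketch below assumes ORB features with brute-force Hamming matching from OpenCV, purely to illustrate obtaining the previous and current feature point sets; the function name and every parameter value are this example's own choices.

```python
import cv2

def match_feature_points(prev_frame, curr_frame, max_matches=200):
    """Detect and match feature points between two consecutive grayscale frames."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp_prev, des_prev = orb.detectAndCompute(prev_frame, None)
    kp_curr, des_curr = orb.detectAndCompute(curr_frame, None)

    # Brute-force Hamming matching suits ORB's binary descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_prev, des_curr), key=lambda m: m.distance)
    matches = matches[:max_matches]

    prev_pts = [kp_prev[m.queryIdx].pt for m in matches]   # previous feature point set (u_i, v_i)
    curr_pts = [kp_curr[m.trainIdx].pt for m in matches]   # current feature point set (u'_i, v'_i)
    return prev_pts, curr_pts
```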
Preferably, the performing feature point matching on the previous feature point set and the current feature point set to obtain the vehicle pose corresponding to the current image frame may include: constructing a robust kernel function relating the vehicle pose corresponding to the current image frame, the previous feature point set and the current feature point set; and performing function optimization on the robust kernel function to obtain the vehicle pose corresponding to the minimum value of the robust kernel function.
For the feature point matching process, this embodiment adopts a robust kernel function, and the vehicle pose corresponding to the minimum value of the robust kernel function is obtained through a function optimization method, that is, the vehicle pose corresponding to the current image frame.
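A hedged sketch of the function optimization described above: the reprojection residual between the current feature point set and the projected world points is minimised under a Huber robust loss. The `project` callable standing in for the mapping f(P_Wi, ψ, ξ), the use of SciPy's built-in Huber loss (with `f_scale` playing the role of the robust kernel parameter δ), and the default value of δ are all assumptions of this example, not details fixed by the patent.

```python
import numpy as np
from scipy.optimize import least_squares

def optimize_pose(xi0, world_points, current_points, project, delta=1.0):
    """Estimate the vehicle pose xi by robustified reprojection-error minimisation.

    world_points:   (M, 3) array of P_Wi coordinates
    current_points: (M, 2) array of (u'_i, v'_i) detections in the current frame
    project:        callable mapping (world_points, xi) -> (M, 2) image coordinates
    """
    def residuals(xi):
        predicted = project(world_points, xi)        # plays the role of f(P_Wi, psi, xi)
        return (predicted - current_points).ravel()  # stacked per-coordinate errors

    # loss="huber" applies a Huber robust kernel; f_scale acts as the delta parameter.
    result = least_squares(residuals, np.asarray(xi0, dtype=float), loss="huber", f_scale=delta)
    return result.x
```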
S103: mapping a previous image frame to a coordinate system corresponding to the current image frame by combining with the vehicle pose to obtain a transformation image;
the step aims at realizing coordinate system conversion so as to map the previous image frame to the coordinate system where the current image frame is located, so as to obtain a converted image corresponding to the previous image frame, and the specific realization process can be realized through the coordinate system conversion.
Preferably, the mapping the previous image frame to the coordinate system corresponding to the current image frame to obtain the transformed image in combination with the pose of the vehicle may include: acquiring a pixel point set of a previous image frame based on an image coordinate system of the previous image frame; mapping the pixel point set to a vehicle coordinate system to obtain a vehicle coordinate set; mapping the vehicle coordinate set to a camera coordinate system to obtain a camera coordinate set; mapping the camera coordinate set to a world coordinate system to obtain a world coordinate set; mapping the world coordinate set to an image coordinate system corresponding to the current image frame to obtain a transformation coordinate set; a transformed image is obtained from the transformed coordinate set.
This preferred embodiment provides a specific coordinate system conversion method: the pixel point set corresponding to the previous image frame is converted successively from the image coordinate system to the vehicle coordinate system, from the vehicle coordinate system to the camera coordinate system, from the camera coordinate system to the world coordinate system, and from the world coordinate system to the coordinate system corresponding to the current image frame, so as to obtain the pixel point set in the coordinate system corresponding to the current image frame, namely the transformed coordinate set; further, the transformed image can be generated using the transformed coordinate set.
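A minimal sketch of the chained conversion just described, assuming the vehicle-to-camera and camera-to-world transforms are available as 4x4 homogeneous matrices and that `backproject_to_vehicle` (the ground-plane back-projection) and `project_to_current_image` (projection into the current frame) are supplied by the caller; every name here is illustrative rather than part of the patent.

```python
import numpy as np

def apply_transform(points_xyz, T):
    """Apply a 4x4 homogeneous transform T to an (N, 3) array of points."""
    homog = np.hstack([points_xyz, np.ones((points_xyz.shape[0], 1))])
    return (homog @ T.T)[:, :3]

def warp_previous_frame(pixel_points, T_cam_from_vehicle, T_world_from_cam,
                        backproject_to_vehicle, project_to_current_image):
    """Map pixels of the previous frame into the current frame's image coordinates."""
    vehicle_pts = backproject_to_vehicle(pixel_points)          # image -> vehicle (road plane, Z_V = 0)
    cam_pts = apply_transform(vehicle_pts, T_cam_from_vehicle)  # vehicle -> camera
    world_pts = apply_transform(cam_pts, T_world_from_cam)      # camera -> world
    return project_to_current_image(world_pts)                  # world -> current image coordinates
```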
S104: and calculating pixel difference values of the current image frame and the transformed image, and marking pixels with the pixel difference values exceeding a standard threshold value as obstacle areas so as to determine obstacles.
This step aims at realizing obstacle detection. After the current-moment image frame and the transformed image of the previous-moment image frame are obtained, the pixel difference between mutually corresponding pixels of the two images is calculated; if the pixel difference exceeds the preset standard threshold, the position corresponding to that pixel is an obstacle, and if it does not exceed the standard threshold, the position is not an obstacle, thereby completing obstacle detection. The specific value of the standard threshold is preset by the technician according to the actual situation, and the application does not specifically limit it.
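A short sketch of the pixel-difference test, assuming the current frame and the transformed image are grayscale NumPy arrays of the same shape; the threshold value used here is only an illustrative stand-in for the preset standard threshold.

```python
import numpy as np

def mark_obstacles(current_frame, transformed_image, threshold=30):
    """Return a boolean mask that is True where the pixel difference exceeds the threshold."""
    diff = np.abs(current_frame.astype(np.int16) - transformed_image.astype(np.int16))
    return diff > threshold

# Example usage: obstacle_mask = mark_obstacles(current_frame, transformed_image)
```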
As a preferred embodiment, the obstacle detection method may further include: when the obstacle area exceeds the safety area, sending an alarm instruction to the alarm equipment.
This embodiment aims at realizing a safety warning function during driving: the detected obstacle area is monitored in real time, and when it exceeds the safety area, an alarm instruction is immediately sent to the alarm equipment to remind the driver of the current potential safety hazard, so that the vehicle is controlled to stop or to avoid the obstacle at low speed; for an unmanned vehicle, the control instruction can be responded to directly, controlling the unmanned vehicle to stop or to avoid the obstacle at low speed, thereby ensuring driving safety.
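A sketch of the warning check, under the assumptions that the safety area is represented as a boolean image mask and that `send_alarm` stands in for whatever instruction the alarm equipment actually accepts; both are choices of this example.

```python
import numpy as np

def check_safety_area(obstacle_mask, safety_area_mask, send_alarm):
    """Send an alarm instruction when the obstacle area enters the safety area."""
    if np.any(obstacle_mask & safety_area_mask):
        send_alarm("obstacle detected inside safety area")
```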
According to the obstacle detection method provided by the application, image frames are continuously acquired through the camera, and the transformed image of the previous-moment image frame under the viewing angle of the current-moment image frame is obtained through feature point matching, coordinate conversion and the like; pixel difference calculation is then carried out between the transformed image and the current image frame, and pixels whose pixel difference exceeds the standard threshold correspond to obstacles, so that obstacle detection is completed. Meanwhile, the method can realize obstacle detection in various scenes, places no limitation on the types of obstacles, has strong applicability and high accuracy, and can effectively ensure driving safety.
On the basis of the above embodiments, the present application provides a more specific obstacle detection method; before the obstacle detection process is described, the coordinate systems involved in it are first introduced.
World coordinate system: since the camera can be placed at any position, a reference coordinate system is selected in the environment to describe the position of the camera and the position of any object in the environment; this is the world coordinate system. The coordinates of any point in the world coordinate system can be expressed as P_W = [X_W, Y_W, Z_W]^T, where the subscript W denotes the world coordinate system, P_W denotes any point in the world coordinate system, the coordinates of the point have three directions, namely the horizontal X axis, the horizontal Y axis and the vertically upward Z axis, and T denotes the transpose operation on the matrix.
Vehicle coordinate system: the ground point at the center of the vehicle is taken as the origin, the direction straight ahead of the vehicle is the positive Y axis, the direction to the right of the vehicle is the positive X axis, and the direction vertically upward from the vehicle is the positive Z axis. The coordinates of any point in the vehicle coordinate system can be expressed as P_V = [X_V, Y_V, Z_V]^T, where the subscript V denotes the vehicle coordinate system and P_V denotes any point in the vehicle coordinate system.
Camera coordinate system: the camera is located at the origin, the X axis points to the right, the Z axis points forward (i.e., toward the front of the camera), and the Y axis points upward (i.e., above the camera itself). The coordinates of any point in the camera coordinate system can be expressed as P_C = [X_C, Y_C, Z_C]^T, where the subscript C denotes the camera coordinate system and P_C denotes any point in the camera coordinate system.
Image coordinate system: in the computer, each digital image is an M×N array; the value of each element (called a pixel) of the M-row, N-column image is the gray value of that image point. Rectangular coordinates u and v are defined on the image, and the coordinates (u, v) of each pixel are respectively the column number and the row number of the pixel in the array, so (u, v) are image coordinate system coordinates in units of pixels.
Obstacle detection process:
step one, initializing the vehicle pose, namely initializing the vehicle pose to (0, 0).
Step two, the image frame corresponding to time k-1 (the previous moment) is I_{k-1}. Feature point detection is performed on it; the number of selected feature points is M, and i denotes the i-th feature point. The coordinates of each feature point in the image coordinate system are (u_i, v_i); the set of M feature points is {(u_i, v_i)} (the previous-frame feature point set), and the corresponding world coordinate set is {P_Wi}.
Step three, at time k (the current moment), the image frame I_k at the current moment is acquired through the fisheye camera, and feature point detection is performed on it. Feature points corresponding to those of the previous-moment image frame I_{k-1} are selected; the coordinates of each feature point in the image coordinate system are (u'_i, v'_i), and the corresponding set of M feature points is {(u'_i, v'_i)} (the current-frame feature point set).
Step four, feature point matching is performed on the two feature point sets, with the objective function set as:

min over ξ and P_Wi of:  Σ_{i=1..M} Huber( || (u'_i, v'_i)^T - f(P_Wi, ψ, ξ) || )

wherein P_Wi denotes the coordinate information of the i-th feature point in the world coordinate system; ψ denotes the intrinsic parameters of the camera together with the extrinsic parameters of its installation position and angle; ξ denotes the vehicle pose; and f(P_Wi, ψ, ξ) is, with respect to ξ and P_Wi, a mapping of a point in the world coordinate system onto the image coordinate system. Huber is a robust kernel function:

Huber(e) = e²/2,            when |e| ≤ δ
Huber(e) = δ·(|e| - δ/2),   when |e| > δ

wherein δ is the robust kernel parameter; the smaller the value of δ, the smaller the influence of mismatched outlier points on the optimized parameters.

Thus, the minimum value of the objective function is found with a function optimization method, and the ξ and P_Wi corresponding to that minimum are obtained.
Step five, for the previous-moment image frame I_{k-1}, any pixel in the image coordinate system is expressed as (u_st, v_st), where s denotes the width direction and takes values from 1 to W (W being the image width), and t denotes the height direction and takes values from 1 to H (H being the image height); the pixel point set of the previous-moment image frame in the image coordinate system is therefore {(u_st, v_st)}. Coordinate conversion is performed on this set to obtain the corresponding world coordinate set (X_W, Y_W, Z_W); the specific coordinate conversion process is as follows:
(1) Suppose the pixel coordinate of a certain point of the previous-moment image frame I_{k-1} in the image coordinate system is (u, v). Since the corresponding point lies on the road surface, Z_V = 0, so the vehicle coordinate of the point in the vehicle coordinate system is [X_V, Y_V, 0]^T.
(2) The vehicle coordinates are then converted into camera coordinates according to the fisheye camera projection model, in which f(ρ) = a_0 + a_1·ρ + a_2·ρ² + ... + a_n·ρ^n is the projection function of the fisheye camera, a_0, a_1, ..., a_n are the Taylor expansion coefficients, and ρ = u² + v²; R_CV is the rotation matrix, representing the rotation transformation from the vehicle coordinate system to the camera coordinate system, and T_CV is the translation matrix, representing the translation transformation from the vehicle coordinate system to the camera coordinate system; R_CV, T_CV and a_0, ..., a_n are all included in the parameter ψ.

In this way, the camera coordinate set P_C = [X_C, Y_C, Z_C]^T of the previous-moment image frame I_{k-1} in the camera coordinate system is obtained (an illustrative evaluation of the projection polynomial is sketched after step (3) below).
(3) The world coordinate set P_W = [X_W, Y_W, Z_W]^T of the previous-moment image frame I_{k-1} in the world coordinate system is then obtained by calculation according to the rotation and translation relationship between the camera coordinate system and the world coordinate system.
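A minimal sketch of evaluating the fisheye projection polynomial f(ρ) named in step (2). The assumption that the camera-frame ray associated with an image point (u, v) is proportional to [u, v, f(ρ)]^T follows the common omnidirectional camera model and is this sketch's own; the exact relation used in the patent is given only by its original formula, and the coefficient values below are placeholders.

```python
import numpy as np

def fisheye_ray(u, v, taylor_coeffs):
    """Camera-frame direction associated with image point (u, v), up to scale.

    taylor_coeffs = [a0, a1, ..., an] are the fisheye projection coefficients
    (part of the parameter set psi in the patent's notation).
    """
    rho = u * u + v * v                           # rho = u^2 + v^2 as defined in the text
    f_rho = np.polyval(taylor_coeffs[::-1], rho)  # a0 + a1*rho + ... + an*rho^n
    return np.array([u, v, f_rho])

# Example usage with placeholder coefficients a0, a1, a2.
ray = fisheye_ray(12.0, -7.5, [340.0, 0.0, -0.002])
```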
Step six, combining the ξ and P_Wi obtained in step four, the world coordinate set obtained in step five is mapped onto the image coordinate system to obtain the calculated pixel point set of the current-moment image frame I_k in the image coordinate system.
Step seven, the actual pixel point set of the current-moment image frame I_k in the image coordinate system is acquired directly, pixel difference calculation is performed between it and the calculated pixel point set obtained in step six, and pixels whose pixel difference exceeds the standard threshold are marked as obstacles. Obstacle detection is thereby realized.
According to the obstacle detection method provided by this embodiment of the application, image frames are continuously acquired directly through the camera, and the transformed image of the previous-moment image frame under the viewing angle of the current-moment image frame is obtained through feature point matching, coordinate conversion and the like; pixel difference calculation is then performed between the transformed image and the current image frame, and pixels whose pixel difference exceeds the standard threshold correspond to obstacles, so that obstacle detection is completed. Meanwhile, the method can realize obstacle detection in various scenes, places no limitation on the types of obstacles, has strong applicability and high accuracy, and can effectively ensure driving safety.
In order to solve the above-mentioned problems, please refer to fig. 2, fig. 2 is a schematic structural diagram of an obstacle detecting apparatus provided by the present application, the obstacle detecting apparatus may include:
an image acquisition module 10 for acquiring a current image frame through real-time image acquisition;
the pose calculating module 20 is configured to process the current image frame and the previous image frame through a preset matching algorithm, so as to obtain a vehicle pose corresponding to the current image frame;
the coordinate conversion module 30 is configured to map a previous image frame to a coordinate system corresponding to a current image frame in combination with a vehicle pose, so as to obtain a transformed image;
the obstacle determining module 40 is configured to calculate a pixel difference value between the current image frame and the transformed image, and mark pixels whose pixel difference value exceeds a standard threshold value as an obstacle area to determine an obstacle.
As a preferred embodiment, the pose calculating module 20 may include:
the first detection unit is used for detecting the characteristic points of the previous image frame to obtain a previous characteristic point set;
the second detection unit is used for detecting the characteristic points of the current image frame to obtain a current characteristic point set;
and the characteristic point matching unit is used for carrying out characteristic point matching on the previous characteristic point set and the current characteristic point set to obtain the vehicle pose corresponding to the current image frame.
As a preferred embodiment, the obstacle detection device may further include:
and the alarm module is used for sending an alarm instruction to alarm equipment when the obstacle area exceeds the safety area.
For the description of the device provided by the present application, please refer to the above method embodiment, and the description of the present application is omitted herein.
In order to solve the above-mentioned problems, please refer to fig. 3, fig. 3 is a schematic structural diagram of an obstacle detecting apparatus provided by the present application, the obstacle detecting apparatus may include:
a memory 1 for storing a computer program;
a processor 2 for implementing any one of the above steps of the obstacle detection method when executing the computer program.
For the description of the apparatus provided by the present application, please refer to the above method embodiment, and the description of the present application is omitted herein.
In order to solve the above-mentioned problems, the present application also provides a computer readable storage medium having a computer program stored thereon, which when executed by a processor, implements the steps of any one of the above-mentioned obstacle detection methods.
The computer readable storage medium may include: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
For the description of the computer-readable storage medium provided by the present application, refer to the above method embodiments, and the disclosure is not repeated here.
In the description, each embodiment is described in a progressive manner, and each embodiment is mainly described by the differences from other embodiments, so that the same similar parts among the embodiments are mutually referred. For the device disclosed in the embodiment, since it corresponds to the method disclosed in the embodiment, the description is relatively simple, and the relevant points refer to the description of the method section.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative elements and steps are described above generally in terms of functionality in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. The software modules may be disposed in Random Access Memory (RAM), memory, read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The method, apparatus, device and computer readable storage medium for detecting an obstacle according to the present application are described in detail above. The principles and embodiments of the present application have been described herein with reference to specific examples, the description of which is intended only to facilitate an understanding of the method of the present application and its core ideas. It should be noted that it will be apparent to those skilled in the art that various changes and modifications can be made herein without departing from the principles of the application, which also falls within the spirit and scope of the application as defined by the appended claims.

Claims (9)

1. An obstacle detection method, comprising:
acquiring a current image frame through real-time image acquisition;
processing the current image frame and the previous image frame through a preset matching algorithm to obtain a vehicle pose corresponding to the current image frame;
mapping the previous image frame to a coordinate system corresponding to the current image frame by combining the vehicle pose to obtain a transformation image;
calculating pixel difference values of the current image frame and the transformation image, and marking pixels with the pixel difference values exceeding a standard threshold value as obstacle areas so as to determine obstacles;
wherein, combining the vehicle pose, mapping the previous image frame to the coordinate system corresponding to the current image frame to obtain a transformed image, including:
acquiring a pixel point set of the previous image frame based on an image coordinate system of the previous image frame;
mapping the pixel point set to a vehicle coordinate system to obtain a vehicle coordinate set;
mapping the vehicle coordinate set to a camera coordinate system to obtain a camera coordinate set;
mapping the camera coordinate set to a world coordinate system to obtain a world coordinate set;
mapping the world coordinate set to an image coordinate system corresponding to the current image frame to obtain a transformation coordinate set;
the transformed image is obtained from the transformed coordinate set.
2. The obstacle detection method as claimed in claim 1, wherein the processing the current image frame and the previous image frame by a preset matching algorithm to obtain a vehicle pose corresponding to the current image frame comprises:
detecting the characteristic points of the previous image frame to obtain a previous characteristic point set;
detecting characteristic points of the current image frame to obtain a current characteristic point set;
and carrying out feature point matching on the previous feature point set and the current feature point set to obtain the vehicle pose corresponding to the current image frame.
3. The obstacle detection method as claimed in claim 2, wherein the performing feature point matching on the previous feature point set and the current feature point set to obtain a vehicle pose corresponding to the current image frame includes:
constructing a robust kernel function related to the vehicle pose corresponding to the current image frame, the previous feature point set and the current feature point set;
and performing function optimization on the robust kernel function to obtain the corresponding vehicle pose when the robust kernel function takes the minimum value.
4. A method of detecting an obstacle as claimed in any one of claims 1 to 3, further comprising:
and when the obstacle area exceeds the safety area, sending an alarm instruction to alarm equipment.
5. An obstacle detecting apparatus, comprising:
the image acquisition module is used for acquiring a current image frame through real-time image acquisition;
the pose calculating module is used for processing the current image frame and the previous image frame through a preset matching algorithm to obtain the vehicle pose corresponding to the current image frame;
the coordinate conversion module is used for combining the vehicle pose, mapping the previous image frame to a coordinate system corresponding to the current image frame, and obtaining a transformation image;
the obstacle determining module is used for calculating pixel difference values of the current image frame and the transformation image, and marking pixels with the pixel difference values exceeding a standard threshold value as obstacle areas so as to determine obstacles;
the coordinate conversion module is specifically used for acquiring a pixel point set of the previous image frame based on an image coordinate system of the previous image frame; mapping the pixel point set to a vehicle coordinate system to obtain a vehicle coordinate set; mapping the vehicle coordinate set to a camera coordinate system to obtain a camera coordinate set; mapping the camera coordinate set to a world coordinate system to obtain a world coordinate set; mapping the world coordinate set to an image coordinate system corresponding to the current image frame to obtain a transformation coordinate set; the transformed image is obtained from the transformed coordinate set.
6. The obstacle detection device as claimed in claim 5, wherein the pose calculation module includes:
the first detection unit is used for detecting the characteristic points of the previous image frame to obtain a previous characteristic point set;
the second detection unit is used for detecting the characteristic points of the current image frame to obtain a current characteristic point set;
and the characteristic point matching unit is used for matching the characteristic points of the previous characteristic point set and the current characteristic point set to obtain the vehicle pose corresponding to the current image frame.
7. The obstacle detecting apparatus as claimed in claim 5 or 6, further comprising:
and the alarm module is used for sending an alarm instruction to alarm equipment when the obstacle area exceeds the safety area.
8. An obstacle detecting apparatus, characterized by comprising:
a memory for storing a computer program;
a processor for implementing the steps of the obstacle detection method according to any one of claims 1 to 4 when executing the computer program.
9. A computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, which when executed by a processor, implements the steps of the obstacle detection method as claimed in any one of claims 1 to 4.
CN201911293303.6A 2019-12-16 2019-12-16 Obstacle detection method, device, equipment and computer readable storage medium Active CN111046809B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911293303.6A CN111046809B (en) 2019-12-16 2019-12-16 Obstacle detection method, device, equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911293303.6A CN111046809B (en) 2019-12-16 2019-12-16 Obstacle detection method, device, equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN111046809A CN111046809A (en) 2020-04-21
CN111046809B true CN111046809B (en) 2023-09-12

Family

ID=70236722

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911293303.6A Active CN111046809B (en) 2019-12-16 2019-12-16 Obstacle detection method, device, equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111046809B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112581527B (en) * 2020-12-11 2024-02-27 北京百度网讯科技有限公司 Evaluation method, device, equipment and storage medium for obstacle detection
CN112883909B (en) * 2021-03-16 2024-06-14 东软睿驰汽车技术(沈阳)有限公司 Obstacle position detection method and device based on bounding box and electronic equipment
CN114419580A (en) * 2021-12-27 2022-04-29 北京百度网讯科技有限公司 Obstacle association method and device, electronic equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104299244A (en) * 2014-09-26 2015-01-21 东软集团股份有限公司 Obstacle detection method and device based on monocular camera
CN110084133A (en) * 2019-04-03 2019-08-02 百度在线网络技术(北京)有限公司 Obstacle detection method, device, vehicle, computer equipment and storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104299244A (en) * 2014-09-26 2015-01-21 东软集团股份有限公司 Obstacle detection method and device based on monocular camera
CN110084133A (en) * 2019-04-03 2019-08-02 百度在线网络技术(北京)有限公司 Obstacle detection method, device, vehicle, computer equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
王晓彤. Design and implementation of a localization algorithm for a floor-sweeping robot based on monocular vision. China Master's Theses Full-text Database, 2018, Chapters 2-3 of the main text. *

Also Published As

Publication number Publication date
CN111046809A (en) 2020-04-21

Similar Documents

Publication Publication Date Title
CN109435942B (en) Information fusion-based parking space line and parking space recognition method and device
CN111046809B (en) Obstacle detection method, device, equipment and computer readable storage medium
CN112967345B (en) External parameter calibration method, device and system of fish-eye camera
CN111897349A (en) Underwater robot autonomous obstacle avoidance method based on binocular vision
CN108269281B (en) Obstacle avoidance technical method based on binocular vision
CN115100423B (en) System and method for realizing real-time positioning based on view acquisition data
CN116193108B (en) Online self-calibration method, device, equipment and medium for camera
CN113516711A (en) Camera pose estimation techniques
EP3675041B1 (en) Method and apparatus for determining road information, and vehicle
CN112509054A (en) Dynamic calibration method for external parameters of camera
CN110246187A (en) A kind of camera internal reference scaling method, device, equipment and readable storage medium storing program for executing
CN116358486A (en) Target ranging method, device and medium based on monocular camera
CN113034583A (en) Vehicle parking distance measuring method and device based on deep learning and electronic equipment
CN115902977A (en) Transformer substation robot double-positioning method and system based on vision and GPS
CN112241717B (en) Front vehicle detection method, and training acquisition method and device of front vehicle detection model
CN111736137B (en) LiDAR external parameter calibration method, system, computer equipment and readable storage medium
CN116052120A (en) Excavator night object detection method based on image enhancement and multi-sensor fusion
CN116142172A (en) Parking method and device based on voxel coordinate system
JP6492603B2 (en) Image processing apparatus, system, image processing method, and program
CN112580402B (en) Monocular vision pedestrian ranging method and system, vehicle and medium thereof
CN113643359A (en) Target object positioning method, device, equipment and storage medium
CN111626180A (en) Lane line detection method and device based on polarization imaging
CN104866817A (en) Statistical Hough transform lane detection method based on gradient constraint
CN118226421B (en) Laser radar-camera online calibration method and system based on reflectivity map
CN114724118B (en) Zebra crossing detection method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
CB02 Change of applicant information

Address after: 215347 7th floor, IIR complex, 1699 Weicheng South Road, Kunshan City, Suzhou City, Jiangsu Province

Applicant after: Kunshan Microelectronics Technology Research Institute

Address before: 215347 7th floor, complex building, No. 1699, Zuchongzhi South Road, Kunshan City, Suzhou City, Jiangsu Province

Applicant before: KUNSHAN BRANCH, INSTITUTE OF MICROELECTRONICS OF CHINESE ACADEMY OF SCIENCES

GR01 Patent grant
GR01 Patent grant