
GB2625262A - Vehicle, control device, and method for evaluating a calibration of one or more cameras mounted to a vehicle - Google Patents

Vehicle, control device, and method for evaluating a calibration of one or more cameras mounted to a vehicle

Info

Publication number
GB2625262A
Authority
GB
United Kingdom
Prior art keywords
image
lateral
camera
vehicle
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
GB2218436.0A
Other versions
GB202218436D0 (en)
Inventor
Madhusudan Rao Bellary
Kuruba Naveen
C Manjunath M
T R Bineesh
Pattanaik Abhisek
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Continental Autonomous Mobility Germany GmbH
Original Assignee
Continental Autonomous Mobility Germany GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Continental Autonomous Mobility Germany GmbH filed Critical Continental Autonomous Mobility Germany GmbH
Priority to GB2218436.0A priority Critical patent/GB2625262A/en
Publication of GB202218436D0 publication Critical patent/GB202218436D0/en
Priority to PCT/EP2023/082644 priority patent/WO2024120823A1/en
Publication of GB2625262A publication Critical patent/GB2625262A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 50/00 Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W 50/08 Interaction between the driver and the control system
    • B60W 50/14 Means for informing the driver, warning the driver or prompting a driver intervention
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/10 Image acquisition
    • G06V 10/12 Details of acquisition arrangements; Constructional details thereof
    • G06V 10/14 Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V 10/147 Details of sensors, e.g. sensor lenses
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R 2300/00 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R 2300/40 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the details of the power supply or the coupling to vehicle components
    • B60R 2300/402 Image calibration
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 50/00 Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W 50/08 Interaction between the driver and the control system
    • B60W 50/14 Means for informing the driver, warning the driver or prompting a driver intervention
    • B60W 2050/146 Display means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30248 Vehicle exterior or interior
    • G06T 2207/30252 Vehicle exterior; Vicinity of vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Human Computer Interaction (AREA)
  • Mechanical Engineering (AREA)
  • Vascular Medicine (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

Calibration of a plurality of cameras, comprising: receiving, from a longitudinal camera (102, 104) of a vehicle, a first longitudinal image showing a surrounding in front of the vehicle or behind the vehicle at a first point in time; receiving, from a lateral camera (106, 108) of the vehicle, a first lateral image showing a surrounding in a lateral direction (16, 18) next to the vehicle at the first point in time; receiving, from the lateral camera, a second lateral image showing the surrounding in the lateral direction next to the vehicle at a second point in time, wherein a portion of the second lateral image shows a scene which is, at the first point in time, shown in the first longitudinal image and the first lateral image; generating, using the first longitudinal image and the first lateral image, a first image showing a top view on the scene at the first point in time; generating, using the second lateral image, a second image showing a top view on the scene at the second point in time; and determining whether the longitudinal camera and the lateral camera are calibrated correctly by comparing the first image and the second image.

Description

VEHICLE, CONTROL DEVICE, AND METHOD FOR EVALUATING A CALIBRATION OF ONE OR MORE CAMERAS MOUNTED TO A VEHICLE
TECHNICAL FIELD
Various aspects of this disclosure relate to a vehicle, a control device for use in a vehicle, and a method for evaluating a calibration of one or more cameras mounted to the vehicle, such as quantitatively assessing a quality of a camera calibration.
BACKGROUND
The following discussion of the background art is intended to facilitate an understanding of the present disclosure only. It should be appreciated that the discussion is not an acknowledgment or admission that any of the material referred to was published, known, or is part of the common general knowledge of the person skilled in the art in any jurisdiction as of the priority date of the disclosure.
An on-board control system of a vehicle may provide various driving assistance functions, such as a surround view (e.g., a top view), a park assist, a lane departure warning system, etc., which employ sensors such as cameras for detecting a surrounding of the vehicle. When the vehicle is in motion (e.g., driving), a vehicle body and, thus, also the sensors (e.g., cameras) may change their orientation and/or position. Therefore, it may be required (e.g., to fulfill safety requirements) to re-calibrate the sensors (e.g., cameras) online (i.e., during use of the vehicle). For example, one or more sensor calibration parameters may be re-computed (e.g., in real-time) during driving to compensate for the above mentioned changes (e.g., using an online calibration algorithm). Here, it may be necessary and/or desirable to evaluate the sensor calibration (e.g., to determine whether a sensor is calibrated correctly or should be re-calibrated).
SUMMARY
Various aspects relate to a vehicle, a control device for use in a vehicle, and a method for evaluating a respective calibration of one or more sensors (e.g., cameras).
According to various aspects, a vehicle may include: a longitudinal camera configured to capture a longitudinal image showing a surrounding in front of the vehicle or behind the vehicle; a lateral camera configured to capture a lateral image showing a surrounding in a lateral direction next to the vehicle, wherein the longitudinal camera and the lateral camera have a partially overlapping field-of-view such that a portion of the longitudinal image and a portion of the lateral image show a same region in the surrounding of the vehicle; one or more processors configured to: receive, from the longitudinal camera, a first longitudinal image showing the surrounding in front of or behind the vehicle at a first point in time; receive, from the lateral camera, a first lateral image showing the surrounding in the lateral direction next to the vehicle at the first point in time; receive, from the lateral camera, a second lateral image showing the surrounding in the lateral direction next to the vehicle at a second point in time different from the first point in time, wherein a portion of the second lateral image shows a scene which is, at the first point in time, shown in the first longitudinal image and the first lateral image; generate, using the first longitudinal image and the first lateral image, a first image showing a top view on the scene at the first point in time; generate, using the second lateral image, a second image showing a top view on the scene at the second point in time; and determine, whether the longitudinal camera and the lateral camera are calibrated correctly by comparing the first image with the second image.
According to various aspects, a method for evaluating a calibration of one or more cameras mounted to a vehicle is provided which may include: receiving, from a longitudinal camera, a first longitudinal image showing a surrounding in front of the vehicle or behind the vehicle at a first point in time; receiving, from a lateral camera, a first lateral image showing a surrounding in a lateral direction next to the vehicle at the first point in time; receiving, from the longitudinal camera, a second longitudinal image showing the surrounding in front of or behind the vehicle at a second point in time different from the first point in time; receiving, from the lateral camera, a second lateral image showing the surrounding in the lateral direction next to the vehicle at the second point in time, wherein a portion of the second lateral image shows a scene which is not shown in the second longitudinal image and which is, at the first point in time, shown in the first longitudinal image and the first lateral image; generating, using the first longitudinal image and the first lateral image, a first image showing a top view on the scene at the first point in time; generating, using the second lateral image, a second image showing a top view on the scene at the second point in time; and determining, whether the longitudinal camera and the lateral camera are calibrated correctly by comparing the first image with the second image.
BRIEF DESCRIPTION OF THE DRAWINGS
The present disclosure will be better understood with reference to the detailed description when considered in conjunction with the non-limiting examples and the accompanying drawings, in which:
FIG.1A shows a vehicle according to various aspects;
FIG.1B shows an exemplary top-view on the vehicle and its surrounding according to various aspects;
FIG.2A to FIG.2D each show various aspects of collecting data for evaluating a calibration of one or more cameras of the vehicle; and
FIG.3 shows a flow diagram of a method for evaluating a calibration of at least one camera according to various aspects.
DETAILED DESCRIPTION
The following detailed description refers to the accompanying drawings that show, by way of illustration, specific details and embodiments in which the disclosure may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the disclosure. Other embodiments may be utilized, and structural and logical changes may be made without departing from the scope of the disclosure. The various embodiments are not necessarily mutually exclusive, as some embodiments can be combined with one or more other embodiments to form new embodiments.
The embodiments described in the context of a vehicle, one of the devices, or methods are analogously valid for the other vehicles, devices, systems, or methods.
Similarly, the embodiments described in the context of a device are analogously valid for a system (e.g., a vehicle) and/or a method, and vice-versa.
Features that are described in the context of an embodiment may correspondingly be applicable to the same or similar features in the other embodiments. Features io that are described in the context of an embodiment may correspondingly be applicable to the other embodiments, even if not explicitly described in these other embodiments. Furthermore, additions and/or combinations and/or alternatives as described for a feature in the context of an embodiment may correspondingly be applicable to the same or similar feature in the other embodiments.
In the context of the various embodiments, the articles "a", "an", and "the" as used with regard to a feature or element include a reference to one or more of the features or elements.
As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
While terms such as "first", "second", etc. may be used to describe various devices, such as cameras, these devices are not limited by the above terms. The above terms are used only to distinguish one device from another, and do not define an order and/or significance of the devices.
The term "data" as used herein may be understood to include information in any suitable analog or digital form, e.g., provided as a file, a portion of a file, a set of files, a signal or stream, a portion of a signal or stream, a set of signals or streams, and the like. Further, the term "data" may also be used to mean a reference to information, e.g., in form of a pointer. The term "data", however, is not limited to the aforementioned examples and may take various forms and represent any information as understood in the art. Any type of information, as described herein, may be handled for example via one or more processors in a suitable way, e.g. as data.
The terms "processor" or "controller" as, for example, used herein may be understood as any kind of entity that allows handling data. The data may be handled according to one or more specific functions executed by the processor or controller. Further, a processor or controller as used herein may be understood as any kind of circuit, e.g., any kind of analog or digital circuit. A processor or a controller may thus be or include an analog circuit, digital circuit, mixed-signal circuit, logic circuit, processor, microprocessor, Central Processing Unit (CPU), Graphics Processing Unit (GPU), Digital Signal Processor (DSP), Field Programmable Gate Array (FPGA), integrated circuit, Application Specific Integrated Circuit (ASIC), etc., or any Jo combination thereof. Any other kind of implementation of the respective functions, which will be described below in further detail, may also be understood as a processor, controller, or logic circuit. It is understood that any two (or more) of the processors, controllers, or logic circuits detailed herein may be realized as a single entity with equivalent functionality or the like, and conversely that any single processor, controller, or logic circuit detailed herein may be realized as two (or more) separate entities with equivalent functionality or the like.
The term "memory" detailed herein may be understood to include any suitable type of memory or memory device, e.g a hard disk drive (HDD), a solid-state drive (SSD), a flash memory, etc. Various aspects relate to a control device for use in a vehicle. For example, the control device may be configured to control one or more functions (e.g., an operation) of the vehicle. The control device may be capable to evaluate a calibration of one or more sensors of the vehicle. Thus, the control device may part of a sensor control (e.g., monitoring) system and/or a driver assistance system and/or a safety system. Although various aspects refer to using the control device in the vehicle itself, it is understood that the processing described herein may also be carried out external to the vehicle (e.g., in a cloud computing device).
As an example, the vehicle may include four cameras (e.g., a front camera, a rear camera, a left-lateral camera, and a right-lateral camera) for capturing images of a complete surrounding around the vehicle, and a control device of the vehicle may be configured to determine, based on the captured images of at least two cameras of the four cameras, whether the at least two cameras are calibrated correctly (or whether they require a re-calibration). As described herein, the control device is capable of evaluating the camera calibration without requiring any additional devices (in addition to the at least two (e.g., four) cameras).
As an illustrative example on how to evaluate the calibration of two cameras: The vehicle may include a front camera for capturing a front image of a surrounding in front of the vehicle, a rear camera for capturing a surrounding behind the vehicle, a left-lateral camera for capturing a left-lateral image of a surrounding left to the vehicle, and a right-lateral camera for capturing a right-lateral image of a surrounding right to the vehicle. The control device may continuously receive these images with a predefined frame rate and may be configured to generate, at various instances in time, a respective top-view image showing a top view on the vehicle and its surrounding (e.g., to assist parking) at a specific point in time. As an example, the front camera and the left-lateral camera may have a partially overlapping field of view, such that a portion of their captured images shows a same region in the surrounding of the vehicle (from different perspectives). Thus, the control device may generate a portion of the top-view image by merging (e.g., stitching) information from both the front image and the left-lateral image. In the case that the vehicle is driving in a forward direction on a substantially straight course, the vehicle may pass a specific scene which is first within this same region and later on located left next to the vehicle, outside this same region. The control device may generate a first top-view image which shows the scene in the merged portion and a second top-view image which shows the same scene outside the merged portion. The control device may use the second top-view image as a ground truth image for the first top-view image to evaluate the calibration of the front camera and the left-lateral camera. For example, the control device may compare the first top-view image with the second top-view image and, in the case that a difference between them is less than a predefined threshold, determine that the front camera and the left-lateral camera are calibrated correctly.
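For illustration only, the following Python sketch shows the kind of threshold comparison described above. It is not taken from the patent; the helper name, the array types, and the threshold value are assumptions:

    import numpy as np

    def patches_match(patch_a: np.ndarray, patch_b: np.ndarray,
                      threshold: float = 10.0) -> bool:
        """Return True if the mean absolute difference of two equally sized
        top-view patches is at or below the (hypothetical) threshold."""
        diff = np.mean(np.abs(patch_a.astype(np.float32) - patch_b.astype(np.float32)))
        return float(diff) <= threshold

    # front_left_patch: merged portion of the top-view image at t = i
    # ground_truth_patch: same scene, seen only by the left-lateral camera later on
    # calibrated_ok = patches_match(front_left_patch, ground_truth_patch)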
FIG.1A shows a vehicle 100 according to various aspects. The vehicle 100 may include at least two sensors configured to detect a surrounding of the vehicle 100.
The at least two sensors may have a partially overlapping field-of-view. The vehicle 100 may include one or more processors 110. For example, the vehicle 100 may include the control device and the control device may include the one or more processors 110. The one or more processors 110 may be configured to receive respective sensor data from the at least two sensors at various instances of time (e.g., continuously) and to evaluate a calibration of the at least two sensors using the received sensor data. The one or more processors 110 may be configured to generate, at one or more (e.g., each) instances of time, a respective top-view image showing a top view on the surrounding of the vehicle 100 (and optionally also a top view on the vehicle 100 itself). The one or more processors 110 may be configured to evaluate the calibration of the at least two sensors using the generated top-view images. In the following, the at least two sensors are described as being a respective camera configured to capture image(s) of the surrounding of the vehicle 100.
However, it is understood that this serves for illustrating the principles of evaluating a calibration of a sensor in accordance with the disclosure and that one or more sensors of the at least two sensors may also be other sensors (e.g., a radio detection and ranging (radar) sensor, a light detection and ranging (lidar) sensor, etc.) as long as the herein described constraints are fulfilled (e.g., that at least two sensors have a partially overlapping field of view).
As shown in FIG.1A, the vehicle 100 may include a front camera 102 (may also be referred to as a (first) longitudinal camera or a front-view camera), a rear camera 104 (may also be referred to as a (second) longitudinal camera or a rear-view camera), a left-lateral camera 106 (may also be referred to as a (first) lateral camera or a left-lateral view camera), and a right-lateral camera 108 (may also be referred to as a (second) lateral camera or a right-lateral view camera). Although the vehicle 100 is described as having four cameras, it is understood that, using the herein-described principles, a calibration of a camera can be evaluated as long as there are at least two cameras (having a partially overlapping field of view).
A camera, as described herein, may be any kind of (camera) device capable of capturing an image of the surrounding of the vehicle 100. Hence, a camera may also be referred to as visual sensor, image sensor, or camera device. It is understood that a camera may be configured to provide a captured image directly or indirectly (e.g., by storing in a memory device) to the one or more processors 110. Further, it is understood that a camera may be configured to output an unprocessed image or a pre-processed image. A camera may be configured to capture a respective image of the (respective) surrounding of the vehicle 100 with a predefined frame rate. A camera as described herein may be a wide-angle camera (e.g., having an aperture angle equal to or greater than 120°). The front camera 102 may be configured to capture (e.g., at each instance in accordance with the frame rate of the camera) a front image showing a surrounding in front of the vehicle 100 (e.g., in front of the vehicle 100 with respect to a forward driving direction 12). The rear camera 104 may be configured to capture (e.g., at each instance in accordance with the frame rate of the camera) a rear image showing a surrounding behind the vehicle 100 (e.g., in front of the vehicle 100 with respect to a backward driving direction 14 (opposite to the forward driving direction)). The left-lateral camera 106 may be configured to capture (e.g., at each instance in accordance with the frame rate of the camera) a left-lateral image showing a surrounding left of the vehicle 100 (e.g., left of the vehicle 100 with respect to the forward driving direction 12). The right-lateral camera 108 may be configured to capture (e.g., at each instance in accordance with the frame rate of the camera) a right-lateral image showing a surrounding right of the vehicle 100 (e.g., right of the vehicle 100 with respect to the forward driving direction 12). The left-lateral camera 106 may have a first partially overlapping field-of-view with the front camera 102 and a second partially overlapping field-of-view (different from the first partially overlapping field-of-view) with the rear camera 104. Hence, a first portion of the field-of-view (fov) of the left-lateral camera 106 may overlap with a portion of the fov of the front camera 102, a second portion of the fov of the left-lateral camera 106 may overlap with a portion of the fov of the rear camera 104, and a third portion of the fov of the left-lateral camera 106 may overlap with neither the fov of the front camera nor the fov of the rear camera. This applies similarly to the right-lateral camera 108, such that the right-lateral camera 108 has a partially overlapping fov with the front camera 102 and a partially overlapping fov with the rear camera 104. A partially overlapping field of view, as described herein, is understood to mean a fov that does not completely overlap.
It is understood that the respective nominal position of the front camera 102, the rear camera 104, the left-lateral camera 106, and the right-lateral camera 108 may be stored (e.g., as part of computer aided design (CAD) data associated with the vehicle 100) in a memory device and that the one or more processors 110 may use the information in the memory device when processing. Further, the memory device may store various (e.g., intrinsic) parameters of the cameras. The left-lateral camera 106 may be mounted to a left wing mirror of the vehicle 100. The right-lateral camera 108 may be mounted to a right wing mirror of the vehicle 100.
The vehicle 100 may be any kind of vehicle which employs sensors with partially overlapping fields of view to detect a surrounding of the vehicle. The sensors (e.g., cameras) may be part of a surround-view system (SVS) configured to provide (e.g., to output via a display device and/or for further processing) several views of the vehicle 100, such as a top view, a rear view, a panorama view, etc. As an example, the vehicle 100 may be a non-autonomous vehicle which includes a display device to show a top-view image of the vehicle and its surrounding to a driver of the vehicle to assist parking (e.g., to show and/or alert of obstacles in the surrounding of the vehicle 100). As another example, the vehicle 100 may be a semi-autonomous vehicle or an autonomous vehicle which has an auto-park function (i.e., to park the vehicle without requiring the driver to steer and/or accelerate/decelerate) using the sensors to detect the surrounding of the vehicle (e.g., using the top-view image). An auto-park function may include an assisted parking while the driver is inside the vehicle 100, a remote parking without the driver having to be inside the vehicle 100, and/or a valet parking.
A "vehicle" may be a ground vehicle (e.g., a vehicle configured to drive on ground (e.g., on a road, a track, a street, etc.)), an aerial vehicle (e.g., a vehicle configured to being maneuvered above the ground), or an aquatic vehicle (e.g., a vehicle capable of being maneuvered on or below the surface of liquid (e.g., water)). It is understood that the vehicle 100 may include various further components common for the respective type of vehicle (e.g., an engine, a steering system, etc.).
FIG.1B shows an exemplary top-view on the vehicle 100 and its surrounding according to various aspects. As described herein, each of the front camera 102, the rear camera 104, the left-lateral camera 106, and the right-lateral camera 108 may be configured to capture a respective image with a (respective) predefined frame rate. The frame rates may be synchronized to each other. The one or more processors 110 may be configured to receive a front image from the front camera 102, a rear image from the rear camera 104, a left-lateral image from the left-lateral camera 106, and a right-lateral image from the right-lateral camera 108, all associated with a substantially same point in time, t, and to generate, using the front image, the rear image, the left-lateral image, and the right-lateral image, a top-view image Ot associated with this point in time, t. For example, FIG.1B may represent an exemplary top-view image, Ot, associated with a specific point in time, t. The top-view image, Ot, may have a front portion 112 which represents the surrounding in front of the vehicle 100 and which includes information captured by the front camera 102 and no information from the rear camera 104, the left-lateral camera 106, and the right-lateral camera 108. The top-view image, Ot, may have a rear portion 114 which represents the surrounding behind the vehicle 100 and which includes information captured by the rear camera 104 and no information from the front camera 102, the left-lateral camera 106, and the right-lateral camera 108. The top-view image, Ot, may have a left-lateral portion 116 which represents the surrounding left to the vehicle 100 and which includes information captured by the left-lateral camera 106 and no information from the front camera 102, the rear camera 104, and the right-lateral camera 108. The top-view image, Ot, may have a right-lateral portion 118 which represents the surrounding right to the vehicle 100 and which includes information captured by the right-lateral camera 108 and no information from the front camera 102, the rear camera 104, and the left-lateral camera 106. As described herein, each camera pair may have a partially overlapping fov. Therefore, the one or more processors 110 may be configured to employ information from both cameras of a camera pair when generating the top-view image, Ot. Hence, the top-view image, Ot, may have a front-left portion 120 which includes information captured by the front camera 102 and the left-lateral camera 106, a front-right portion 122 which includes information captured by the front camera 102 and the right-lateral camera 108, a rear-left portion 124 which includes information captured by the rear camera 104 and the left-lateral camera 106, and a rear-right portion 126 which includes information captured by the rear camera 104 and the right-lateral camera 108.
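As a rough, non-authoritative illustration of how such a composite top-view image could be assembled, the sketch below warps each (already undistorted) camera image onto the ground plane with a precomputed homography and pastes it into the common canvas. The use of OpenCV, the function names, and the canvas size are assumptions; the patent does not specify an implementation:

    import cv2
    import numpy as np

    def compose_top_view(images: dict, homographies: dict,
                         canvas_size=(800, 800)) -> np.ndarray:
        """Warp each camera image onto a common ground-plane canvas and blend.
        images/homographies are keyed, e.g., by 'front', 'rear', 'left', 'right';
        each homography maps camera pixels to top-view (ground-plane) pixels."""
        h, w = canvas_size
        acc = np.zeros((h, w, 3), np.float32)  # accumulated color values
        cnt = np.zeros((h, w, 1), np.float32)  # number of cameras covering each pixel
        for key, img in images.items():
            warped = cv2.warpPerspective(img, homographies[key], (w, h))
            mask = (warped.sum(axis=2, keepdims=True) > 0).astype(np.float32)
            acc += warped.astype(np.float32) * mask
            cnt += mask
        cnt[cnt == 0] = 1.0  # avoid division by zero where no camera contributes
        return (acc / cnt).astype(np.uint8)

In the overlap portions (e.g., the front-left portion 120), two cameras contribute and the sketch simply averages them; a production system would use calibrated stitching instead.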
In the following, various aspects of evaluating a respective calibration of the front camera 102, the rear camera 104, the left-lateral camera 106, and/or the right-lateral camera 108 are described with reference to FIG.2A to FIG.2D.
As illustrated in FIG.2A, the vehicle 100 may drive (e.g., with a substantially straight course) in forward direction 12 passing a region 200. At a first point in time (in some aspects a point in time is referred to as instance of time), t = i, the region 200 may be located front-left to the vehicle 100 and therefore be shown in the front image captured by the front camera 102 and in the left-lateral image captured by the left-lateral camera 106 at the first point in time, t = i. The one or more processors 110 may generate a corresponding top-view image, Ot=i. At a second point in time, t = i+FO, when driving in the forward direction 12, the region 200 may be located left to the vehicle 100 and therefore be shown only in the left-lateral image captured by the left-lateral camera 106 at the second point in time, t = i+FO. The one or more processors 110 may generate a corresponding top-view image, Ot=i+FO. At a third point in time, t = i+FO+RO, the region 200 may be located rear-left to the vehicle and therefore be shown in the rear image captured by the rear camera 104 and in the left-lateral image captured by the left-lateral camera 106 at the third point in time, t = i+FO+RO. The one or more processors 110 may generate a corresponding top-view image, Ot=i+FO+RO. Hence, a same region in the surrounding of the vehicle 100 may be represented by various top-view images, Ot. It is understood that this applies similarly to a region located on the right side of the vehicle 100. Although this example is described for the case that the vehicle 100 is driving in forward direction 12, the herein described principles apply analogously for driving in backward direction 14. As understood, in the case of driving backward, the second point in time, t = i+FO, is prior to the first point in time, t = i (i.e., FO<0), and the third point in time, t = i+FO+RO, is prior to the second point in time, t = i+FO (i.e., RO<0).
As described, at the first point in time, t = i, the region 200 may be located in the front-left portion 120 (independent of driving in forward direction or backward direction). With reference to FIG.2B, the one or more processors 110 may be configured to generate (e.g., to cut out) a front-left image FL(Ot=i) (may also be referred to as front-left patch) showing a first scene (e.g., in region 200) at the first point in time, t = i, and/or a front-right image FR(Ot=i) (may also be referred to as front-right patch) showing a second scene (different from the first scene) at the first point in time, t = i. The one or more processors 110 may be configured to generate (e.g., to cut out) a left image L(Ot=i+FO) (may also be referred to as left ground truth patch) showing the first scene at the second point in time, t = i+FO, and/or a right image R(Ot=i+FO) (may also be referred to as right ground truth patch) showing the second scene at the second point in time, t = i+FO. The one or more processors 110 may be configured to generate (e.g., to cut out) a rear-left image RL(Ot=i+FO+RO) (may also be referred to as rear-left patch) showing the first scene at the third point in time, t = i+FO+RO, and/or a rear-right image RR(Ot=i+FO+RO) (may also be referred to as rear-right patch) showing the second scene at the third point in time, t = i+FO+RO.
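Cutting out such a patch amounts to a fixed rectangular crop of the top-view image. A minimal sketch, assuming NumPy arrays; all pixel coordinates are hypothetical placeholders:

    import numpy as np

    def cut_patch(top_view: np.ndarray, x0: int, y0: int,
                  px: int, py: int) -> np.ndarray:
        """Cut a patch of lateral size px and longitudinal size py whose
        top-left corner is at pixel (x0, y0) of the top-view image."""
        return top_view[y0:y0 + py, x0:x0 + px].copy()

    # hypothetical patch locations within an 800x800 top view:
    # fl_patch = cut_patch(top_view_i, x0=150, y0=150, px=120, py=40)     # FL(Ot=i)
    # gt_patch = cut_patch(top_view_i_fo, x0=150, y0=380, px=120, py=40)  # L(Ot=i+FO)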
According to various aspects, the left image L(Ot=i+FO) may be symmetric with an optical axis of the left-lateral camera 106 and/or the right image R(Ot=i+FO) may be symmetric with an optical axis of the right-lateral camera 108.
According to various aspects, since the left image L(Ot=i+FO) is generated solely from the left-lateral image (and not stitched with the front image or rear image), the left image L(Ot=i+FO) may be a left ground truth image representing a ground truth of how the first scene should look. For the case that the front camera 102 and the left-lateral camera 106 are calibrated correctly, it is expected (e.g., in the case of uncorrupted real-world data) that the front-left image FL(Ot=i) (which also shows the first scene) corresponds substantially to the left ground truth image, i.e., the left image L(Ot=i+FO). Similarly, for the case that the rear camera 104 and the left-lateral camera 106 are calibrated correctly, it is expected that the rear-left image RL(Ot=i+FO+RO) corresponds substantially to the left ground truth image, i.e., the left image L(Ot=i+FO). This applies analogously to the right side, such that, in the case of correctly calibrated cameras, the front-right image FR(Ot=i) and the rear-right image RR(Ot=i+FO+RO) are expected to correspond to the right ground truth image, i.e., the right image R(Ot=i+FO).
The one or more processors 110 may be configured to compare the front-left image FL(Ot=i) and/or the rear-left image RL(Ot=i+FO+RO) with the (ground truth) left image L(Ot=i+FO) and/or the front-right image FR(Ot=i) and/or the rear-right image RR(Ot=i+FO+RO) with the (ground truth) right image R(Ot=i+FO) to evaluate the calibration of the front camera 102, the rear camera 104, the left-lateral camera 106, and/or the right-lateral camera 108. According to some aspects, the two images which are compared with each other may have a same size. According to other aspects, the two images which are compared with each other may have a different size from one another and the one or more processors 110 may be configured to pre-process the images to have the same size prior to comparing them. In the case that the vehicle 100 is not driving in a straight course, the scene (e.g., first scene or second scene) shown in the respective images on a (left or right) side may vary slightly (e.g., rotate and/or shift). The one or more processors 110 may be configured to rotate and/or shift (e.g., using rigid registration) at least one of the two images prior to comparing them. Optionally, the vehicle 100 may include an input device which allows a user (e.g., driver) to (alternatively or additionally) shift (may also be referred to as translate) and/or rotate the images manually. This may allow the user (e.g., driver) to manually verify the result of the evaluation.
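One possible realization of such a rigid registration step is sketched below using OpenCV's ECC alignment; the patent only states that rotation and/or shifting may be applied, so this concrete method choice is an assumption:

    import cv2
    import numpy as np

    def rigid_align(moving: np.ndarray, fixed: np.ndarray) -> np.ndarray:
        """Estimate a rotation+translation (Euclidean) warp aligning the
        'moving' patch to the 'fixed' patch, then apply it. Both inputs are
        expected as single-channel uint8 images of the same size."""
        warp = np.eye(2, 3, dtype=np.float32)
        criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 100, 1e-6)
        _, warp = cv2.findTransformECC(fixed.astype(np.float32) / 255.0,
                                       moving.astype(np.float32) / 255.0,
                                       warp, cv2.MOTION_EUCLIDEAN, criteria)
        h, w = fixed.shape
        return cv2.warpAffine(moving, warp, (w, h),
                              flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)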
The one or more processors 110 may be configured to compare two images with each other using any suitable metric. For example, the one or more processors 110 may be configured to determine a comparison value which represents a difference between the two images. It is understood that a similarity also represents a difference. As example, the comparison value may be or may represent a sum of absolute differences (SAD), a mean squared error (MSE), and/or a structural similarity index (SSIM).
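A minimal sketch of these metrics, assuming grayscale NumPy arrays of equal size; SAD and MSE are written out directly, while SSIM would typically come from a library such as scikit-image:

    import numpy as np

    def sad(a: np.ndarray, b: np.ndarray) -> float:
        """Sum of absolute differences; 0.0 means identical patches."""
        return float(np.sum(np.abs(a.astype(np.float64) - b.astype(np.float64))))

    def mse(a: np.ndarray, b: np.ndarray) -> float:
        """Mean squared error; 0.0 means identical patches."""
        return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

    # SSIM is typically taken from a library, e.g. scikit-image:
    # from skimage.metrics import structural_similarity
    # score = structural_similarity(a, b)  # 1.0 means identical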
According to various aspects, the vehicle 100 may include or may be coupled to a display device. The one or more processors 110 may be configured to determine a difference image representing the difference between the two images which are compared with each other. The one or more processors 110 may be configured to provide a control instruction to the display device to display the difference image (e.g., to a user, such as a driver, of the vehicle 100). Thereby, a visual comparison may be provided to the user (e.g., driver). Optionally, the difference image may include a pseudo coloring to indicate deviations from the ground truth image.
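A possible sketch of such a pseudo-colored difference image, assuming OpenCV and grayscale uint8 patches; the colormap choice is an assumption:

    import cv2
    import numpy as np

    def difference_heatmap(a: np.ndarray, b: np.ndarray) -> np.ndarray:
        """Absolute per-pixel difference of two grayscale uint8 patches,
        pseudo-colored so larger deviations from the ground truth appear hotter."""
        diff = cv2.absdiff(a, b)
        return cv2.applyColorMap(diff, cv2.COLORMAP_JET)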
The one or more processors 110 may be configured to determine that two images match with each other in the case that the determined comparison value represents a difference between the first image and the second image equal to or less than a predefined difference threshold value. The one or more processors 110 may be configured to determine, based on one or more comparisons, whether the cameras are calibrated correctly (or not). In the following, various exemplary cases are described to illustrate how to evaluate the calibration of the cameras: In the case that the front-left image FL(Ot=i) matches with the left image L(Ot=i+FO), the one or more processors 110 may determine that the front camera 102 and the left-lateral camera 106 are calibrated correctly. In the case that the rear-left image RL(Ot=i+FO+RO) matches with the left image L(Ot=i+FO), the one or more processors 110 may determine that the rear camera 104 and the left-lateral camera 106 are calibrated correctly.
In the case that the front-right image FR(Ot=i) matches with the right image R(Ot=i+FO), the one or more processors 110 may determine that the front camera 102 and the right-lateral camera 108 are calibrated correctly. In the case that the rear-right image RR(Ot=i+FO+RO) matches with the right image R(Ot=i+FO), the one or more processors 110 may determine that the rear camera 104 and the right-lateral camera 108 are calibrated correctly. In the case that the front-left image FL(Ot=i) matches with the left image L(Ot=i+FO) and that the rear-left image RL(Ot=i+FO+RO) does not match with the left image L(Ot=i+FO), the one or more processors 110 may determine that the front camera 102 and the left-lateral camera 106 are calibrated correctly and that the rear camera 104 is not calibrated correctly.
In the case that the front-left image FL(Ot=i) does not match with the left image L(Ot=i+FO) and that the rear-left image RL(Ot=i+FO+RO) matches with the left image L(Ot=i+FO), the one or more processors 110 may determine that the rear camera 104 and the left-lateral camera 106 are calibrated correctly and that the front camera 102 is not calibrated correctly.
In the case that the front-right image FR(Ot=i) matches with the right image R(Ot=i+FO) and that the rear-right image RR(Ot=i+FO+RO) does not match with the right image R(Ot=i+FO), the one or more processors 110 may determine that the front camera 102 and the right-lateral camera 108 are calibrated correctly and that the rear camera 104 is not calibrated correctly.
In the case that the front-right image FR(Ot=i) does not match with the right image R(Ot=i+FO) and that the rear-right image RR(Ot=i+FO+RO) matches with the right image R(Ot=i+FO), the one or more processors 110 may determine that the rear camera 104 and the right-lateral camera 108 are calibrated correctly and that the front camera 102 is not calibrated correctly.
In the case that the front-right image FR(Ot=i) matches with the right image R(Ot=i+FO), that the rear-right image RR(Ot=i+FO+RO) matches with the right image R(Ot=i+FO), and that the front-left image FL(Ot=i) and the rear-left image RL(Ot=i+FO+RO) do not match with the left image L(Ot=i+FO), the one or more processors 110 may determine that the front camera 102, the rear camera 104, and the right-lateral camera 108 are calibrated correctly and that the left-lateral camera 106 is not calibrated correctly.
In the case that the front-left image FL(Ot=i) matches with the left image L(Ot=i+FO), that the rear-left image RL(Ot=i+FO+RO) matches with the left image L(Ot=i+FO), and that the front-right image FR(Ot=i) and the rear-right image RR(Ot=i+FO+RO) do not match with the right image R(Ot=i+FO), the one or more processors 110 may determine that the front camera 102, the rear camera 104, and the left-lateral camera 106 are calibrated correctly and that the right-lateral camera 108 is not calibrated correctly.
It is understood that the above evaluations are examples and that there are further logic combinations which allow determining whether a camera is calibrated correctly (or not).
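The case distinctions above can be condensed into a small decision routine. The following sketch encodes only the combinations listed in this description (further combinations exist, as noted) and uses hypothetical boolean inputs indicating whether each test patch matched its ground truth patch:

    def diagnose(fl_ok: bool, rl_ok: bool, fr_ok: bool, rr_ok: bool) -> set:
        """Map the four match results (test patch vs. ground truth patch)
        to a set of cameras suspected of being calibrated incorrectly."""
        if fr_ok and rr_ok and not fl_ok and not rl_ok:
            return {"left-lateral"}   # both left comparisons fail, right side consistent
        if fl_ok and rl_ok and not fr_ok and not rr_ok:
            return {"right-lateral"}  # both right comparisons fail, left side consistent
        suspects = set()
        if not fl_ok and rl_ok:
            suspects.add("front")     # rear agrees with left ground truth, front does not
        if fl_ok and not rl_ok:
            suspects.add("rear")
        if not fr_ok and rr_ok:
            suspects.add("front")
        if fr_ok and not rr_ok:
            suspects.add("rear")
        return suspects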
For illustration, FIG.2C shows the front-left image FL(Ot=i) having a lateral size of Px and a longitudinal size of Py. According to various aspects, the longitudinal size Py may correspond to the number of pixels moved in two consecutive frames as captured by the left-lateral camera 106. At each frame, a respective top-view image Ot may be generated. Thus, in this case, a scene (e.g., the first scene or second scene) which is shown in the front-left image FL(Ot=i) may be shown, in a subsequent top-view image Ot=i+1, directly next to the front-left image FL(Ot=i+1), and so on. The same may apply for the front-right image FR(Ot=i). This is exemplarily shown in FIG.2D. As shown, FO may be the minimum number of top-view images, O, to be generated until the scene and the further scene are shown in the left image L(Ot=i+FO) and the right image R(Ot=i+FO), respectively. Analogously, RO may be the minimum number of top-view images, O, to be generated until the scene and the further scene are shown in the rear region (the rear-left image RL(Ot=i+FO+RO) and/or the rear-right image RR(Ot=i+FO+RO), respectively).
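Under these assumptions, FO follows from the pixel distance between the front-left patch and the left ground truth patch divided by the per-frame displacement Py. A minimal sketch; the offset quantity is a hypothetical placeholder, not a value given in this description:

    def frames_until_ground_truth(offset_px: int, py: int) -> int:
        """Minimum number of top-view frames (FO) until a scene currently in the
        front-left patch has moved into the left ground truth patch, given a
        per-frame longitudinal displacement of py pixels."""
        return -(-offset_px // py)  # ceiling division

    # offset_px: hypothetical pixel distance, along the driving direction,
    # between the front-left patch and the left ground truth patch:
    # FO = frames_until_ground_truth(offset_px=200, py=40)  # -> 5 frames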
According to various aspects, the herein described images (e.g., the front-left image, the front-right image, the rear-left image, and/or the rear-right image) may be generated using more than one top-view image. For example, a front-left image FL(Ot=i to i+FO) may be generated by concatenating all front-left images FL(Ot) from FL(Ot=i) to FL(Ot=i+FO) (i.e., the front-left images FL(Ot=i), FL(Ot=i+1), FL(Ot=i+2), ..., FL(Ot=i+FO)).
A front-right image FR(Ot=i to i+FO) may be generated by concatenating all front-right images FR(Ot) from FR(Ot=i) to FR(Ot=i+FO) (i.e., the front-right images FR(Ot=i), FR(Ot=i+1), FR(Ot=i+2), ..., FR(Ot=i+FO)). Analogously, a rear-left image may be generated by concatenating two or more rear-left images and/or a rear-right image may be generated by concatenating two or more rear-right images.
Correspondingly, the left ground truth image L(Ot=i+FO to i+FO+FO) may be generated by concatenating all left images L(Ot) from L(Ot=i+FO) to L(Ot=i+FO+FO) (i.e., the left images L(Ot=i+FO), L(Ot=i+FO+1), L(Ot=i+FO+2), ..., L(Ot=i+FO+FO)). Hence, the so-generated left ground truth image may show the same scene as the concatenated front-left image.
According to various aspects, the left ground truth image may be compared to the concatenated front-left image. Analogously, a right ground truth image R(Ot=i+FO to i+FO+FO) may be generated by concatenating all right images R(Ot) from R(Ot=i+FO) to R(Ot=i+FO+FO) (i.e., the right images R(Ot=i+FO), R(Ot=i+FO+1), ..., R(Ot=i+FO+FO)).
As described herein, an image (e.g., the front-left image FL(Ot=i), the front-right image, the rear-left image, and/or the rear-right image) may have a lateral size of Px and a longitudinal size of Py. Py may be the vehicle longitudinal displacement equivalent to a number of pixels moved in consecutive frames. According to various aspects, the longitudinal size Py may correspond to the number of pixels moved in two consecutive frames as captured by the respective camera. In the case of concatenating two or more images, the (left and/or right) ground truth image (and optionally the concatenated front-left image, the concatenated front-right image, the concatenated rear-left image, and/or the concatenated rear-right image) may be generated by buffering two or more top-view images. The ground truth image may, for example, have a lateral size Xg equal to Px. The ground truth image may, for example, have a longitudinal size Yg. The longitudinal size Yg of the ground truth image may correspond to a product of the longitudinal size Py of the respective image and the number of images which are concatenated. For example, in the case that FO images are concatenated, Yg may correspond to Py*FO. According to some aspects, RO may be equal to FO. In this case, there may be the following conditional relation: (Px == Xg) && (FO * Py == Yg) && (RO * Py == Yg).
Various aspects relate to an evaluation of a calibration. This calibration may be an online calibration (e.g., to adapt for online changes as described herein). As described herein, the ground truth image(s) can be generated using only data generated from the cameras. This allows, for example, generating ground truth data with reduced costs for devices as well as reduced computing costs. Other approaches, such as using a reprojection error, are not suitable to evaluate an online calibration since using the reprojection error requires geometrical knowledge of the scene (which may not be known and usually cannot be acquired online).
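A possible sketch of the buffering, concatenation, and size consistency check just described; the buffer class and its naming are assumptions:

    from collections import deque

    import numpy as np

    class PatchBuffer:
        """Keeps the most recent n patches and concatenates them along the
        longitudinal (row) axis into a larger test or ground truth image."""
        def __init__(self, n: int):
            self.patches = deque(maxlen=n)

        def push(self, patch: np.ndarray) -> None:
            self.patches.append(patch)

        def concatenated(self) -> np.ndarray:
            return np.concatenate(list(self.patches), axis=0)

    def sizes_consistent(px: int, py: int, fo: int, ro: int,
                         xg: int, yg: int) -> bool:
        # mirrors the conditional relation from the description above
        return (px == xg) and (fo * py == yg) and (ro * py == yg)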
FIG.3 shows a flow diagram of a method 300 for evaluating a calibration of at least one camera according to various aspects. The method 300 may include receiving, from a longitudinal camera (e.g., the front camera 102 or the rear camera 104), a first longitudinal image showing a surrounding in front of the vehicle or behind the vehicle at a first point in time (e.g., at t = i or at t = i+FO+RO) (in 302). The method 300 may include receiving, from a lateral camera (e.g., the left-lateral camera 106 or the right-lateral camera 108), a first lateral image showing a surrounding in a lateral direction next to the vehicle at the first point in time (in 304). The method 300 may include receiving, from the longitudinal camera, a second longitudinal image showing the surrounding in front of or behind the vehicle at a second point in time (e.g., at t = i+FO) different from the first point in time (in 306). The method 300 may include receiving, from the lateral camera, a second lateral image showing the surrounding in the lateral direction next to the vehicle at the second point in time (in 308). For example, a portion of the second lateral image shows a scene which is not shown in the second longitudinal image and which is, at the first point in time, shown in the first longitudinal image and the first lateral image. The method 300 may include generating, using the first longitudinal image and the first lateral image, a first image (e.g., a front-left image FL(Ot=i), a rear-left image RL(Ot=i+FO+RO), a front-right image FR(Ot=i), or a rear-right image RR(Ot=i+FO+RO)) showing a top view on the scene at the first point in time (in 310). The method 300 may include generating, using the second lateral image, a second image (e.g., a left image L(Ot=i+FO) or a right image R(Ot=i+FO)) showing a top view on the scene at the second point in time (in 312).
The method 300 may include determining, whether the longitudinal camera and the lateral camera are calibrated correctly by comparing the first image with the second image (in 314). Capturing an image, generating a top-view image, generating an image from the top-view image, comparing two images with each other, etc. may be carried out as described herein with reference to FIG.1A to FIG.2D.
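Tying the earlier sketches together, a hypothetical end-to-end routine for one camera pair (front and left-lateral) could look as follows. It reuses compose_top_view, cut_patch, and patches_match from the sketches above; all parameter structures are assumptions, not part of the method as claimed:

    def evaluate_calibration(images_t1, images_t2, homographies, patch_coords,
                             threshold=10.0):
        """End-to-end sketch of method 300 for the front/left-lateral pair.
        images_t1: dict of camera images at t = i (at least 'front' and 'left');
        images_t2: dict of camera images at t = i+FO (at least 'left')."""
        # t = i: top view merged from front and left-lateral camera (302, 304, 310)
        top_t1 = compose_top_view(images_t1, homographies)
        # t = i+FO: top view generated from the left-lateral camera only (308, 312)
        top_t2 = compose_top_view({"left": images_t2["left"]},
                                  {"left": homographies["left"]})
        fl = cut_patch(top_t1, *patch_coords["front_left"])  # test patch FL(Ot=i)
        gt = cut_patch(top_t2, *patch_coords["left"])        # ground truth L(Ot=i+FO)
        return patches_match(fl, gt, threshold)              # comparison (314)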
In the following, various aspects of this disclosure will be illustrated. It is noted that aspects described with reference to a control device or a vehicle may be accordingly implemented in a method, and vice versa.
Example 1 is a vehicle including: a longitudinal camera configured to capture a longitudinal image showing a surrounding in front of the vehicle or behind the vehicle; a lateral camera configured to capture a lateral image showing a surrounding in a lateral direction next to the vehicle, wherein the longitudinal camera and the lateral camera have a partially overlapping field-of-view such that a portion of the longitudinal image and a portion of the lateral image show a same region in the surrounding of the vehicle; one or more processors configured to: receive, from the longitudinal camera, a first longitudinal image showing the surrounding in front of or behind the vehicle at a first point in time; receive, from the lateral camera, a first lateral image showing the surrounding in the lateral direction next to the vehicle at the first point in time; receive, from the lateral camera, a second lateral image showing the surrounding in the lateral direction next to the vehicle at a second point in time different from the first point in time, wherein a portion of the second lateral image shows a scene which is, at the first point in time, shown in the first longitudinal image and the first lateral image; generate, using the first longitudinal image and the first lateral image, a first image showing a top view on the scene at the first point in time; generate, using the second lateral image, a second image showing a top view on the scene at the second point in time; and determine, whether the longitudinal camera and the lateral camera are calibrated correctly by comparing the first image with the second image.
This allows generating test images (as test data) as well as ground truth images (as ground truth data) during driving (i.e., online), enabling an evaluation of the calibration of the sensors. For example, this allows determining whether a sensor is calibrated correctly or whether the sensor should be re-calibrated.
In Example 2, the subject matter of Example 1 can optionally include that the one or more processors are configured to: determine, by comparing the first image with the second image, a comparison value representing a difference between the first image and the second image; determine, whether the comparison value represents a difference between the first image and the second image equal to or less than a predefined difference threshold value; and in the case that it is determined that the comparison value represents a difference between the first image and the second image equal to or less than the predefined difference threshold value, determine that the longitudinal camera and the lateral camera are calibrated correctly.
This example details how the first image and the second image may be compared with each other.
In Example 3, the subject matter of Example 1 or 2 can optionally include that the one or more processors are configured to: generate, using the first longitudinal image and the first lateral image, a first top-view image showing a top view of at least the surrounding in the lateral direction next to the vehicle and the surrounding in front of or behind the vehicle at the first point in time, and generate the first image by extracting a portion of the first top-view image; and/or generate, using the second lateral image, a second top-view image showing a top view of at least the surrounding in the lateral direction next to the vehicle at the second point in time, and generate the second image by extracting a portion of the second top-view image.
This allows test images (as test data) as well as ground truth images (as ground truth data) to be generated (e.g., continuously) from top-view images.
In Example 4, the subject matter of any one of Examples 1 to 3 can optionally include that the first image and the second image have a same image size.
This facilitates the comparison between the first image and the second image.
In Example 5, the subject matter of any one of Examples 1 to 4 can optionally include that the longitudinal camera is configured to capture consecutive longitudinal images with a predefined frame rate; wherein the first image and/or the second image have a longitudinal image size in a longitudinal direction perpendicular to the lateral direction substantially equal to a number of pixels moved in two consecutive longitudinal images.
This allows test data images showing a respective scene to be generated successively without any overlap between two consecutive images.

In Example 6, the subject matter of any one of Examples 1 to 5 can optionally include that the one or more processors are configured to: receive, from the longitudinal camera, a second longitudinal image showing the surrounding in front of or behind the vehicle at the second point in time, wherein the second longitudinal image does not show the scene.
In Example 7, the subject matter of any one of Examples 1 to 6 can optionally include that the longitudinal camera is a front camera configured to capture a front image as the longitudinal image showing the surrounding in front of the vehicle; wherein the vehicle further includes a rear camera configured to capture a rear image showing a surrounding behind the vehicle, wherein the rear camera and the lateral camera have a partially overlapping field-of-view such that a portion of the rear image and a further portion of the lateral image show a same region in the surrounding of the vehicle (wherein the further portion of the lateral image is different from the portion of the lateral image); wherein the one or more processors are configured to: receive, from the rear camera, a first rear image showing the surrounding behind the vehicle at a third point in time, wherein the second point in time is temporally between the first point in time and the third point in time; receive, from the lateral camera, a third lateral image showing the surrounding in the lateral direction next to the vehicle at the third point in time, wherein the scene is, at the third point in time, shown in the first rear image and the third lateral image; generate, using the first rear image and the third lateral image, a third image showing a top view on the scene at the third point in time; and determine, whether the rear camera and the lateral camera are calibrated correctly by comparing the third image with the second image.
This allows generating a further test image (as test data) associated with a same ground truth image as the previous test image. Hence, each ground truth image may be the ground truth for two different test images (one associated with a front region of the vehicle and the other associated with a rear region of the vehicle), thereby increasing the efficiency of data generation. This further allows determining whether the camera which generates the ground truth image is wrongly calibrated.

In Example 8, the subject matter of Example 7 can optionally include that the one or more processors are configured to: determine, by comparing the third image with the second image, a further comparison value representing a difference between the third image and the second image; and determine, whether the further comparison value represents a difference between the third image and the second image equal to or less than a further predefined difference threshold value; and in the case that it is determined that the further comparison value represents a difference between the third image and the second image equal to or less than the further predefined difference threshold value, determine that the rear camera and the lateral camera are calibrated correctly.
This example details how the third image and the second image may be compared with each other.
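As an illustrative sketch of the comparisons in Examples 2 and 8, the comparison value may for instance be computed as a mean absolute pixel difference; the metric and the threshold value below are assumptions, since the disclosure leaves both open.

```python
import numpy as np

def comparison_value(img_a: np.ndarray, img_b: np.ndarray) -> float:
    """One possible comparison value: mean absolute pixel difference.

    Both images are assumed to be same-sized top views of the scene
    (cf. Example 4).
    """
    assert img_a.shape == img_b.shape
    diff = img_a.astype(np.int16) - img_b.astype(np.int16)
    return float(np.mean(np.abs(diff)))

DIFF_THRESHOLD = 8.0  # assumed tuning value, not taken from the disclosure

def cameras_calibrated(img_a, img_b, threshold=DIFF_THRESHOLD) -> bool:
    # A difference equal to or less than the threshold is treated as
    # "calibrated correctly" for this camera pair.
    return comparison_value(img_a, img_b) <= threshold
```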
In Example 9, the subject matter of Examples 2 and 8 can optionally include that the one or more processors are configured to: in the case that it is determined that the comparison value represents a difference between the first image and the second image greater than the predefined difference threshold value and that it is determined that the further comparison value represents a difference between the third image and the second image equal to or less than the further predefined difference threshold value, determine that the front camera is not calibrated correctly; and/or in the case that it is determined that the comparison value represents a difference between the first image and the second image equal to or less than the predefined difference threshold value and that it is determined that the further comparison value represents a difference between the third image and the second image greater than the further predefined difference threshold value, determine that the rear camera is not calibrated correctly.
This details examples of how to determine which sensor is calibrated correctly and which is not.
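The decision logic of Example 9 can be sketched as a small decision table; the function and flag names below are illustrative only.

```python
def localise_fault(front_vs_gt_ok: bool, rear_vs_gt_ok: bool) -> str:
    """Sketch of Example 9, assuming the lateral (ground-truth) camera
    itself is fine. front_vs_gt_ok: first vs. second image within the
    threshold; rear_vs_gt_ok: third vs. second image within the threshold."""
    if not front_vs_gt_ok and rear_vs_gt_ok:
        return "front camera not calibrated correctly"
    if front_vs_gt_ok and not rear_vs_gt_ok:
        return "rear camera not calibrated correctly"
    if front_vs_gt_ok and rear_vs_gt_ok:
        return "front, rear and lateral cameras calibrated correctly"
    return "inconclusive; the lateral camera itself may be miscalibrated"
```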
In Example 10, the subject matter of any one of Examples 1 to 9, provided in combination with Example 7, can optionally further include that the lateral camera is a left-lateral camera configured to capture a left-lateral image as the lateral image showing the surrounding left to the vehicle; wherein the vehicle further includes a right-lateral camera configured to capture a right-lateral image showing a surrounding right to the vehicle, wherein the right-lateral camera and the front camera have a partially overlapping field-of-view such that a further portion of the front image and a first portion of the right-lateral image show a same region in the surrounding of the vehicle, and wherein the right-lateral camera and the rear camera have a partially overlapping field-of-view such that a further portion of the rear image and a second portion of the right-lateral image show a same region in the surrounding of the vehicle; wherein the one or more processors are configured to: receive, from the right-lateral camera: a first right-lateral image showing the surrounding right to the vehicle at the first point in time, a second right-lateral image showing the surrounding right to the vehicle at the second point in time, and a third right-lateral image showing the surrounding right to the vehicle at the third point in time, wherein a portion of the second right-lateral image shows a further scene which is, at the first point in time, shown in the first longitudinal image and the first right-lateral image and which is, at the third point in time, shown in the first rear image and the third right-lateral image; and generate, using the first longitudinal image and the first right-lateral image, a fourth image showing a top view on the further scene at the first point in time; generate, using the second right-lateral image, a fifth image showing a top view on the further scene at the second point in time; generate, using the first rear image and the third right-lateral image, a sixth image showing a top view on the further scene at the third point in time; and determine, whether the front camera, the rear camera, and/or the right-lateral camera are calibrated correctly by comparing the fourth image with the fifth image and comparing the sixth image with the fifth image.
This allows test images and ground truth images to be generated steadily on both sides of the vehicle, left and right, thereby further increasing the efficiency of generating test data and ground truth data.
In Example 11, the subject matter of Examples 3 and 10 can optionally include that the one or more processors are configured to: generate the first top-view image using the first longitudinal image, the first rear image, the first lateral image, and the first right-lateral image such that the first top-view image shows a top view of the surrounding completely around the vehicle.
This further increases the efficiency of data generation since the test images (left and right) can be generated from a same top-view image, and likewise the ground truth images (left and right) can be generated from a same top-view image.
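A minimal sketch of the composition in Example 11, assuming all four inputs are already warped to a common top-view ground resolution and canvas size; the canvas and portion sizes are placeholder values, and overlapping corner regions are simply overwritten here, whereas a real stitcher would blend them (and handle the rear camera's orientation).

```python
import numpy as np

H, W = 800, 600   # canvas size in pixels (assumed)
STRIP = 200       # depth of each camera's portion (assumed)

def compose_top_view(front, rear, left, right):
    """Build one top view around the whole vehicle from four
    already-rectified top-view patches of shape (H, W, 3)."""
    canvas = np.zeros((H, W, 3), dtype=np.uint8)
    canvas[:STRIP, :]  = front[:STRIP, :W]   # front portion
    canvas[-STRIP:, :] = rear[:STRIP, :W]    # rear portion
    canvas[:, :STRIP]  = left[:H, :STRIP]    # left-lateral portion
    canvas[:, -STRIP:] = right[:H, :STRIP]   # right-lateral portion
    return canvas

# usage with dummy inputs
front = rear = left = right = np.zeros((H, W, 3), dtype=np.uint8)
top_view = compose_top_view(front, rear, left, right)  # (800, 600, 3)
```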
In Example 12, the subject matter of Example 10 or 11 in combination with Example 8 or 9 can optionally include that the one or more processors are configured to: determine, by comparing the fourth image with the fifth image, a first comparison value representing a difference between the fourth image and the fifth image; determine, whether the first comparison value represents a difference between the fourth image and the fifth image equal to or less than a first predefined difference threshold value; determine, by comparing the sixth image with the fifth image, a second comparison value representing a difference between the sixth image and the fifth image; determine, whether the second comparison value represents a difference between the sixth image and the fifth image equal to or less than a second predefined difference threshold value; and in the case that it is determined that: the comparison value represents a difference between the first image and the second image equal to or less than the predefined difference threshold value, the further comparison value represents a difference between the third image and the second image equal to or less than the further predefined difference threshold value, the first comparison value represents a difference between the fourth image and the fifth image greater than the first predefined difference threshold value, and the second comparison value represents a difference between the sixth image and the fifth image greater than the second predefined difference threshold value, determine that the right-lateral camera is not calibrated correctly (and optionally further that the rear camera, the front camera, and the left-lateral camera are calibrated correctly).
This example details how to determine whether the right-lateral camera, which generates the ground truth image, is wrongly calibrated.
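The cross-check of Example 12 reduces to a small boolean rule; a sketch with illustrative flag names:

```python
def ground_truth_camera_suspect(left_front_ok: bool, left_rear_ok: bool,
                                right_front_ok: bool, right_rear_ok: bool) -> bool:
    """If both comparisons on the left side pass but both comparisons
    against the right-lateral ground truth fail, the common factor is
    the right-lateral camera itself."""
    return (left_front_ok and left_rear_ok
            and not right_front_ok and not right_rear_ok)
```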
In Example 13, the subject matter of any one of Examples 1 to 12 can optionally include that the one or more processors are configured to, when comparing two images with each other, generate a difference image representing a difference between the two images, and to provide control instructions to control a display device to display the difference image.
This allows a driver (or co-driver) to be informed about the sensor calibration.
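A sketch of Example 13 using OpenCV's absdiff; the desktop window below stands in for the vehicle's display device, which an ECU would drive via its own display interface.

```python
import cv2

def show_difference(img_a, img_b, window: str = "calibration check"):
    """Generate and display a difference image: misalignment between the
    test image and the ground-truth image becomes directly visible."""
    diff = cv2.absdiff(img_a, img_b)  # per-pixel absolute difference
    cv2.imshow(window, diff)
    cv2.waitKey(1)
```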
In Example 14, the subject matter of any one of Examples 1 to 13 can optionally include that the one or more processors are configured to, prior to comparing two images with each other, rotate and/or shift at least one of the two images (e.g., via rigid registration).
This allows consideration of the use case in which the vehicle does not move in a straight line but steers to the left or right.
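The rotation and/or shift of Example 14 (rigid registration) could, for instance, be implemented with OpenCV's ECC alignment; a sketch assuming single-channel (grayscale) top-view images:

```python
import cv2
import numpy as np

def rigid_align(moving: np.ndarray, fixed: np.ndarray) -> np.ndarray:
    """Estimate a rotation-plus-translation (Euclidean) warp mapping
    `moving` onto `fixed`, compensating steering between the two capture
    times, then resample `moving` before the comparison."""
    fixed32 = fixed.astype(np.float32)    # ECC needs float32 inputs
    moving32 = moving.astype(np.float32)
    warp = np.eye(2, 3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 100, 1e-6)
    _, warp = cv2.findTransformECC(fixed32, moving32, warp,
                                   cv2.MOTION_EUCLIDEAN, criteria)
    h, w = fixed.shape[:2]
    # WARP_INVERSE_MAP because ECC returns the map from `moving` to `fixed`.
    return cv2.warpAffine(moving, warp, (w, h),
                          flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
```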
In Example 15, the subject matter of any one of Examples 1 to 14 can optionally include that the one or more processors are configured to, in the case that it is determined that a camera (e.g., the front camera, the rear camera, the left-lateral camera and/or the right-lateral camera) is not calibrated correctly, provide control instructions to control a re-calibration of the camera.
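Tying the above together, the re-calibration trigger of Example 15 can be sketched as a periodic monitor; get_pair and recalibrate are hypothetical hooks into the vehicle software, and comparison_value is the metric sketched earlier.

```python
import time

def calibration_monitor(get_pair, recalibrate,
                        threshold: float = 8.0, period_s: float = 1.0):
    """Periodically compare a test image against its ground-truth image
    and trigger re-calibration when a camera drifts out of tolerance.
    Threshold and period are assumed tuning values."""
    while True:
        test_img, ground_truth_img, camera_id = get_pair()
        if comparison_value(test_img, ground_truth_img) > threshold:
            recalibrate(camera_id)  # e.g. schedule an online extrinsic re-estimation
        time.sleep(period_s)
```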
Example 16 is a method for evaluating a calibration of one or more cameras mounted to a vehicle, the method including: receiving, from a longitudinal camera, a first longitudinal image showing a surrounding in front of the vehicle or behind the vehicle at a first point in time; receiving, from a lateral camera, a first lateral image showing a surrounding in a lateral direction next to the vehicle at the first point in time; receiving, from the longitudinal camera, a second longitudinal image showing the surrounding in front of or behind the vehicle at a second point in time different from the first point in time; receiving, from the lateral camera, a second lateral image showing the surrounding in the lateral direction next to the vehicle at the second point in time, wherein a portion of the second lateral image shows a scene which is not shown in the second longitudinal image and which is, at the first point in time, shown in the first longitudinal image and the first lateral image; generating, using the first longitudinal image and the first lateral image, a first image showing a top view on the scene at the first point in time; generating, using the second lateral image, a second image showing a top view on the scene at the second point in time; and determining, whether the longitudinal camera and the lateral camera are calibrated correctly by comparing the first image with the second image.
In Example 17, the method of Example 16 can optionally further include: determining, by comparing the first image with the second image, a comparison value representing a difference between the first image and the second image; determining, whether the comparison value represents a difference between the first image and the second image equal to or less than a predefined difference threshold value; and in the case that it is determined that the comparison value represents a difference between the first image and the second image equal to or less than the predefined difference threshold value, determining that the longitudinal camera and the lateral camera are calibrated correctly.
In Example 18, the method of Example 16 or 17 can optionally further include: generating, using the first longitudinal image and the first lateral image, a first top-view image showing a top view of at least the surrounding in the lateral direction next to the vehicle and the surrounding in front or behind the vehicle at the first point in time, and generating the first image by extracting a portion of the first top-view image; and/or generating, using the second lateral image, a second top-view image showing a top view of at least the surrounding in the lateral direction next to the vehicle at the second point in time, and generating the second image by extracting a portion of the second top-view image.
In Example 19, the subject matter of any one of Examples 16 to 18 can optionally include that the first image and the second image have a same image size.
In Example 20, the subject matter of any one of Examples 16 to 19 can optionally include that the longitudinal camera is configured to capture consecutive longitudinal images with a predefined frame rate; wherein the first image and/or the second image have a longitudinal image size in a longitudinal direction perpendicular to the lateral direction substantially equal to a number of pixels moved in two consecutive longitudinal images.
In Example 21, the subject matter of any one of Examples 16 to 20 can optionally include that the longitudinal camera is a front camera; wherein the method further includes: receiving, from a rear camera, a first rear image showing a surrounding behind the vehicle at a third point in time, wherein the second point in time is temporally between the first point in time and the third point in time; receiving, from the lateral camera, a third lateral image showing the surrounding in the lateral direction next to the vehicle at the third point in time, wherein the scene is, at the third point in time, shown in the first rear image and the third lateral image; generating, using the first rear image and the third lateral image, a third image showing a top view on the scene at the third point in time; determining, whether the rear camera and the lateral camera are calibrated correctly by comparing the third image with the second image.
In Example 22, the method of Example 21 can optionally further include: receiving, from the rear camera, a second rear image showing the surrounding behind the vehicle at the second point in time, wherein the second rear image does not show the scene.
In Example 23, the method of Example 21 or 22 can optionally further include: determining, by comparing the third image with the second image, a further comparison value representing a difference between the third image and the second image; and determining, whether the further comparison value represents a difference between the third image and the second image equal to or less than a further predefined difference threshold value; and in the case that it is determined that the further comparison value represents a difference between the third image and the second image equal to or less than the further predefined difference threshold value, determining that the rear camera and the lateral camera are calibrated correctly.
In Example 24, the method of Examples 17 and 23 can optionally further include: in the case that it is determined that the comparison value represents a difference between the first image and the second image greater than the predefined difference threshold value and that it is determined that the further comparison value represents a difference between the third image and the second image equal to or less than the further predefined difference threshold value, determining that the front camera is not calibrated correctly; and/or in the case that it is determined that the comparison value represents a difference between the first image and the second image equal to or less than the predefined difference threshold value and that it is determined that the further comparison value represents a difference between the third image and the second image greater than the further predefined difference threshold value, determining that the rear camera is not calibrated correctly.
In Example 25, the subject matter of any one of Examples 16 to 24, provided in combination with Example 21, can optionally further include that the lateral camera is a left-lateral camera for capturing left-lateral images showing the surrounding left to the vehicle and that the method further includes: receiving, from a right-lateral camera: a first right-lateral image showing a surrounding right to the vehicle at the first point in time, a second right-lateral image showing the surrounding right to the vehicle at the second point in time, and a third right-lateral image showing the surrounding right to the vehicle at the third point in time, wherein a portion of the second right-lateral image shows a further scene which is shown in the first longitudinal image, the first right-lateral image, the first rear image, and the third right-lateral image, and which is not shown in the second longitudinal image (and which, in combination with Example 22, is not shown in the second rear image); and generating, using the first longitudinal image and the first right-lateral image, a fourth image showing a top view on the further scene at the first point in time; generating, using the second right-lateral image, a fifth image showing a top view on the further scene at the second point in time; generating, using the first rear image and the third right-lateral image, a sixth image showing a top view on the further scene at the third point in time; and determining, whether the front camera, the rear camera, and/or the right-lateral camera are calibrated correctly by comparing the fourth image with the fifth image and comparing the sixth image with the fifth image.
In Example 26, the method of Examples 18 and 25 can optionally further include: generating the first top-view image using the first longitudinal image, the first rear image, the first lateral image, and the first right-lateral image such that the first top-view image shows a top view of the surrounding completely around the vehicle.

In Example 27, the method of Example 25 or 26 in combination with Example 23 or 24 can optionally further include: determining, by comparing the fourth image with the fifth image, a first comparison value representing a difference between the fourth image and the fifth image; determining, whether the first comparison value represents a difference between the fourth image and the fifth image equal to or less than a first predefined difference threshold value; determining, by comparing the sixth image with the fifth image, a second comparison value representing a difference between the sixth image and the fifth image; determining, whether the second comparison value represents a difference between the sixth image and the fifth image equal to or less than a second predefined difference threshold value; and in the case that it is determined that: the comparison value represents a difference between the first image and the second image equal to or less than the predefined difference threshold value, the further comparison value represents a difference between the third image and the second image equal to or less than the further predefined difference threshold value, the first comparison value represents a difference between the fourth image and the fifth image greater than the first predefined difference threshold value, and the second comparison value represents a difference between the sixth image and the fifth image greater than the second predefined difference threshold value, determining that the right-lateral camera is not calibrated correctly (and optionally further that the rear camera, the front camera, and the left-lateral camera are calibrated correctly).
In Example 28, the method of any one of Examples 16 to 27 can optionally further include: when comparing two images with each other, generating a difference image representing a difference between the two images, and providing control instructions to control a display device to display the difference image.
In Example 29, the method of any one of Examples 16 to 28 can optionally further include: prior to comparing two images with each other, rotating and/or shifting at least one of the two images (e.g., via rigid registration).
In Example 30, the method of any one of Examples 16 to 29 can optionally further include: in the case that it is determined that a camera (e.g., the front camera, the rear camera, the left-lateral camera, and/or the right-lateral camera) is not calibrated correctly, providing control instructions to control a re-calibration of the camera.
Example 31 is a non-transitory computer-readable medium having instructions recorded thereon which, when executed by a processor of a vehicle, cause the processor to carry out the method according to any one of Examples 16 to 30.

Example 32 is a control device for controlling a vehicle, including: one or more processors configured to: receive, from a longitudinal camera of a vehicle, a first longitudinal image showing a surrounding in front of the vehicle or behind the vehicle at a first point in time; receive, from a lateral camera of the vehicle, a first lateral image showing a surrounding in a lateral direction next to the vehicle at the first point in time; receive, from the lateral camera, a second lateral image showing the surrounding in the lateral direction next to the vehicle at a second point in time different from the first point in time, wherein a portion of the second lateral image shows a scene which is, at the first point in time, shown in the first longitudinal image and the first lateral image; generate, using the first longitudinal image and the first lateral image, a first image showing a top view on the scene at the first point in time; generate, using the second lateral image, a second image showing a top view on the scene at the second point in time; and determine, whether the longitudinal camera and the lateral camera are calibrated correctly by comparing the first image with the second image.
In Example 33, the subject matter of Example 32 can optionally include that the one or more processors are configured to: determine, by comparing the first image with the second image, a comparison value representing a difference between the first image and the second image; determine, whether the comparison value represents a io difference between the first image and the second image equal to or less than a predefined difference threshold value; and in the case that it is determined that the comparison value represents a difference between the first image and the second image equal to or less than the predefined difference threshold value, determine that the longitudinal camera and the lateral camera are calibrated correctly.
In Example 34, the subject matter of Example 32 or 33 can optionally include that the one or more processors are configured to: generate, using the first longitudinal image and the first lateral image, a first top-view image showing a top view of at least the surrounding in the lateral direction next to the vehicle and the surrounding in front or behind the vehicle at the first point in time, and generate the first image by extracting a portion of the first top-view image; and/or generate, using the second lateral image, a second top-view image showing a top view of at least the surrounding in the lateral direction next to the vehicle at the second point in time, and generate the second image by extracting a portion of the second top-view image.

In Example 35, the subject matter of any one of Examples 32 to 34 can optionally include that the first image and the second image have a same image size.
In Example 36, the subject matter of any one of Examples 32 to 35 can optionally include that the longitudinal camera is configured to capture consecutive longitudinal images with a predefined frame rate; wherein the first image and/or the second image have a longitudinal image size in a longitudinal direction perpendicular to the lateral direction substantially equal to a number of pixels moved in two consecutive longitudinal images.
In Example 37, the subject matter of any one of Examples 32 to 36 can optionally include that the one or more processors are configured to: receive, from the longitudinal camera, a second longitudinal image showing the surrounding in front of or behind the vehicle at the second point in time, wherein the second longitudinal image does not show the scene.
In Example 38, the subject matter of any one of Examples 32 to 37 can optionally include that the longitudinal camera is a front camera configured to capture a front image as the longitudinal image showing the surrounding in front of the vehicle; wherein the one or more processors are configured to: receive, from a rear camera of the vehicle, a first rear image showing a surrounding behind the vehicle at a third point in time, wherein the second point in time is temporally between the first point in time and the third point in time; receive, from the lateral camera, a third lateral image showing the surrounding in the lateral direction next to the vehicle at the third point in time, wherein the scene is, at the third point in time, shown in the first rear image and the third lateral image; generate, using the first rear image and the third lateral image, a third image showing a top view on the scene at the third point in time; determine, whether the rear camera and the lateral camera are calibrated correctly by comparing the third image with the second image.
In Example 39, the subject matter of Example 38 can optionally include that the one or more processors are configured to: determine, by comparing the third image with the second image, a further comparison value representing a difference between the third image and the second image; and determine, whether the further comparison value represents a difference between the third image and the second image equal to or less than a further predefined difference threshold value; and in the case that it is determined that the further comparison value represents a difference between the third image and the second image equal to or less than the further predefined difference threshold value, determine that the rear camera and the lateral camera are calibrated correctly.
In Example 40, the subject matter of Examples 33 and 39 can optionally include that the one or more processors are configured to: in the case that it is determined that the comparison value represents a difference between the first image and the second image greater than the predefined difference threshold value and that it is determined that the further comparison value represents a difference between the third image and the second image equal to or less than the further predefined difference threshold value, determine that the front camera is not calibrated correctly; and/or in the case that it is determined that the comparison value represents a difference between the first image and the second image equal to or less than the predefined difference threshold value and that it is determined that the further comparison value represents a difference between the third image and the second image greater than the further predefined difference threshold value, determine that the rear camera is not calibrated correctly.
In Example 41, the subject matter of any one of Examples 32 to 40, provided in combination with Example 38, can optionally include that the lateral camera is a left-lateral camera configured to capture a left-lateral image as the lateral image showing the surrounding left to the vehicle; and that the one or more processors are configured to: receive, from a right-lateral camera of the vehicle: a first right-lateral image showing the surrounding right to the vehicle at the first point in time, a second right-lateral image showing the surrounding right to the vehicle at the second point in time, and a third right-lateral image showing the surrounding right to the vehicle at the third point in time, wherein a portion of the second right-lateral image shows a further scene which is, at the first point in time, shown in the first longitudinal image and the first right-lateral image and which is, at the third point in time, shown in the first rear image and the third right-lateral image; and generate, using the first longitudinal image and the first right-lateral image, a fourth image showing a top view on the further scene at the first point in time; generate, using the second right-lateral image, a fifth image showing a top view on the further scene at the second point in time; generate, using the first rear image and the third right-lateral image, a sixth image showing a top view on the further scene at the third point in time; and determine, whether the front camera, the rear camera, and/or the right-lateral camera are calibrated correctly by comparing the fourth image with the fifth image and comparing the sixth image with the fifth image.
In Example 42, the subject matter of Examples 34 and 41 can optionally include that the one or more processors are configured to: generate the first top-view image using the first longitudinal image, the first rear image, the first lateral image, and the first right-lateral image such that the first top-view image shows a top view of the surrounding completely around the vehicle.
In Example 43, the subject matter of Example 41 or 42 in combination with Example 39 or 40 can optionally further include that the one or more processors are configured to: determine, by comparing the fourth image with the fifth image, a first comparison value representing a difference between the fourth image and the fifth image; determine, whether the first comparison value represents a difference between the fourth image and the fifth image equal to or less than a first predefined difference threshold value; determine, by comparing the sixth image with the fifth image, a second comparison value representing a difference between the sixth image and the fifth image; determine, whether the second comparison value represents a difference between the sixth image and the fifth image equal to or less than a second predefined difference threshold value; and in the case that it is determined that: the comparison value represents a difference between the first image and the second image equal to or less than the predefined difference threshold value, the further comparison value represents a difference between the third image and the second image equal to or less than the further predefined difference threshold value, the first comparison value represents a difference between the fourth image and the fifth image greater than the first predefined difference threshold value, and the second comparison value represents a difference between the sixth image and the fifth image greater than the second predefined difference threshold value, determine that the right-lateral camera is not calibrated correctly (and optionally further that the rear camera, the front camera, and the left-lateral camera are calibrated correctly).
In Example 44, the subject matter of any one of Examples 32 to 43 can optionally include that the one or more processors are configured to, when comparing two images with each other, generate a difference image representing a difference between the two images, and to provide control instructions to control a display device to display the difference image.
In Example 45, the subject matter of any one of Examples 32 to 44 can optionally include that the one or more processors are configured to, prior to comparing two images with each other, rotate and/or shift at least one of the two images (e.g., via rigid registration).
In Example 46, the subject matter of any one of Examples 32 to 45 can optionally include that the one or more processors are configured to, in the case that it is determined that a camera (e.g., the front camera, the rear camera, the left-lateral camera and/or the right-lateral camera) is not calibrated correctly, provide control instructions to control a re-calibration of the camera.
REFERENCE SIGNS
12: Front direction
14: Rear direction
16: Left-lateral direction
18: Right-lateral direction
20: Top-view direction
100: Vehicle
102: Front camera
104: Rear camera
106: Left-lateral camera
108: Right-lateral camera
110: One or more processors
112: Front portion
114: Rear portion
116: Left-lateral portion
118: Right-lateral portion
120: Front-left portion
122: Front-right portion
124: Rear-left portion
126: Rear-right portion
200: Region
300: Method
302-314: Method features

Claims (15)

1. A control device for controlling a vehicle, comprising:
   * one or more processors (110) configured to:
     o receive, from a longitudinal camera (102, 104) of a vehicle (100), a first longitudinal image showing a surrounding in front of the vehicle (100) or behind the vehicle (100) at a first point in time;
     o receive, from a lateral camera (106, 108) of the vehicle (100), a first lateral image showing a surrounding in a lateral direction (16, 18) next to the vehicle (100) at the first point in time;
     o receive, from the lateral camera (106, 108), a second lateral image showing the surrounding in the lateral direction (16, 18) next to the vehicle (100) at a second point in time different from the first point in time, wherein a portion of the second lateral image shows a scene which is, at the first point in time, shown in the first longitudinal image and the first lateral image;
     o generate, using the first longitudinal image and the first lateral image, a first image showing a top view on the scene at the first point in time;
     o generate, using the second lateral image, a second image showing a top view on the scene at the second point in time; and
     o determine, whether the longitudinal camera (102, 104) and the lateral camera (106, 108) are calibrated correctly by comparing the first image with the second image.
2. The control device according to claim 1, wherein the one or more processors (110) are configured to:
   * determine, by comparing the first image with the second image, a comparison value representing a difference between the first image and the second image;
   * determine, whether the comparison value represents a difference between the first image and the second image equal to or less than a predefined difference threshold value; and
   * in the case that it is determined that the comparison value represents a difference between the first image and the second image equal to or less than the predefined difference threshold value, determine that the longitudinal camera (102, 104) and the lateral camera (106, 108) are calibrated correctly.
3. The control device according to claim 1 or 2, wherein the one or more processors (110) are configured to:
   * generate, using the first longitudinal image and the first lateral image, a first top-view image showing a top view of at least the surrounding in the lateral direction (16, 18) next to the vehicle (100) and the surrounding in front or behind the vehicle (100) at the first point in time, and generate the first image by extracting a portion of the first top-view image; and/or
   * generate, using the second lateral image, a second top-view image showing a top view of at least the surrounding in the lateral direction (16, 18) next to the vehicle (100) at the second point in time, and generate the second image by extracting a portion of the second top-view image.
4. The control device according to any one of claims 1 to 3, wherein the first image and the second image have a same image size.
5. The control device according to any one of claims 1 to 4,
   * wherein the longitudinal camera (102, 104) is configured to capture consecutive longitudinal images with a predefined frame rate;
   * wherein the first image and/or the second image have a longitudinal image size in a longitudinal direction perpendicular to the lateral direction (16, 18) substantially equal to a number of pixels moved in two consecutive longitudinal images.
6. The control device according to any one of claims 1 to 5,
   * wherein the longitudinal camera (102, 104) is a front camera configured to capture a front image as the longitudinal image showing the surrounding in front of the vehicle (100);
   * wherein the one or more processors (110) are configured to:
     o receive, from a rear camera (104) of the vehicle (100), a first rear image showing a surrounding behind the vehicle (100) at a third point in time, wherein the second point in time is temporally between the first point in time and the third point in time;
     o receive, from the lateral camera (106, 108), a third lateral image showing the surrounding in the lateral direction (16, 18) next to the vehicle (100) at the third point in time, wherein the scene is, at the third point in time, shown in the first rear image and the third lateral image;
     o generate, using the first rear image and the third lateral image, a third image showing a top view on the scene at the third point in time;
     o determine, whether the rear camera (104) and the lateral camera (106, 108) are calibrated correctly by comparing the third image with the second image.
7. The control device according to claim 6, wherein the one or more processors (110) are configured to:
   * determine, by comparing the third image with the second image, a further comparison value representing a difference between the third image and the second image; and
   * determine, whether the further comparison value represents a difference between the third image and the second image equal to or less than a further predefined difference threshold value; and
   * in the case that it is determined that the further comparison value represents a difference between the third image and the second image equal to or less than the further predefined difference threshold value, determine that the rear camera (104) and the lateral camera (106, 108) are calibrated correctly.
8. The control device according to claims 2 and 7, wherein the one or more processors (110) are configured to:
   * in the case that it is determined that the comparison value represents a difference between the first image and the second image greater than the predefined difference threshold value and that it is determined that the further comparison value represents a difference between the third image and the second image equal to or less than the further predefined difference threshold value, determine that the front camera is not calibrated correctly; and/or
   * in the case that it is determined that the comparison value represents a difference between the first image and the second image equal to or less than the predefined difference threshold value and that it is determined that the further comparison value represents a difference between the third image and the second image greater than the further predefined difference threshold value, determine that the rear camera (104) is not calibrated correctly.
9. The control device according to any one of claims 1 to 8, provided in combination with claim 6,
   * wherein the lateral camera (106, 108) is a left-lateral camera (106) configured to capture a left-lateral image as the lateral image showing the surrounding left to the vehicle (100);
   * wherein the one or more processors (110) are configured to:
     o receive, from a right-lateral camera (108) of the vehicle (100):
       * a first right-lateral image showing the surrounding right to the vehicle (100) at the first point in time,
       * a second right-lateral image showing the surrounding right to the vehicle (100) at the second point in time, and
       * a third right-lateral image showing the surrounding right to the vehicle (100) at the third point in time,
       * wherein a portion of the second right-lateral image shows a further scene which is, at the first point in time, shown in the first longitudinal image and the first right-lateral image and which is, at the third point in time, shown in the first rear image and the third right-lateral image; and
     o generate, using the first longitudinal image and the first right-lateral image, a fourth image showing a top view on the further scene at the first point in time;
     o generate, using the second right-lateral image, a fifth image showing a top view on the further scene at the second point in time;
     o generate, using the first rear image and the third right-lateral image, a sixth image showing a top view on the further scene at the third point in time; and
     o determine, whether the front camera, the rear camera (104), and/or the right-lateral camera (108) are calibrated correctly by comparing the fourth image with the fifth image and comparing the sixth image with the fifth image.
10. The control device according to claims 3 and 9, wherein the one or more processors (110) are configured to:
   * generate the first top-view image using the first longitudinal image, the first rear image, the first lateral image, and the first right-lateral image such that the first top-view image shows a top view of the surrounding completely around the vehicle (100).
11. The control device according to claim 9 or 10 in combination with claim 7 or 8, wherein the one or more processors (110) are configured to:
   * determine, by comparing the fourth image with the fifth image, a first comparison value representing a difference between the fourth image and the fifth image;
   * determine, whether the first comparison value represents a difference between the fourth image and the fifth image equal to or less than a first predefined difference threshold value;
   * determine, by comparing the sixth image with the fifth image, a second comparison value representing a difference between the sixth image and the fifth image;
   * determine, whether the second comparison value represents a difference between the sixth image and the fifth image equal to or less than a second predefined difference threshold value; and
   * in the case that it is determined that:
     o the comparison value represents a difference between the first image and the second image equal to or less than the predefined difference threshold value,
     o the further comparison value represents a difference between the third image and the second image equal to or less than the further predefined difference threshold value,
     o the first comparison value represents a difference between the fourth image and the fifth image greater than the first predefined difference threshold value, and
     o the second comparison value represents a difference between the sixth image and the fifth image greater than the second predefined difference threshold value,
     determine that the right-lateral camera (108) is not calibrated correctly.
12. The control device according to any one of claims 1 to 11, wherein the one or more processors (110) are configured to, when comparing two images with each other, generate a difference image representing a difference between the two images, and to provide control instructions to control a display device to display the difference image.
13. The control device according to any one of claims 1 to 12, wherein the one or more processors (110) are configured to, prior to comparing two images with each other, rotate and/or shift at least one of the two images.
14. A vehicle (100) comprising the control device according to any one of claims 1 to 13.
15. A method (300) for evaluating a calibration of one or more cameras mounted to a vehicle, the method comprising:
   * receiving, from a longitudinal camera, a first longitudinal image showing a surrounding in front of the vehicle or behind the vehicle at a first point in time (302);
   * receiving, from a lateral camera, a first lateral image showing a surrounding in a lateral direction next to the vehicle at the first point in time (304);
   * receiving, from the longitudinal camera, a second longitudinal image showing the surrounding in front of or behind the vehicle at a second point in time different from the first point in time (306);
   * receiving, from the lateral camera, a second lateral image showing the surrounding in the lateral direction next to the vehicle at the second point in time, wherein a portion of the second lateral image shows a scene which is not shown in the second longitudinal image and which is, at the first point in time, shown in the first longitudinal image and the first lateral image (308);
   * generating, using the first longitudinal image and the first lateral image, a first image showing a top view on the scene at the first point in time (310);
   * generating, using the second lateral image, a second image showing a top view on the scene at the second point in time (312); and
   * determining, whether the longitudinal camera and the lateral camera are calibrated correctly by comparing the first image with the second image (314).
GB2218436.0A 2022-12-08 2022-12-08 Vehicle, control device, and method for evaluating a calibration of one or more cameras mounted to a vehicle Pending GB2625262A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB2218436.0A GB2625262A (en) 2022-12-08 2022-12-08 Vehicle, control device, and method for evaluating a calibration of one or more cameras mounted to a vehicle
PCT/EP2023/082644 WO2024120823A1 (en) 2022-12-08 2023-11-22 Vehicle, control device, and method for evaluating a calibration of one or more cameras mounted to a vehicle

Publications (2)

Publication Number Publication Date
GB202218436D0 GB202218436D0 (en) 2023-01-25
GB2625262A true GB2625262A (en) 2024-06-19

Family

ID=84974708

Country Status (2)

Country Link
GB (1) GB2625262A (en)
WO (1) WO2024120823A1 (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160234436A1 (en) * 2015-02-06 2016-08-11 Delphi Technologies, Inc. Birds-Eye-View Monitoring System With Auto Alignment

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10179543B2 (en) * 2013-02-27 2019-01-15 Magna Electronics Inc. Multi-camera dynamic top view vision system
JP6458439B2 (en) * 2014-10-09 2019-01-30 株式会社デンソー On-vehicle camera calibration device, image generation device, on-vehicle camera calibration method, and image generation method
JP6536529B2 (en) * 2016-10-17 2019-07-03 株式会社デンソー Calibration apparatus for in-vehicle camera and calibration method for in-vehicle camera

Also Published As

Publication number Publication date
WO2024120823A1 (en) 2024-06-13
GB202218436D0 (en) 2023-01-25
