
WO2022127918A1 - Stereo calibration method, apparatus and system for binocular camera, and binocular camera (双目相机的立体标定方法、装置、系统及双目相机)

Info

Publication number: WO2022127918A1
Authority: WO (WIPO, PCT)
Prior art keywords: images, calibration, camera, image, thermal infrared
Application number: PCT/CN2021/139325
Other languages: English (en), French (fr)
Inventors: 郭晓阳, 杨平, 谢迪, 浦世亮
Original Assignee / Applicant: 杭州海康威视数字技术股份有限公司 (Hangzhou Hikvision Digital Technology Co., Ltd.)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85: Stereo camera calibration
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10048: Infrared image

Definitions

  • the embodiments of the present application relate to the technical field of computer vision, and in particular, to a stereo calibration method, device, and system for a binocular camera, and a binocular camera.
  • binocular cameras composed of visible light cameras and thermal infrared cameras are used in temperature measurement, ranging and other scenarios that require image depth calculation.
  • Such a binocular camera is a heterogeneous-source, heterogeneous-structure binocular camera.
  • Heterogeneous source means that the signal sources of the two cameras making up the binocular pair are different: the signal source of the visible light camera is visible light, while the signal source of the thermal infrared camera is heat. Heterogeneous structure means that physical characteristics such as resolution, focal length, pixel size, and field of view are not identical.
  • Because the binocular camera composed of a visible light camera and a thermal infrared camera is a heterogeneous-source, heterogeneous-structure binocular camera, the physical structures of the two cameras differ considerably, calibration is difficult, and the accuracy of the camera is also affected when it measures distance and temperature. Therefore, it is necessary to propose a stereo calibration method for visible light and thermal infrared binocular cameras.
  • Embodiments of the present application provide a stereo calibration method, device, and system for a binocular camera, as well as a binocular camera, which enable effective stereo calibration of visible light and thermal infrared cameras and ensure the accuracy of the stereo calibration.
  • the technical solution is as follows:
  • In one aspect, a stereo calibration method for a binocular camera is provided, where the binocular camera includes a visible light camera and a thermal infrared camera, and the method includes:
  • processing multiple pairs of initial images to obtain multiple pairs of first images with unified imaging specifications, where each pair of initial images in the multiple pairs of initial images includes a visible light image and a thermal infrared image of the same object;
  • performing calibration point extraction on the multiple pairs of first images, and calibrating the external parameters of the binocular camera according to the pixel coordinates of the extracted calibration points, where the external parameters include the translation matrix and the rotation matrix between the visible light camera and the thermal infrared camera;
  • reducing the translation component along the optical axis direction in the translation matrix included in the external parameters to obtain an adjusted translation matrix, and calibrating the respective rotation amounts of the visible light camera and the thermal infrared camera according to the rotation matrix and the adjusted translation matrix.
  • Optionally, performing calibration point extraction on the multiple pairs of first images to obtain the pixel coordinates of the calibration points extracted from the multiple pairs of first images includes:
  • extracting the contours in each first image of the multiple pairs of first images based on binarization processing, and, for any first image, determining the pixel coordinates of the calibration points extracted from that first image according to the pixel coordinates of the contours in that first image.
  • the calibration points to be extracted in each of the multiple pairs of first images are evenly distributed at equal intervals;
  • and determining the pixel coordinates of the calibration points extracted from any first image according to the pixel coordinates of the contours in that first image includes:
  • forming contour families from contours in that first image whose mutual distances are within a distance threshold and whose count is not less than a contour threshold, to obtain multiple contour families in that first image;
  • for any contour family in that first image, obtaining, from the multiple contour families in the image, the target contour family closest to it, and calculating the pixel abscissa difference and pixel ordinate difference between the center of that contour family and the center of the target contour family, to obtain the pixel abscissa difference and pixel ordinate difference corresponding to that contour family, where the pixel abscissa difference and the pixel ordinate difference are not less than zero;
  • the resolutions of the visible light image and the thermal infrared image included in each pair of first images in the plurality of pairs of first images are uniform;
  • and the multiple pairs of initial images are processed to obtain multiple pairs of first images with unified imaging specifications, including:
  • upsampling the thermal infrared images in the multiple pairs of initial images according to the upsampling multiple corresponding to the thermal infrared camera to obtain the thermal infrared images in the multiple pairs of first images, and cropping the visible light images in the multiple pairs of initial images according to the cropping region corresponding to the visible light camera to obtain the visible light images in the multiple pairs of first images.
  • the method further includes:
  • determining the upsampling multiple corresponding to the thermal infrared camera according to the pixel size relationship of the same heat source in one or more pairs of second images, where each pair of second images includes a visible light image and a thermal infrared image of the same heat source; and determining the cropping region corresponding to the visible light camera according to the original resolution of the thermal infrared camera, the upsampling multiple, and the difference relationship between the pixel coordinates corresponding to the same heat source in one or more pairs of third images, where each pair of third images in the one or more pairs of third images includes a visible light image and a thermal infrared image of the same heat source.
  • Optionally, determining the upsampling multiple corresponding to the thermal infrared camera according to the pixel size relationship of the same heat source in the one or more pairs of second images includes:
  • calculating the ratio between the pixel lengths of the same heat source in each pair of second images, and/or the ratio between the pixel areas of the same heat source in each pair of second images, to obtain at least one scaling ratio, and determining the upsampling multiple corresponding to the thermal infrared camera according to the at least one scaling ratio.
  • Optionally, determining the cropping region corresponding to the visible light camera according to the original resolution of the thermal infrared camera, the upsampling multiple, and the difference relationship between the pixel coordinates corresponding to the same heat source in the one or more pairs of third images includes:
  • for any pair of third images in the one or more pairs of third images, calculating the difference between the pixel abscissas and the difference between the pixel ordinates of the same heat source in that pair of third images, to obtain the abscissa difference and ordinate difference corresponding to that pair of third images;
  • obtaining the abscissa offset between the visible light camera and the thermal infrared camera according to the abscissa differences corresponding to the one or more pairs of third images, and obtaining the ordinate offset between the visible light camera and the thermal infrared camera according to the ordinate differences corresponding to the one or more pairs of third images;
  • determining the unified horizontal resolution according to the original horizontal resolution of the thermal infrared camera and the upsampling multiple, and determining the unified vertical resolution according to the original vertical resolution of the thermal infrared camera and the upsampling multiple;
  • determining the cropping region corresponding to the visible light camera according to the abscissa offset and ordinate offset between the visible light camera and the thermal infrared camera and the unified horizontal and vertical resolutions.
  • a stereo calibration device for a binocular camera comprising:
  • the specification unification module is used for processing multiple pairs of initial images to obtain multiple pairs of first images with unified imaging specifications, and each pair of initial images in the multiple pairs of initial images includes a visible light image and a thermal infrared image of the same object;
  • a calibration point extraction module configured to perform calibration point extraction on the multiple pairs of first images to obtain pixel coordinates of the calibration points extracted from the multiple pairs of first images
  • an external parameter calibration module configured to calibrate the external parameters of the binocular camera according to the pixel coordinates of the calibration points extracted from the multiple pairs of first images, where the external parameters include the translation matrix and rotation matrix between the visible light camera and the thermal infrared camera;
  • a stereo correction module used for reducing the translation component along the optical axis direction in the translation matrix included in the external parameters to obtain an adjusted translation matrix, and for calibrating the respective rotation amounts of the visible light camera and the thermal infrared camera according to the rotation matrix and the adjusted translation matrix.
  • the calibration point extraction module includes:
  • a contour extraction unit configured to extract the contours in each first image of the multiple pairs of first images based on binarization processing;
  • a calibration point determination unit configured to, for any first image in the multiple pairs of first images, determine the pixel coordinates of the calibration points extracted from that first image according to the pixel coordinates of the contours in that first image.
  • the calibration points to be extracted in each of the multiple pairs of first images are evenly distributed at equal intervals;
  • the calibration point determination unit includes:
  • a first processing subunit used to form contour families from contours in the first image whose mutual distances are within the distance threshold and whose count is not less than the contour threshold, to obtain the multiple contour families in that first image;
  • a second processing subunit configured to, for any contour family in that first image, obtain from the multiple contour families in the image the target contour family closest to it, calculate the pixel abscissa difference and pixel ordinate difference between the center of that contour family and the center of the target contour family, and obtain the pixel abscissa difference and pixel ordinate difference corresponding to that contour family, the pixel abscissa difference and the pixel ordinate difference being not less than zero;
  • a third processing subunit configured to calculate the sum of the pixel abscissa difference and the pixel ordinate difference corresponding to each contour family in that first image, to obtain the calibration spacing corresponding to each contour family;
  • a fourth processing subunit used to remove the contour families in that first image whose calibration spacing deviates from the reference spacing by more than the spacing threshold, determine the center of each remaining contour family as an extracted calibration point, and determine the center pixel coordinates of each remaining contour family as the pixel coordinates of an extracted calibration point.
  • the resolutions of the visible light image and the thermal infrared image included in each pair of first images in the plurality of pairs of first images are uniform;
  • the specification unification module includes:
  • a scaling unit configured to upsample the thermal infrared images in the multiple pairs of initial images according to the upsampling multiples corresponding to the thermal infrared cameras, to obtain thermal infrared images in the multiple pairs of first images
  • a cropping unit configured to crop the visible light images in the plurality of pairs of initial images according to cropping regions corresponding to the visible light cameras, to obtain the visible light images in the plurality of pairs of first images.
  • the device further includes:
  • a scaling parameter determination module configured to determine the upsampling multiple corresponding to the thermal infrared camera according to the pixel size relationship of the same heat source in one or more pairs of second images, where each pair of second images in the one or more pairs of second images includes a visible light image and a thermal infrared image of the same heat source;
  • a cropping region determination module configured to determine the cropping region corresponding to the visible light camera according to the original resolution of the thermal infrared camera, the upsampling multiple, and the difference relationship between the pixel coordinates corresponding to the same heat source in one or more pairs of third images, where each pair of third images in the one or more pairs of third images includes a visible light image and a thermal infrared image of the same heat source.
  • the scaling parameter determination module includes:
  • a ratio calculation unit configured to calculate the ratio between the pixel lengths of the same heat source in each pair of second images in the one or more pairs of second images, and/or the ratio between the pixel areas of the same heat source in each pair of second images, to obtain at least one scaling ratio;
  • a parameter determination unit configured to determine an upsampling multiple corresponding to the thermal infrared camera according to the at least one scaling ratio value.
  • the crop region determination module includes:
  • a coordinate difference calculation unit configured to calculate, for any pair of third images in the one or more pairs of third images, the difference between the pixel abscissas and the difference between the pixel ordinates of the same heat source in that pair of third images, to obtain the abscissa difference and ordinate difference corresponding to that pair of third images;
  • an offset determination unit configured to obtain the abscissa offset between the visible light camera and the thermal infrared camera according to the abscissa differences corresponding to the one or more pairs of third images, and to obtain the ordinate offset between the visible light camera and the thermal infrared camera according to the ordinate differences corresponding to the one or more pairs of third images;
  • a resolution determination unit configured to determine the unified horizontal resolution according to the original horizontal resolution of the thermal infrared camera and the upsampling multiple, and to determine the unified vertical resolution according to the original vertical resolution of the thermal infrared camera and the upsampling multiple;
  • a cropping region determination unit configured to determine the cropping region corresponding to the visible light camera according to the abscissa offset and ordinate offset between the visible light camera and the thermal infrared camera, the unified horizontal resolution, and the unified vertical resolution.
  • a calibration device is provided, and the calibration device is used to realize the image acquisition in the stereo calibration method of the binocular camera described above;
  • the calibration device includes a calibration plate and a light and heat supplementation device;
  • the calibration plate is a metal plate, holes are evenly distributed on the calibration plate at equal intervals, and the hole walls of the holes have an inclination angle;
  • the light and heat supplementation device includes a light supplementation device, a reflective plate, and a heat supplementation device; the light supplementation device is fixed on the back of the calibration plate, the reflective plate is fixed at a heat dissipation distance from the back of the calibration plate, and the heat supplementation device is fixed on the back of the reflective plate;
  • the light supplement device is used to emit light to the reflector
  • the heat supplement device is used to emit heat
  • the reflective surface of the reflective plate is a diffuse reflection surface, and the reflective plate is used to reflect light toward the calibration plate through the diffuse reflection surface and to transfer heat to the calibration plate.
  • the calibration device is a single-board calibration device, and the single-board calibration device includes a calibration plate and a set of light and heat supplementation devices; or,
  • the calibration device is a first combination board calibration device, where the first combination board calibration device includes multiple calibration plates and multiple sets of light and heat supplementation devices, the multiple calibration plates correspond one-to-one with the multiple sets of light and heat supplementation devices, and the poses of the multiple calibration plates are different; or,
  • the calibration device is a second composite board calibration device, and the second composite board calibration device includes the plurality of calibration boards and a set of light and heat supplementation devices, and the positions of the plurality of calibration boards are different.
  • a stereo calibration system for a binocular camera includes the binocular camera and a processor;
  • the binocular camera includes a visible light camera and a thermal infrared camera, and the visible light camera and the thermal infrared camera capture a visible light image and a thermal infrared image of the same object as a pair of initial images;
  • the processor is configured to process multiple pairs of initial images to obtain multiple pairs of first images with uniform imaging specifications
  • the processor is further configured to perform calibration point extraction on the multiple pairs of first images to obtain pixel coordinates of the calibration points extracted from the multiple pairs of first images;
  • the processor is further configured to calibrate the external parameters of the binocular camera according to the pixel coordinates of the calibration points extracted from the plurality of pairs of first images, the external parameters include the visible light camera and the thermal infrared Translation matrix and rotation matrix between cameras;
  • the processor is further configured to reduce the translation component along the optical axis direction in the translation matrix included in the external parameters to obtain an adjusted translation matrix, and to calibrate the respective rotation amounts of the visible light camera and the thermal infrared camera according to the rotation matrix and the adjusted translation matrix.
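As an illustration of this processing pipeline, the sketch below shows how the adjusted-translation rectification step could be realized with OpenCV. It is a minimal sketch under stated assumptions, not the patent's reference implementation: the function name, the `z_shrink` factor, and the use of `cv2.stereoRectify` to obtain the per-camera rotation amounts are illustrative choices.

```python
import cv2
import numpy as np

def rectify_with_adjusted_translation(K_vis, D_vis, K_ir, D_ir, R, T,
                                       image_size, z_shrink=0.0):
    """Sketch: shrink the translation component along the optical axis (z)
    before computing the per-camera rectification rotations.
    K_*: 3x3 intrinsics, D_*: distortion vectors, R: 3x3 rotation,
    T: 3x1 translation between the two cameras, image_size: (width, height)."""
    T_adj = T.astype(np.float64).copy()
    T_adj[2] *= z_shrink  # reduce the translation component along the optical axis
    # stereoRectify returns the rectification rotations R1, R2 for each camera,
    # i.e. the respective rotation amounts applied to the two image planes.
    R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(
        K_vis, D_vis, K_ir, D_ir, image_size, R, T_adj)
    return R1, R2, P1, P2, Q
```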
  • a binocular camera is provided, and the binocular camera is calibrated according to the above-mentioned stereo calibration method.
  • the binocular camera includes a visible light camera and a thermal infrared camera.
  • In another aspect, a computer device is provided, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory communicate with each other through the communication bus; the memory is used to store a computer program, and the processor is used to execute the program stored in the memory, so as to implement the steps of the above stereo calibration method for a binocular camera.
  • a computer-readable storage medium where a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the stereo calibration method for the binocular camera are implemented.
  • a computer program product containing instructions, which, when run on a computer, cause the computer to execute the steps of the above-described stereo calibration method for a binocular camera.
  • The embodiment of the present application provides a calibration device that reflects light and heat evenly through rear-facing fill light and a diffuse reflection surface while ensuring heat dissipation, so that the light and heat on the calibration plate are uniform and stable, and the visible light images and thermal infrared images captured by the binocular camera are clear and of high image quality.
  • The embodiment of the present application also provides a stereo calibration method for a binocular camera. First, the imaging specifications of the visible light camera and the thermal infrared camera are unified, so that the subsequent stereo calibration is performed on the premise that the imaging specifications of the two cameras are unified, which makes the calibration accurate and effective.
  • FIG. 1 is a schematic view of the board surface of a calibration plate provided by an embodiment of the present application
  • FIG. 2 is a schematic view of a hole cross-section of a calibration plate provided in an embodiment of the present application
  • FIG. 3 is a schematic diagram of a calibration device provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a board surface of a composite board provided by an embodiment of the present application.
  • FIG. 5 is a flowchart of a stereo calibration method for a binocular camera provided by an embodiment of the present application
  • FIG. 6 is a flowchart of another stereo calibration method for a binocular camera provided by an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of a stereo calibration device for a binocular camera provided by an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of a computer device provided by an embodiment of the present application.
  • Heterogeneous-source binoculars are binoculars whose two constituent cameras have different signal sources, and heterogeneous-structure binoculars are binoculars whose two cameras differ in physical characteristics such as resolution, focal length, pixel size, and field of view.
  • The binocular camera consisting of a visible light camera and a thermal infrared camera introduced in the embodiments of the present application is a heterogeneous-source, heterogeneous-structure binocular camera.
  • the signal source of the visible light camera is visible light
  • the signal source of the thermal infrared camera is heat
  • the physical structure of the visible light camera and the thermal infrared camera are quite different.
  • Stereo calibration includes, for example, calibrating the internal and external parameters of the two cameras and calibrating the respective rotation amounts of the two cameras.
  • the internal parameters include optical center, focal length, distortion matrix and other parameters
  • the external parameters include the translation matrix and rotation matrix between the two cameras
  • the external parameters are used to represent the translation and rotation relationship between the two cameras.
  • the result of the stereo calibration can correct the images of the two cameras to a state of epipolar alignment, that is, to correct the image planes of the two cameras to a coplanar alignment, which is convenient for subsequent stereo matching.
  • binocular cameras are widely used in temperature measurement and distance measurement and other scenarios. Whether it is used for temperature measurement or distance measurement, it is necessary to calculate the distance between the object being photographed and the binocular camera based on the parallax distance formula.
  • the distance formula is derived under the ideal situation of the binocular camera, that is, the two image planes of the two cameras are aligned coplanarly.
  • Coplanar row alignment means that the image planes of the two cameras lie in the same plane, and that when the same physical point is projected onto the two image planes, the projections fall on the same row of the two pixel coordinate systems.
  • In an actual binocular camera, no two image planes are perfectly coplanar and row-aligned, so stereo calibration needs to be performed on the binocular camera.
  • The goal of stereo calibration is to correct the two actual, non-aligned image planes into coplanar row alignment, that is, to correct the actual binocular camera into an ideal binocular camera.
  • the stereo calibration of the visible light and thermal infrared binocular cameras is performed by taking an image of the calibration device.
  • the embodiment of the present application first designs a calibration device, and then introduces the calibration device.
  • the calibration device includes a calibration plate and a device for supplementing light and heat.
  • the calibration plate is introduced.
  • the calibration plate is a metal plate, and holes are evenly distributed on the calibration plate at equal intervals, and the hole walls of the holes have an inclined angle.
  • the holes on the calibration plate are round holes.
  • the inclination angle of the hole wall is any angle within 0° to 90°, such as 15°, 30°, 45°, 50° and the like. It should be noted that the embodiments of the present application do not limit the thickness of the calibration plate, the number of holes on the calibration plate, the shape of the holes, and the inclination angle of the holes.
  • FIG. 1 is a schematic view of a surface of a calibration plate provided by an embodiment of the present application
  • FIG. 2 is a schematic cross-sectional view of a hole of a calibration plate provided by an embodiment of the present application.
  • As shown in FIG. 1, the thickness of the calibration plate is 5 mm, and 9*9 round holes are evenly distributed on the calibration plate; as shown in FIG. 2, the round holes on the calibration plate have an inclination angle of 15°.
  • FIG. 1 is a front view of the calibration plate, representing the front of the calibration plate, that is, the side the binocular camera needs to shoot; FIG. 2 is a cross-sectional view of the calibration plate seen from the right side, and because the hole walls are inclined, the aperture on the front of the calibration plate is smaller than the aperture on the back.
  • the light supplement heat device includes a light supplement device, a reflective plate and a heat supplement device.
  • the light supplementation device is fixed on the back of the calibration plate, the reflective plate is fixed at a heat dissipation distance from the back of the calibration plate, and the heat supplementation device is fixed on the back of the reflective plate.
  • the supplementary light device is used to emit light to the reflective plate
  • the heat supplementary device is used to emit heat
  • the reflective surface of the reflective plate is a diffuse reflection surface
  • the reflective plate is used to reflect light toward the calibration plate through diffuse reflection, and to transfer heat to the calibration plate .
  • the fill light device is fixed on the back of the calibration plate near the edge, that is, between the outermost row of holes and the edge, to avoid the effect of the fill light device on light and heat, and reduce shadows in the image.
  • The heat dissipation distance can be set according to the heating power of the heat supplementation device: if the heating power is larger, the heat dissipation distance can be set larger, and if the heating power is smaller, the heat dissipation distance can be set smaller, so that the heat transferred to the calibration plate and the heat dissipated by the calibration plate remain relatively balanced.
  • the supplementary light device is a supplementary light lamp, and the heating device is a heating patch.
  • FIG. 3 is a schematic diagram of a calibration device provided by an embodiment of the present application.
  • In FIG. 3, the plate on the left is the calibration plate, with a fill light fixed on its back, and the plate on the right is the reflective plate.
  • The reflective surface of the reflective plate is processed into a diffuse reflection surface, that is, the left surface of the reflective plate is a diffuse reflection surface, which can reflect light evenly.
  • A heating patch is fixed on the back of the reflective plate; the heating patch emits heat, which is transferred to the calibration plate evenly and with high energy through the reflective plate, so that the light and heat on the calibration plate are uniform and stable.
  • In this way, the reflective plate reflects light and heat to the calibration plate in a uniform and high-energy manner, which ensures clear imaging of the visible light camera and the thermal infrared camera, and avoids the light and heat supplementation devices blocking each other when transmitting light and heat directly to the calibration plate, which would result in uneven light and heat; uniform imaging is thus ensured.
  • the calibration device is a single-board calibration device
  • the single-board calibration device includes a calibration plate and a set of light and heat supplementation devices, such as the calibration device shown in FIG. 3 .
  • The embodiment of the present application also designs a composite board calibration device, which arranges multiple calibration plates with different poses together to obtain a composite board and designs the calibration device based on that composite board. In this way, images of calibration plates in multiple different poses can be obtained in one shot, and there is no need to adjust the pose of the calibration device during shooting, which improves efficiency.
  • the two combination board calibration devices provided in the embodiments of the present application will be introduced.
  • the calibration device is a first combination board calibration device
  • the first combination board calibration device includes a plurality of calibration plates and multiple sets of light and heat supplement devices
  • the multiple calibration boards and the multiple sets of light and heat supplement devices correspond one-to-one.
  • the poses of the multiple calibration boards are different.
  • That is, multiple single-board calibration devices are arranged together to form a combination board calibration device, and the poses of the multiple single-board calibration devices are different; in other words, the angles of the multiple single-board calibration devices are adjusted so that they form certain angles with each other.
  • the calibration device is a second combination plate calibration device
  • the second combination plate calibration device includes a plurality of calibration plates and one set of light and heat supplementation devices, and the poses of the plurality of calibration plates are different. That is, calibration plates with different poses are arranged together to obtain a composite board, a set of light supplementation devices is fixed on the back of the composite board, a reflective plate whose reflective surface is a diffuse reflection surface is fixed at a heat dissipation distance from the composite board, and a heat supplementation device is fixed on the back of the reflective plate.
  • the supplementary light device is fixed at a position close to the edge of the composite board to avoid the influence of the supplementary light device on light and heat.
  • the size of the reflector in the combination board calibration device matches the size of the combination board, and the heating device should also match the size of the combination calibration board. For example, the size of the heating patch matches the size of the combination board.
  • FIG. 4 is a schematic diagram of a board surface of a composite board provided by an embodiment of the present application.
  • the combination plate includes four calibration plates as shown in FIG. 1 , and a certain angle is formed between the calibration plates, that is, the postures of the calibration plates are different.
  • the embodiment of the present application provides a calibration device, and the calibration device is used to realize the acquisition of images in the stereo calibration method of the binocular camera.
  • The designed calibration device makes the light and heat on the calibration plate uniform and stable through rear-facing fill light, uniform and high-energy reflection of light and heat by the diffuse reflection surface, and guaranteed heat dissipation, so that the captured visible light and thermal infrared images are clear, that is, of high image quality, which helps to improve the calibration accuracy of the binocular camera.
  • FIG. 5 is a flowchart of a stereo calibration method for a binocular camera provided by an embodiment of the present application.
  • The method is applied to a calibration device, which may be, for example, the binocular camera itself or a computer device. Referring to FIG. 5, the method includes the following steps.
  • Step 501 Process multiple pairs of initial images to obtain multiple pairs of first images with uniform imaging specifications, and each pair of initial images in the multiple pairs of initial images includes a visible light image and a thermal infrared image of the same object.
  • the binocular camera in the embodiment of the present application includes a visible light camera and a thermal infrared camera.
  • The binocular camera is a heterogeneous-source, heterogeneous-structure binocular camera.
  • The imaging specifications of the visible light camera and the thermal infrared camera are different; for example, the two cameras differ in imaging resolution, focal length, and so on. In this case, stereo calibration cannot be performed directly, and the imaging specifications of the two cameras need to be unified first.
  • the uniform imaging specification includes uniform resolution, that is, the resolution of the visible light image and the thermal infrared image included in each pair of first images is uniform.
  • Here, unifying the resolution also means unifying the focal lengths of the two cameras; that is, in this solution, while the resolutions of the two cameras are unified, their focal lengths are unified as well.
  • the imaging specifications of the visible light camera and the thermal infrared camera are converted into the same imaging specifications through cropping and scaling of the image, that is, the resolution and focal length are unified, and subsequent stereo calibration is performed based on the cropped and scaled image.
  • each pair of initial images in the multiple pairs of initial images includes a visible light image and a thermal infrared image of the same object
  • each pair of initial images is an image of the same object captured by the visible light camera and the thermal infrared camera at the same time.
  • the shooting object of the binocular camera may be the calibration device described above, that is, the initial image is the image of the calibration device. That is, when the light and heat supplement included in the calibration device is enabled, the binocular camera obtains an initial image by photographing the calibration plate included in the calibration device.
  • The original resolution of the thermal infrared camera is usually smaller than that of the visible light camera, and the field of view of the visible light camera is larger than that of the thermal infrared camera. Therefore, the thermal infrared image is upsampled to enlarge it, and the visible light image is cropped to reduce it, so that the resolutions and focal lengths of the two cameras are unified.
  • the visible light image may also be reduced by cropping and down-sampling, so that the resolutions and focal lengths of the two cameras are unified.
  • If the field of view of the thermal infrared camera is greater than that of the visible light camera, the thermal infrared image may be cropped and downsampled, or the visible light image may be upsampled and the thermal infrared image cropped, so that the resolutions and focal lengths of the two cameras are the same.
  • The following description takes enlarging the thermal infrared image and cropping the visible light image to unify the resolutions and focal lengths of the two cameras as an example.
  • An implementation method of performing cropping and scaling processing on multiple pairs of initial images to obtain multiple pairs of first images with uniform imaging specifications is: according to the corresponding upsampling multiple of the thermal infrared camera, up-sampling the thermal infrared images in the multiple pairs of initial images. Sampling to obtain thermal infrared images in the pairs of first images; according to cropping regions corresponding to the visible light cameras, crop the visible light images in the pairs of initial images to obtain the visible light images in the pairs of first images.
  • An implementation of upsampling the thermal infrared images in the multiple pairs of initial images is: the pixel abscissas and pixel ordinates of the thermal infrared images in the multiple pairs of initial images are multiplied by the upsampling multiple to enlarge the thermal infrared images, so that the resolution of the resulting thermal infrared images in the multiple pairs of first images is the unified resolution of the two cameras.
  • For example, if the upsampling multiple is scale, and W_ir and H_ir respectively denote the original horizontal resolution and original vertical resolution of the thermal infrared camera, then the unified horizontal resolution is W = W_ir × scale and the unified vertical resolution is H = H_ir × scale.
  • The cropping region is represented by a cropping rectangle range parameter, such as ((Δw, Δw+W), (Δh, Δh+H)), where (Δw, Δw+W) and (Δh, Δh+H) determine the rectangular area to be preserved when cropping; that is, the pixels whose abscissas are between Δw and Δw+W and whose ordinates are between Δh and Δh+H are retained.
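A minimal sketch of this cropping-and-scaling step, assuming OpenCV-style images (NumPy arrays indexed as [row, column]) and that the upsampling multiple `scale` and the offsets `dw`, `dh` have already been determined as described later in this section:

```python
import cv2

def unify_imaging_specs(vis_img, ir_img, scale, dw, dh):
    """Sketch: enlarge the thermal infrared image by `scale` and crop the
    visible light image to the rectangle ((dw, dw+W), (dh, dh+H)), so that
    both images end up with the unified resolution W x H."""
    h_ir, w_ir = ir_img.shape[:2]
    W, H = int(round(w_ir * scale)), int(round(h_ir * scale))  # unified resolution
    ir_up = cv2.resize(ir_img, (W, H), interpolation=cv2.INTER_LINEAR)
    # keep pixels whose abscissas lie in [dw, dw+W) and ordinates in [dh, dh+H)
    vis_crop = vis_img[dh:dh + H, dw:dw + W]
    return vis_crop, ir_up
```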
  • cropping and scaling parameters need to be determined.
  • the cropping and scaling parameters include the upsampling multiple corresponding to the thermal infrared camera and the cropping area corresponding to the visible light camera.
  • An implementation of determining the cropping and scaling parameters is: determine the upsampling multiple corresponding to the thermal infrared camera according to the pixel size relationship of the same heat source in one or more pairs of second images; and determine the cropping region corresponding to the visible light camera according to the original resolution and upsampling multiple of the thermal infrared camera and the difference relationship between the pixel coordinates corresponding to the same heat source in one or more pairs of third images.
  • Each pair of second images in the one or more pairs of second images includes a visible light image and a thermal infrared image of the same heat source, and each pair of third images in the one or more pairs of third images likewise includes a visible light image and a thermal infrared image of the same heat source.
  • The heat sources in the second images and the third images may be different. Optionally, the heat source in the second images and the third images is the same; that is, the second images and the third images are obtained by photographing the same heat source, and the heat source may be the calibration device described above.
  • a visible light camera and a thermal infrared camera are used to capture the same heat source placed far away at the same time to obtain one or more pairs of third images.
  • the distance between the heat source and the binocular camera should be as far as possible.
  • there is no need to set a parallax threshold and the heat source may be placed farther away, for example, 20 meters, 25 meters, or 30 meters away from the binocular camera.
  • For the second images, a visible light camera and a thermal infrared camera are used to capture, at the same time, a heat source with a fixed length or a fixed area, ensuring that the pixel length or pixel area of the heat source can be calculated from the captured images.
  • the heat source is placed farther away.
  • The reason for placing the heat source at a distance when capturing the third images is as follows: the distance formula (which may be called the ranging formula) computes depth from parallax as shown in formula (1); with the baseline distance of the two cameras and the focal length f fixed, if the heat source is placed at infinity, that is, when the distance depth is infinite, the disparity disp between the two cameras approaches zero. However, since the pixel sizes of the two cameras are different, the difference between the abscissas of the pixels corresponding to the same heat source in each pair of third images is non-zero, so the parallax obtained is non-zero.
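Formula (1) itself is not reproduced in the text above; the standard parallax ranging relation it refers to, written out here for reference (a reconstruction, not a quotation from the patent), is:

```latex
% depth: distance from the photographed object to the binocular camera
% f: focal length, baseline: distance between the two optical centers, disp: disparity
\[ \text{depth} = \frac{f \cdot \text{baseline}}{\text{disp}} \tag{1} \]
% As depth tends to infinity, disp tends to zero, which is why a distant heat
% source should ideally yield near-zero disparity.
```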
  • the difference relationship represents the offset between the pixel coordinates of the two cameras.
  • The visible light image is processed according to the offset, so that the difference between the pixel coordinates of the processed paired visible light image and thermal infrared image is zero, thereby eliminating the parallax between the two cameras; similar processing is performed for the ordinate, so that the horizontal and vertical resolutions of the visible light image and the thermal infrared image are consistent.
  • Next, an implementation of determining the upsampling multiple corresponding to the thermal infrared camera is introduced. Based on the foregoing, in this embodiment of the present application, the ratio between the pixel lengths of the same heat source in each pair of second images, and/or the ratio between the pixel areas of the same heat source in each pair of second images, is calculated, resulting in at least one scaling ratio. Then, the upsampling multiple corresponding to the thermal infrared camera is determined according to the at least one scaling ratio.
  • the embodiment of the present application determines the corresponding upsampling multiple of the thermal infrared image based on the ratio of the pixel length and/or the pixel area of the same object in the visible light image and the thermal infrared image.
  • the ratio between the pixel lengths and/or the ratio between the pixel areas can be used to represent the pixel size relationship between the visible light image and the thermal infrared image pair.
  • In the first case, the heat source needs to have a certain length, and for any pair of second images, the ratio of the pixel length of the heat source in the visible light image to the pixel length of the heat source in the thermal infrared image is calculated to obtain a scaling ratio corresponding to that pair of second images, so that for one or more pairs of second images, one or more scaling ratios can be obtained correspondingly.
  • the heat source needs to have a certain length and width.
  • For any pair of second images, the ratio of the pixel area of the heat source in the visible light image to the pixel area of the heat source in the thermal infrared image is calculated to obtain a scaling ratio corresponding to that pair of second images, so that for one or more pairs of second images, one or more scaling ratios can also be obtained correspondingly.
  • In the third case, the heat source also needs to have a certain length and width, and for any pair of second images, both the ratio between the pixel lengths and the ratio between the pixel areas of the heat source in the visible light image and the thermal infrared image are calculated.
  • If only one scaling ratio is obtained, that scaling ratio is determined as the upsampling multiple; if multiple scaling ratios are obtained for all the second images, the average of the multiple scaling ratios may be determined as the upsampling multiple.
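A sketch of this averaging step, using only pixel-length ratios (the first case above); the measured lengths and the helper name are illustrative assumptions:

```python
def upsampling_multiple(vis_lengths, ir_lengths):
    """Sketch: each pair of second images yields one scaling ratio, i.e. the
    pixel length of the heat source in the visible light image divided by its
    pixel length in the thermal infrared image; the upsampling multiple is the
    single ratio, or the average when several pairs are available."""
    ratios = [lv / li for lv, li in zip(vis_lengths, ir_lengths)]
    return sum(ratios) / len(ratios)

# e.g. a heat source measured as ~240 px long in the visible images and
# ~60 px long in the thermal infrared images gives an upsampling multiple of ~4
scale = upsampling_multiple([240.0, 238.0], [60.0, 59.5])
```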
  • An implementation of determining the cropping region corresponding to the visible light camera according to the resolution and upsampling multiple of the thermal infrared camera and the difference relationship between the pixel coordinates corresponding to the same heat source in one or more pairs of third images is as follows.
  • For any pair of third images in the one or more pairs of third images, the difference between the pixel abscissas and the difference between the pixel ordinates of the same heat source in that pair of third images are calculated respectively, to obtain the abscissa difference and ordinate difference corresponding to that pair of third images.
  • According to the abscissa differences corresponding to the one or more pairs of third images, the abscissa offset between the visible light camera and the thermal infrared camera is obtained, and according to the ordinate differences corresponding to the one or more pairs of third images, the ordinate offset between the visible light camera and the thermal infrared camera is obtained.
  • The unified horizontal resolution is determined according to the original horizontal resolution and upsampling multiple of the thermal infrared camera, and the unified vertical resolution is determined according to the original vertical resolution and upsampling multiple of the thermal infrared camera.
  • The cropping region corresponding to the visible light camera is then determined according to the abscissa offset, the ordinate offset, the unified horizontal resolution, and the unified vertical resolution.
  • the difference relationship is represented by the difference of the abscissa and the difference of the ordinate.
  • the offset of the two cameras in the direction of the abscissa of the pixel and the offset of the direction of the ordinate of the pixel are determined. That is, the abscissa offset and the ordinate offset are determined.
  • a cropping area is determined, so that the resolution of the visible light image retained after cropping according to the cropping area is the unified resolution.
  • ensuring the same resolution also means the same focal length, so for a distant heat source, at this focal length, the parallax calculated by the two cameras is close to zero.
  • For example, for a pair of third images, the difference between the pixel abscissas of the same heat source in the pair of third images is calculated to obtain an abscissa difference Δw, and Δw is determined as the abscissa offset between the two cameras; the difference between the pixel ordinates of the heat source in the pair of third images is calculated to obtain an ordinate difference Δh, and Δh is determined as the ordinate offset between the two cameras.
  • Assuming the unified horizontal resolution and vertical resolution are W and H, respectively, ((Δw, Δw+W), (Δh, Δh+H)) is determined as the cropping region corresponding to the visible light camera.
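A sketch of the offset and resolution computations above, assuming the matched pixel coordinates of the distant heat source are measured in the visible light image and in the already-upsampled thermal infrared image of each third-image pair, and that the per-pair differences are combined by averaging (the combination rule is not spelled out in the text):

```python
def crop_region(vis_pts, ir_pts, w_ir, h_ir, scale):
    """Sketch: vis_pts / ir_pts are lists of (u, v) pixel coordinates of the
    same heat source in the visible and upsampled thermal infrared images of
    each third-image pair. Returns the cropping rectangle ((dw, dw+W), (dh, dh+H))."""
    n = len(vis_pts)
    dw = round(sum(v[0] - i[0] for v, i in zip(vis_pts, ir_pts)) / n)  # abscissa offset
    dh = round(sum(v[1] - i[1] for v, i in zip(vis_pts, ir_pts)) / n)  # ordinate offset
    W, H = round(w_ir * scale), round(h_ir * scale)  # unified resolution
    return (dw, dw + W), (dh, dh + H)

# e.g. one pair of third images, with an illustrative 160x120 thermal camera,
# scale = 4, and the heat source at (700, 500) in the visible image and at
# (340, 260) in the upsampled thermal infrared image
region = crop_region([(700, 500)], [(340, 260)], w_ir=160, h_ir=120, scale=4.0)
```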
  • Since the heat source is far away from the binocular camera when the third images are captured, the heat source can be regarded as a point in the third image, and the pixel coordinates of that point can be used as the pixel coordinates of the heat source in the image.
  • the above describes an implementation manner of performing cropping and scaling processing on multiple pairs of initial images to obtain multiple pairs of first images with uniform imaging specifications.
  • the initial image may be the image of the calibration device. From the above description of the calibration device, it can be known that the calibration device is a single-board calibration device or a combined-board calibration device. If the calibration device is a single-board calibration device, the initial image includes a calibration plate. The first image is also an image including a calibration plate, in this case, step 502 can be continued.
  • In one case, an initial image is an image including multiple calibration plates, that is, the initial image is an image of the composite board directly captured by the camera, and the first image obtained after processing the initial image is also an image including multiple calibration plates.
  • In this case, the visible light image and thermal infrared image included in each pair of first images need to be cropped separately to obtain visible light and thermal infrared image pairs of the calibration plate in multiple poses, where each pair of cropped images corresponds to one calibration plate among the multiple calibration plates.
  • The multiple pairs of images obtained after cropping are used as the multiple pairs of first images (each first image is now a single-board image), and step 502 is continued based on these cropped pairs of first images.
  • For the combination board shown in FIG. 4, by cropping a pair of first images, 4 pairs of visible light and thermal infrared images can be obtained; these 4 pairs of images correspond to the 4 calibration plates included in the combination board, and step 502 is continued based on the 4 pairs of first images.
  • In another case where the calibration device is a composite board calibration device, the images directly obtained by the two cameras are taken as multiple pairs of original images, that is, the original images are composite board images.
  • The multiple pairs of original images are cropped to obtain multiple pairs of initial images; that is, each initial image is a single-board image cropped from the composite board image and includes a single calibration plate.
  • The first image obtained after processing such an initial image is also an image including a single calibration plate, that is, the first image is already a single-board image, and in this case step 502 is continued directly.
  • For example, 4 pairs of visible light and thermal infrared images can be obtained by cropping a pair of original images; these 4 pairs of images correspond to the 4 calibration plates included in the composite board and are used as 4 pairs of initial images.
  • The 4 pairs of initial images are then processed to obtain 4 pairs of first images, and step 502 is continued based on the 4 pairs of first images.
  • In other words, the composite board image can be cropped first to obtain single-board images, which are then processed to obtain the first images; alternatively, the composite board image can be processed first to obtain a first image including multiple calibration plates, and the obtained first image is then cropped to obtain first images each containing a single calibration plate.
  • In either case, the step of cropping the composite board image to obtain single-board images may be completed before step 502 is performed.
  • Optionally, auxiliary lines are marked on the composite board included in the composite board calibration device, and the board is divided into different regions based on the auxiliary lines, one region per calibration plate; for example, the composite board shown in FIG. 4 is divided into 4 regions.
  • an implementation manner of cropping the composite board image to obtain the veneer image is: extracting auxiliary lines in the composite board image, and cropping the composite board image according to the extracted auxiliary lines to obtain the veneer image. It should be noted that the embodiments of the present application take this implementation manner as an example to describe the cropping of the combined board image, but this implementation manner does not limit the embodiments of the present application.
  • the process of calibrating and unifying the imaging specifications of the visible light camera and the thermal infrared camera is understood as the process of unifying the camera models.
  • the camera model can be understood as the camera's resolution, focal length and other structures.
  • Step 502 Extract calibration points from the multiple pairs of first images to obtain pixel coordinates of the calibration points extracted from the multiple pairs of first images.
  • calibration points are extracted for the multiple pairs of first images to obtain pixel coordinates of the calibration points extracted from the multiple pairs of first images.
  • It should be noted that the calibration device provided in the embodiment of the present application does not constitute a limitation on the stereo calibration method for the binocular camera provided by the embodiment of the present application; other calibration devices may also be used as the shooting object of the binocular camera.
  • Generally, there are relatively obvious calibration points on the calibration device, and there is a relatively obvious gap between the pixel value (gray value) of a calibration point in the image of the calibration plate and the pixel values around it. Based on this, the embodiment of the present application provides a calibration point extraction method, which is introduced next.
  • An implementation of performing calibration point extraction on the multiple pairs of first images to obtain the pixel coordinates of the calibration points extracted from them is: based on binarization processing, extract the contours in each first image of the multiple pairs of first images; then, for any first image, determine the pixel coordinates of the calibration points extracted from that first image according to the pixel coordinates of the contours in that first image.
  • the contour to be extracted refers to the contour of the target object in the first image, and the calibration point to be extracted represents the target object in the image.
  • the image is binarized based on a binarization threshold, and the contour of the image is extracted.
  • For the calibration plate provided by the embodiment of the present application, the hole is used as the target object, the center of the hole is the calibration point to be extracted, and the extracted contour is the contour of the hole on the calibration plate in the image.
  • the center pixel coordinates of the contour are calculated based on the pixel coordinates of a plurality of pixel points composing the contour, and the pixel coordinates of the calibration point corresponding to the contour are obtained, That is, the pixel coordinates of a calibration point extracted from the image are obtained.
  • the contours in the first image can be extracted based on multiple binarization thresholds.
  • For the same target object, more than one contour may be extracted under the multiple binarization thresholds, and the centers of these contours are consistent; these contours are formed into a contour family, and the pixel coordinates of the calibration point corresponding to the contour family can be obtained by calculating the center pixel coordinates of the contour family.
  • In implementation, the initial value and termination value of the binarization threshold are set, along with a grayscale step.
  • The binarization threshold is stepped from the initial value to the termination value by the grayscale step; at each step the image is binarized and the contours are extracted. After the contours in the image are extracted, for any contour, the contours whose centers lie within the distance threshold of that contour's center are obtained and grouped with it into a contour family; that is, contours whose mutual center distances are within the distance threshold are merged into a contour family.
  • the distance threshold can be 5 pixels
  • the grayscale step can be 10
  • the initial value and termination value of the binarization threshold can be 100 and 200, respectively.
  • the distance threshold, grayscale step size, initial value and termination value of the binarization threshold can also be set to other values.
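A sketch of this multi-threshold contour extraction and grouping using OpenCV; the exact grouping rule (attach a new center to the first existing family whose mean center lies within th_dist) is one plausible reading of the description, and the default parameter values simply echo the examples above:

```python
import cv2
import numpy as np

def extract_contour_families(gray, th_init=100, th_end=200, th_step=10, th_dist=5):
    """Sketch: binarize the image at a series of thresholds, collect contour
    centers, and merge centers lying within th_dist pixels of an existing
    family's mean center into that contour family."""
    families = []  # each family is a list of (cx, cy) contour centers
    for th in range(th_init, th_end + 1, th_step):
        _, binary = cv2.threshold(gray, th, 255, cv2.THRESH_BINARY)
        # OpenCV 4.x return signature: (contours, hierarchy)
        contours, _ = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            m = cv2.moments(c)
            if m["m00"] == 0:
                continue
            center = np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])
            for fam in families:
                if np.hypot(*(center - np.mean(fam, axis=0))) <= th_dist:
                    fam.append(center)
                    break
            else:
                families.append([center])
    return families
```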
  • Since the calibration points to be extracted in each first image are evenly distributed at equal intervals, ideally the contours or contour families in any first image should also be evenly spaced at equal intervals.
  • Based on this, the contours or contour families in the image can be screened, contours or contour families with obvious deviations can be eliminated, and the pixel coordinates of the calibration points can then be extracted based on the remaining contours or contour families. This is introduced below by taking the screening of contour families in the image as an example.
  • In this case, an implementation of determining the pixel coordinates of the calibration points extracted from any first image is as follows. In the first image, multiple contours whose mutual distances are within the distance threshold and whose count is not less than the contour threshold are formed into contour families, and multiple contour families in that first image are obtained. For any contour family in the first image, the target contour family closest to it is obtained from the multiple contour families in the image, and the pixel abscissa difference and pixel ordinate difference between the center of that contour family and the center of the target contour family are calculated, to obtain the pixel abscissa difference and pixel ordinate difference corresponding to that contour family.
  • The sum of the pixel abscissa difference and the pixel ordinate difference corresponding to each contour family is calculated to obtain the calibration spacing corresponding to that contour family; contour families whose calibration spacing deviates from the reference spacing by more than the spacing threshold are removed, the center of each remaining contour family is determined as an extracted calibration point, and the center pixel coordinates of each remaining contour family are determined as the pixel coordinates of an extracted calibration point.
  • The pixel abscissa difference and the pixel ordinate difference are not less than zero; that is, the absolute values of the coordinate differences are taken.
  • Exemplarily, for any first image, the binarization threshold is updated with the grayscale step th_step and the contours in the image are extracted; contours whose centers are within the distance threshold th_dist of each other are merged into candidate contour families. The candidate contour families are then traversed, the candidate families containing fewer contours than the contour threshold th_num are removed, and the remaining candidate families are taken as the contour families of the image.
  • The contour families in the image are then traversed. For any contour family (referred to below as the first contour family for convenience), the target contour family closest to it is found among the other contour families in the image, and the pixel abscissa difference Δu and pixel ordinate difference Δv between the centers of the two families are computed; Δu and Δv are absolute values.
  • Since the calibration points are evenly distributed at equal intervals, if the contour families are accurate, the sum of the pixel abscissa difference and pixel ordinate difference of every contour family is always close, regardless of the orientation of the calibration board; that is, Δu_i + Δv_i and Δu_j + Δv_j are always close, where i and j denote any two contour families in one image. Based on this, the sum of Δu and Δv of each contour family in the image is computed, giving the calibration spacing Δu + Δv of the corresponding family.
  • Contour families whose calibration spacing differs from a reference spacing by more than a spacing threshold are eliminated from the image, i.e. the families whose calibration spacing obviously deviates are removed. The center of each remaining contour family is one finally extracted calibration point, and the center pixel coordinate of each remaining family is the pixel coordinate of one finally extracted calibration point.
  • The contour families in the image are traversed, and each time it is judged whether one contour family needs to be eliminated. Optionally, the reference spacing is the average of the calibration spacings of all contour families in the image; or the average of the calibration spacings of the contour families other than the second contour family, where the second contour family is the family currently being judged; or the average of the calibration spacings of the remaining families that have not yet been judged; or the average of the calibration spacings of the remaining not-yet-judged families excluding the second contour family.
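Continuing the sketch above, the family screening by calibration spacing could look as follows; th_num and spacing_th are illustrative values, and the reference spacing here is the simple mean over all families (the first of the variants listed above).

```python
import numpy as np

def filter_calibration_points(families, th_num=3, spacing_th=10.0):
    """Drop families with too few member contours, then drop families whose
    calibration spacing (|du| + |dv| to the nearest family) deviates from the
    reference spacing by more than spacing_th; return the surviving centers."""
    fams = [f for f in families if len(f["contours"]) >= th_num]
    centers = np.array([f["center"] for f in fams])
    spacings = []
    for i, c in enumerate(centers):
        diff = centers - c
        dist = np.hypot(diff[:, 0], diff[:, 1])   # Euclidean distance to every family
        dist[i] = np.inf                          # ignore the family itself
        j = int(np.argmin(dist))                  # nearest target family
        du, dv = np.abs(centers[j] - c)
        spacings.append(du + dv)                  # calibration spacing for this family
    spacings = np.array(spacings)
    ref = spacings.mean()                         # reference spacing: mean over all families
    keep = np.abs(spacings - ref) <= spacing_th
    return centers[keep]                          # one (u, v) per extracted calibration point
```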
  • Step 503 Calibrate external parameters of the binocular camera according to the pixel coordinates of the calibration points extracted from the pairs of first images, where the external parameters include a translation matrix and a rotation matrix between the visible light camera and the thermal infrared camera.
  • After calibration points are extracted from the multiple pairs of first images as described above, the pixel coordinates of the calibration points in each first image are obtained, and the external parameters of the binocular camera are calibrated from them. Optionally, the internal parameters of the binocular camera, i.e. the respective internal parameters of the visible light camera and the thermal infrared camera, can also be calibrated from the extracted pixel coordinates; of course, the internal parameters of the two cameras can instead be calibrated by other methods, which is not limited in this embodiment of the present application.
  • The world coordinates of the calibration points and the positional relationship between them are known, for example the calibration points are evenly distributed at equal intervals. Based on this, the mapping relationship between the pixel coordinates and the world coordinates of the calibration points extracted from each first image can be determined, and the internal and external parameters of the binocular camera are then solved from this mapping relationship.
  • Exemplarily, let the set of pixel coordinates of all calibration points extracted from one first image be {(u_i, v_i)}, where different values of i denote different calibration points. If the world coordinate of the upper-left calibration point on a calibration board is defined as (0, 0, 0), then, because the calibration points are evenly spaced, the world coordinates of the other calibration points on the board can also be determined. In this way the mapping relationship between the pixel coordinates and the world coordinates of every calibration point in the corresponding first image is obtained, and the internal and external parameters of the binocular camera can be solved from these data.
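As a small illustration, the world coordinates of the evenly spaced calibration points can be generated as below; rows, cols and pitch are whatever the board actually provides (for example a 9x9 hole grid with a known center-to-center spacing) and are assumptions here, not values fixed by the embodiment.

```python
import numpy as np

def board_object_points(rows, cols, pitch):
    """World coordinates of a rows x cols grid of hole centers, with the
    upper-left hole at (0, 0, 0) and `pitch` the center-to-center spacing."""
    pts = np.zeros((rows * cols, 3), np.float32)
    pts[:, :2] = np.mgrid[0:cols, 0:rows].T.reshape(-1, 2) * pitch
    return pts
```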
  • In this embodiment of the present application, Zhang's calibration method is used to solve the internal parameters. During this process, the LM (Levenberg-Marquardt) method is used to optimize the result, and accurate internal parameters are obtained for both the visible light camera and the thermal infrared camera. At the same time, for each pair of first images, the rotation matrix and translation matrix of each camera are obtained: for the visible light camera they are denoted R_i^l and T_i^l, and for the thermal infrared camera R_i^r and T_i^r, where different values of i denote different first image pairs. In other embodiments, other methods may also be used to solve the internal parameters. Throughout, the subscripts or superscripts l and r denote data of the visible light camera and the thermal infrared camera, respectively.
  • The external parameters are then solved as follows: for each pair of first images, R_i = R_i^r × (R_i^l)^-1 and T_i = T_i^r - R_i × T_i^l are computed, and R_i and T_i are averaged over all first image pairs to obtain the rotation matrix R and the translation matrix T between the two cameras, i.e. the external parameters of the binocular camera.
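A minimal sketch of this step with OpenCV (whose calibrateCamera implements Zhang's method with built-in LM refinement) might look as follows; it is not the embodiment's exact implementation, and the direct averaging of the per-view rotation matrices is kept only because that is what the text describes (a production implementation would re-orthonormalize the averaged rotation).

```python
import cv2
import numpy as np

def calibrate_binocular(obj_pts, img_pts_l, img_pts_r, size_l, size_r):
    """obj_pts: list of (N, 3) float32 world points per view (the hole grid);
    img_pts_l / img_pts_r: matching (N, 1, 2) float32 pixel points per view."""
    # intrinsics and per-view rotation/translation of each camera
    _, M_l, d_l, rvecs_l, tvecs_l = cv2.calibrateCamera(obj_pts, img_pts_l, size_l, None, None)
    _, M_r, d_r, rvecs_r, tvecs_r = cv2.calibrateCamera(obj_pts, img_pts_r, size_r, None, None)

    Rs, Ts = [], []
    for rl, tl, rr, tr in zip(rvecs_l, tvecs_l, rvecs_r, tvecs_r):
        R_l, _ = cv2.Rodrigues(rl)
        R_r, _ = cv2.Rodrigues(rr)
        R_i = R_r @ R_l.T            # R_i = R_i^r (R_i^l)^-1
        T_i = tr - R_i @ tl          # T_i = T_i^r - R_i T_i^l
        Rs.append(R_i)
        Ts.append(T_i)
    R = np.mean(Rs, axis=0)          # averaged over all first image pairs
    T = np.mean(Ts, axis=0)
    return M_l, d_l, M_r, d_r, R, T
```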
  • Step 504: Reduce the translation component along the optical axis in the translation matrix included in the external parameters to obtain an adjusted translation matrix, and calibrate the respective rotation amounts of the visible light camera and the thermal infrared camera according to the rotation matrix and the adjusted translation matrix.
  • After the external parameters of the binocular camera are calibrated, the respective rotation amounts of the two cameras still need to be computed in order to correct their images to an epipolar-aligned state. Exemplarily, the rotation amounts are solved based on the Bouguet algorithm; the principle of the standard Bouguet algorithm is introduced first.
  • In the standard Bouguet algorithm, given the rotation matrix R and translation matrix T between the two cameras, R is split into a rotation component r_l for the visible light camera and a rotation component r_r for the thermal infrared camera, so that each camera rotates by half of R; the split is chosen so that the distortion caused by image reprojection is minimized and the common area of the two camera views is maximized. A rotation adjustment matrix R_rect is then constructed from the translation matrix T. The correction matrix of the visible light camera, R_l' = R_rect × r_l, is a correction of the rotation matrix of the visible light camera obtained together with the internal parameters in step 503, and the rotation amount of the visible light camera is obtained after this correction; likewise, the correction matrix of the thermal infrared camera, R_r' = R_rect × r_r, is a correction of the rotation matrix of the thermal infrared camera obtained in step 503, and the rotation amount of the thermal infrared camera is obtained after this correction. R_l' and R_r' are the calibrated rotation amounts of the visible light camera and the thermal infrared camera, respectively.
  • Optionally, the respective projection matrices of the two cameras can then be solved from their internal parameters and correction matrices: the projection matrix of the visible light camera is P_l = R_l' × M_l and the projection matrix of the thermal infrared camera is P_r = R_r' × M_r, where M_l is the internal parameter matrix of the visible light camera and M_r is the internal parameter matrix of the thermal infrared camera.
  • The standard Bouguet algorithm introduced above can bring the images of the binocular camera into an epipolar-aligned state while retaining the maximum common area of the two camera views. For the visible light camera and the thermal infrared camera, however, the physical structures of the two cameras differ greatly, and the translation component T_z along the optical axis in the translation matrix T between them may be very large; computing the rotation adjustment matrix R_rect from such a T_z forces both cameras to rotate by a large angle about the vertical axis during stereo correction, so that most of the area of the visible light and thermal infrared image pair is lost to the rotation and the images become unusable. To solve this problem, the embodiment of the present application proposes a stereo correction method that retains the optical center offset, which is introduced next.
  • In this embodiment, after the external parameters of the binocular camera are obtained in step 503, the translation component along the optical axis direction in the translation matrix included in the external parameters is reduced to obtain an adjusted translation matrix, and the respective rotation amounts of the visible light camera and the thermal infrared camera are determined from the rotation matrix included in the external parameters and the adjusted translation matrix. Optionally, one way to reduce the translation component along the optical axis direction is to set it to zero; in general, the reduction may refer to reducing it to a value smaller than the original value and not smaller than zero.
  • Exemplarily, in the process of computing the rotation adjustment matrix with the Bouguet algorithm, the translation component T_z in T is set to zero before R_rect is computed, and R_l' and R_r' are then obtained from R_rect. In this way the offset along the optical axis is not processed during stereo correction, and the corrected images can retain a larger area.
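One way to sketch this with OpenCV, whose stereoRectify is a Bouguet-style rectification, is simply to zero the T_z component of the calibrated translation before computing the rotations; this is an approximation of the described correction under that assumption, not necessarily how the embodiment computes R_rect internally.

```python
import cv2

def rectify_keep_optical_center_offset(M_l, d_l, M_r, d_r, size, R, T):
    """Stereo rectification with the optical-axis translation component zeroed."""
    T_adj = T.reshape(3, 1).astype(float)
    T_adj[2, 0] = 0.0                     # reduce T_z to zero, keep T_x and T_y
    R_l, R_r, P_l, P_r, Q, _, _ = cv2.stereoRectify(
        M_l, d_l, M_r, d_r, size, R, T_adj, alpha=1)  # alpha=1 keeps the full view
    # maps for warping the visible light image into the rectified pair
    map_lx, map_ly = cv2.initUndistortRectifyMap(M_l, d_l, R_l, P_l, size, cv2.CV_32FC1)
    return R_l, R_r, P_l, P_r, (map_lx, map_ly)
```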
  • When the binocular camera is used to compute image depth, the computation is based on a disparity-distance formula, and the premise of the disparity-distance formula given in the aforementioned formula (1) is that the offset along the optical axis is handled via the translation component T_z during stereo correction. If this offset is not processed during stereo correction, that is, if T_z is set to zero during stereo calibration, the disparity-distance formula needs to be re-determined, i.e. the mapping relationship between the disparity and the distance of the photographed object needs to be re-derived. In other words, under the stereo correction method retaining the optical center offset, after the internal and external parameters of the binocular camera are calibrated, the mapping relationship between the disparity and the distance of the photographed object is determined according to the translation matrix included in the calibrated external parameters. The process of re-deriving the disparity-distance formula is introduced next.
  • Suppose a measured point (photographed object) A has coordinates (X, Y, Z) in the camera coordinate system of one camera, where Z is the distance depth from the measured point A to that camera. According to the pinhole imaging principle, the pixel abscissa of the image of point A in that camera is u = f × X/Z + u_0 and the pixel ordinate is v = f × Y/Z + v_0, where f is the focal length of the camera and (u_0, v_0) is the optical center.
  • Since the binocular camera has been stereo-calibrated and corrected, the pixel ordinates of the imaging points of the measured point A in the two cameras are equal and only the abscissas differ; let the pixel abscissas of point A in the visible light camera and in the thermal infrared camera be u_l and u_r, respectively. After correction the focal length f of the two cameras is the same and the optical centers are also equal, with coordinates (u_0, v_0). Let the calibrated translation matrix between the two cameras be (T_x, T_y, T_z)^T, with direction from the left camera (visible light camera) to the right camera (thermal infrared camera).
  • If the stereo correction method retaining the optical center offset is not used, then for the calibrated cameras u_l = f × X/Z + u_0 and u_r = f × (X + T_x)/Z + u_0; subtracting the two gives the disparity disp = u_l - u_r = -f × T_x/Z, so Z = -T_x × f/(u_l - u_r).
  • If the stereo correction method retaining the optical center offset is used, the T_z component must be taken into account when deriving the disparity-distance formula, and the formulas become u_l = f × X/Z + u_0 and u_r = f × (X + T_x)/(Z + T_z) + u_0. Moving X to one side of each equation gives (u_l - u_0) × Z/f = X and (u_r - u_0) × (Z + T_z)/f = X + T_x; rearranging and combining gives (u_l - u_r) × Z = (u_r - u_0) × T_z - T_x × f, so Z = ((u_r - u_0) × T_z - T_x × f)/(u_l - u_r), where u_l - u_r is the disparity disp of the two cameras and Z is the distance depth of the photographed object.
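As a small sketch, the re-derived mapping from disparity to distance can be evaluated directly; the sign conventions follow the text above, with the translation taken from the left (visible light) camera to the right (thermal infrared) camera.

```python
def depth_from_disparity(u_l, u_r, u0, f, T_x, T_z):
    """Distance of the measured point when T_z was not removed during correction:
    (u_l - u_r) * Z = (u_r - u0) * T_z - T_x * f."""
    disp = u_l - u_r
    return ((u_r - u0) * T_z - T_x * f) / disp
```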
  • The stereo calibration method provided by the embodiment of the present application is briefly explained again with reference to FIG. 6, which is a flowchart of another stereo calibration method for a binocular camera. Calibration images (such as the multiple pairs of initial images in the foregoing embodiment) are input, and the imaging specifications of the calibration images are unified, that is, the camera models are unified; if a calibration image is a composite-board image, it also needs to be cropped to obtain single-board images. Calibration points are then extracted from the single-board images to obtain their pixel coordinates, and the internal and external parameters of the binocular camera are computed from the extracted pixel coordinates. Finally, stereo correction that preserves the optical center offset is performed, and the respective rotation amounts of the two cameras are determined. The two calibrated cameras can then be used for stereo correction of images, i.e. corrected images are obtained. A stereo-calibrated binocular camera can be used in scenes where image depth needs to be computed, such as distance measurement and temperature measurement of photographed objects, and the accuracy of both measurements can be very high. A sketch of the imaging-specification unification step follows.
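Under the embodiment's example of upsampling the thermal infrared image and cropping the visible light image, the specification-unification step could be sketched as follows; scale, dw and dh stand for the calibrated upsampling multiple and the offsets of the cropping region and are assumed to have been determined beforehand.

```python
import cv2

def unify_imaging_specs(vis_img, ir_img, scale, dw, dh):
    """Upsample the thermal infrared image by the upsampling multiple, then crop
    the visible light image to ((dw, dw + W), (dh, dh + H)) so that both images
    share the unified resolution W x H."""
    h_ir, w_ir = ir_img.shape[:2]
    W, H = int(w_ir * scale), int(h_ir * scale)        # unified resolution
    ir_up = cv2.resize(ir_img, (W, H), interpolation=cv2.INTER_LINEAR)
    vis_crop = vis_img[dh:dh + H, dw:dw + W]           # keep only the offset region
    return vis_crop, ir_up
```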
  • To sum up, the embodiments of the present application provide a stereo calibration method for a binocular camera in which the imaging specifications of the visible light camera and the thermal infrared camera are unified first, so that the subsequent stereo calibration is performed on the premise that the imaging specifications of the two cameras are unified; the stereo calibration can therefore be accurate and effective.
  • FIG. 7 is a schematic structural diagram of a stereo calibration apparatus for a binocular camera provided by an embodiment of the present application. The stereo calibration apparatus 700 can be implemented as part or all of a computer device by software, hardware, or a combination of the two. Referring to FIG. 7, the apparatus 700 includes: a specification unification module 701, a calibration point extraction module 702, an external parameter calibration module 703 and a stereo correction module 704.
  • a specification unification module 701 configured to process multiple pairs of initial images to obtain multiple pairs of first images with unified imaging specifications, where each pair of initial images in the multiple pairs of initial images includes a visible light image and a thermal infrared image of the same object;
  • a calibration point extraction module 702 configured to perform calibration point extraction on the multiple pairs of first images to obtain pixel coordinates of the calibration points extracted from the multiple pairs of first images;
  • an external parameter calibration module 703, used to calibrate the external parameters of the binocular camera according to the pixel coordinates of the calibration points extracted from the multiple pairs of first images, where the external parameters include the translation matrix and rotation matrix between the visible light camera and the thermal infrared camera;
  • a stereo correction module 704, used for reducing the translation component along the optical axis direction in the translation matrix included in the external parameters to obtain an adjusted translation matrix, and for calibrating the respective rotation amounts of the visible light camera and the thermal infrared camera according to the rotation matrix and the adjusted translation matrix.
  • the calibration point extraction module 702 includes:
  • a contour extraction unit, used for extracting the contours in each first image of the multiple pairs of first images based on binarization processing;
  • a calibration point determination unit, used for determining, for any first image of the multiple pairs of first images, the pixel coordinates of the calibration points extracted from that first image according to the pixel coordinates of the contours in that first image.
  • the calibration points to be extracted in each of the multiple pairs of first images are evenly distributed at equal intervals;
  • the calibration point determination unit includes:
  • a first processing subunit, used to group contours in the first image that lie within the distance threshold of each other and whose number is not less than the contour threshold into contour families, obtaining multiple contour families in the first image;
  • a second processing subunit, used to obtain, for any contour family in the first image, the target contour family closest to it from the multiple contour families in the first image, and to calculate the pixel abscissa difference and pixel ordinate difference between the center of the contour family and the center of the target contour family, obtaining the pixel abscissa difference and pixel ordinate difference corresponding to the contour family, where the pixel abscissa difference and the pixel ordinate difference are not less than zero;
  • a third processing subunit, used to calculate the sum of the pixel abscissa difference and pixel ordinate difference corresponding to each contour family in the first image, obtaining the calibration spacing corresponding to the respective contour family;
  • a fourth processing subunit, used to eliminate the contour families in the first image whose calibration spacing differs from the reference spacing by more than the spacing threshold, to determine the center of each remaining contour family as an extracted calibration point, and to determine the center pixel coordinate of each remaining contour family as the pixel coordinate of an extracted calibration point.
  • the resolutions of the visible light image and the thermal infrared image included in each pair of first images in the plurality of pairs of first images are unified;
  • the specification unification module 701 includes:
  • a scaling unit configured to upsample the thermal infrared images in the multiple pairs of initial images according to the upsampling multiples corresponding to the thermal infrared cameras, to obtain thermal infrared images in the multiple pairs of first images
  • the cropping unit is configured to crop the visible light images in the multiple pairs of initial images according to the cropping regions corresponding to the visible light cameras, to obtain the visible light images in the multiple pairs of first images.
  • the apparatus 700 further includes:
  • a scaling parameter determination module, used to determine the upsampling multiple corresponding to the thermal infrared camera according to the pixel size relationship of the same heat source in one or more pairs of second images, where each pair of second images in the one or more pairs of second images includes a visible light image and a thermal infrared image of the same heat source;
  • a cropping region determination module, configured to determine the cropping region corresponding to the visible light camera according to the original resolution of the thermal infrared camera, the upsampling multiple, and the difference relationship between the pixel coordinates corresponding to the same heat source in one or more pairs of third images, where each pair of third images in the one or more pairs of third images includes a visible light image and a thermal infrared image of the same heat source.
  • the scaling parameter determination module includes:
  • a ratio calculation unit, configured to calculate the ratio between the pixel lengths of the same heat source in each pair of second images in the one or more pairs of second images, and/or calculate the ratio between the pixel areas of the same heat source in each pair of second images in the one or more pairs of second images, to obtain at least one scaling ratio;
  • a parameter determination unit configured to determine an upsampling multiple corresponding to the thermal infrared camera according to the at least one scaling ratio value.
  • the cropping region determination module includes:
  • the coordinate difference calculation unit is configured to, for any one of the one or more pairs of third images, respectively calculate the difference between the pixel abscissas of the same heat source in the any pair of third images and the difference between the ordinates of the pixels, to obtain the abscissa difference and ordinate difference corresponding to any pair of third images;
  • an offset determination unit, configured to obtain the abscissa offset between the visible light camera and the thermal infrared camera according to the abscissa differences corresponding to the one or more pairs of third images, and to obtain the ordinate offset between the visible light camera and the thermal infrared camera according to the ordinate differences corresponding to the one or more pairs of third images;
  • a resolution determination unit, configured to determine the unified horizontal resolution according to the original horizontal resolution of the thermal infrared camera and the upsampling multiple, and to determine the unified vertical resolution according to the original vertical resolution of the thermal infrared camera and the upsampling multiple;
  • a cropping region determination unit, configured to determine the cropping region corresponding to the visible light camera according to the abscissa offset and ordinate offset between the visible light camera and the thermal infrared camera, and the unified horizontal resolution and unified vertical resolution.
  • To sum up, the embodiment of the present application provides a stereo calibration method for a binocular camera in which the imaging specifications of the visible light camera and the thermal infrared camera are first calibrated and unified, so that the subsequent stereo calibration is performed on the premise that the imaging specifications of the two cameras are unified; the stereo calibration can therefore be accurate and effective. In addition, because the physical structures of the visible light camera and the thermal infrared camera differ greatly, the translation component along the optical axis direction is reduced, and on this basis the respective rotation amounts of the two cameras are determined; after the images are rotated and corrected based on these rotation amounts, more of the image can be retained, which guarantees the usability of the images, i.e. the reliability of the stereo calibration. When this method is combined with the calibration device provided by the embodiments of the present application, the accuracy of the stereo calibration is even higher.
  • It should be noted that, when the stereo calibration apparatus for a binocular camera provided by the above embodiment performs stereo calibration on a binocular camera, the division into the above functional modules is only used as an example for illustration; in practical applications, the above functions can be allocated to different functional modules as required, that is, the internal structure of the apparatus can be divided into different functional modules to complete all or part of the functions described above. In addition, the stereo calibration apparatus for a binocular camera provided by the above embodiment and the embodiment of the stereo calibration method for a binocular camera belong to the same concept; the specific implementation process is detailed in the method embodiment and is not repeated here.
  • FIG. 8 is a schematic structural diagram of a computer device provided by an embodiment of the present application.
  • the computer device is used to perform stereo calibration on the binocular camera, that is, the computer device is used to implement the stereo calibration method of the binocular camera in the foregoing embodiment. Specifically:
  • Computer device 800 includes a central processing unit (CPU) 801, system memory 804 including random access memory (RAM) 802 and read only memory (ROM) 803, and a system bus 805 connecting system memory 804 and central processing unit 801.
  • Computer device 800 also includes a basic input/output system (I/O system) 806 that facilitates the transfer of information between the devices within the computer, and a mass storage device 807 for storing an operating system 813, application programs 814, and other program modules 815.
  • Basic input/output system 806 includes a display 808 for displaying information and input devices 809 such as a mouse, keyboard, etc., for user input of information. Both the display 808 and the input device 809 are connected to the central processing unit 801 through the input and output controller 810 connected to the system bus 805 .
  • the basic input/output system 806 may also include an input output controller 810 for receiving and processing input from various other devices such as a keyboard, mouse, or electronic stylus. Similarly, input output controller 810 also provides output to a display screen, printer, or other type of output device.
  • Mass storage device 807 is connected to central processing unit 801 through a mass storage controller (not shown) connected to system bus 805 .
  • Mass storage device 807 and its associated computer-readable media provide non-volatile storage for computer device 800 . That is, mass storage device 807 may include a computer-readable medium (not shown) such as a hard disk or a CD-ROM drive.
  • Computer-readable media can include computer storage media and communication media.
  • Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data.
  • Computer storage media include RAM, ROM, EPROM, EEPROM, flash memory or other solid state storage technology, CD-ROM, DVD or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices.
  • the computer device 800 may also operate through a network connection to a remote computer on a network, such as the Internet. That is, computer device 800 may be connected to network 812 through network interface unit 811 connected to system bus 805, or may use network interface unit 811 to connect to other types of networks or remote computer systems (not shown).
  • the above-mentioned memory also includes one or more programs, and the one or more programs are stored in the memory and configured to be executed by the CPU.
  • the one or more programs include instructions for performing the stereo calibration method for the binocular camera provided by the embodiments of the present application.
  • a computer-readable storage medium is also provided, and a computer program is stored in the storage medium, and when the computer program is executed by a processor, the steps of the stereo calibration method for a binocular camera in the above-mentioned embodiments are implemented.
  • the computer-readable storage medium may be ROM, RAM, CD-ROM, magnetic tape, floppy disk, optical data storage device, and the like.
  • the computer-readable storage medium mentioned in the embodiments of the present application may be a non-volatile storage medium, in other words, may be a non-transitory storage medium.
  • a computer program product containing instructions, which, when executed on a computer, cause the computer to perform the steps of the above-described stereo calibration method for a binocular camera.
  • It should be understood that "at least one" mentioned herein refers to one or more, and "plurality" refers to two or more. In the description of the embodiments of the present application, unless otherwise specified, "/" means "or"; for example, A/B can mean A or B. "And/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B can mean: A exists alone, A and B exist at the same time, or B exists alone. In addition, to clearly describe the technical solutions of the embodiments of the present application, words such as "first" and "second" are used to distinguish identical or similar items having substantially the same function and effect. Those skilled in the art can understand that words such as "first" and "second" do not limit the quantity or the execution order, and do not necessarily mean that the items are different.


Abstract

The embodiments of the present application disclose a stereo calibration method, apparatus and system for a binocular camera, and a binocular camera, belonging to the technical field of computer vision. In this solution, the calibration device reflects light and heat through rear supplementary lighting and a diffuse reflection surface while ensuring heat dissipation, so that the light and heat on the calibration board are uniform and stable and the captured visible light image and thermal infrared image are clear. In the stereo calibration method of this solution, the imaging specifications of the two cameras are unified first, so that the subsequent stereo calibration can be accurate and effective. In addition, the translation component along the optical axis direction is reduced in order to determine the respective rotation amounts of the two cameras; after the images are rotated and corrected based on the determined rotation amounts, more of the image can be retained, which guarantees the usability of the images, i.e. the reliability of the stereo calibration. When the stereo calibration method is combined with the calibration device, the accuracy of the stereo calibration is even higher.

Description

双目相机的立体标定方法、装置、系统及双目相机
本申请实施例要求于2020年12月18日提交的申请号为202011510044.0、发明名称为“双目相机的立体标定方法、装置、系统及双目相机”的中国专利申请的优先权,其全部内容通过引用结合在本申请实施例中。
技术领域
本申请实施例涉及计算机视觉技术领域,特别涉及一种双目相机的立体标定方法、装置、系统及双目相机。
背景技术
当前,可见光相机和热红外相机组成的双目相机在测温、测距等需要计算图像深度的场景中得到应用。这种双目相机是一种异源、异构的双目相机。其中,异源是指组成双目的两台相机的信号来源不同,可见光相机的信号来源是可见光,热红外相机的信号来源是热量,异构是指组成双目的两台相机的分辨率、焦距、像元尺寸、视场角等物理结构不完全相同。
由于可见光相机和热红外相机组成的双目相机为异源、异构的双目相机,可见光相机和热红外相机的物理结构差异较大,标定困难重重,后续采用异源、异构的双目相机进行测距、测温的时候也会影响到准确性,因此,需要提出一种针对可见光与热红外双目相机的立体标定方法。
发明内容
本申请实施例提供了一种双目相机的立体标定方法、装置、系统及双目相机,有效针对可见光与热红外相机的立体标定,且保证了立体标定的精准度。所述技术方案如下:
一方面,提供了一种双目相机的立体标定方法,所述双目相机包括可见光相机和热红外相机,所述方法包括:
对多对初始图像进行处理,得到成像规格统一的多对第一图像,所述多对初始图像中的每对初始图像包括同一物体的可见光图像和热红外图像;
对所述多对第一图像进行标定点提取,得到所述多对第一图像中提取出的标定点的像素坐标;
根据所述多对第一图像中提取出的标定点的像素坐标,标定所述双目相机的外参,所述外参包括所述可见光相机和所述热红外相机之间的平移矩阵和旋转矩阵;
将所述外参包括的平移矩阵中表示沿光轴方向的平移分量作减小处理,得到调整后的平移矩阵,根据所述旋转矩阵和所述调整后的平移矩阵,标定所述可见光相机和所述热红外相机各自的旋转量。
可选地,所述对所述多对第一图像进行标定点提取,得到所述多对第一图像中提取出的标定点的像素坐标,包括:
基于二值化处理的方式,提取所述多对第一图像中每个第一图像中的轮廓;
对于所述多对第一图像中的任一第一图像,根据所述任一第一图像中的轮廓的像素坐标,确定所述任一第一图像中提取出的标定点的像素坐标。
可选地,所述多对第一图像中的每个第一图像中待提取的标定点是等间距均匀分布的;
所述根据所述任一第一图像中的轮廓的像素坐标,确定所述任一第一图像中提取出的标定点的像素坐标,包括:
将所述任一第一图像中相距在距离阈值之内,且数量不少于轮廓阈值的多个轮廓,组成轮廓族,得到所述任一第一图像中的多个轮廓族;
对于所述任一第一图像中的任一轮廓族,从所述任一第一图像中的多个轮廓族中,获取与所述任一轮廓族距离最近的目标轮廓族,计算所述任一轮廓族的中心与所述目标轮廓族的中心之间的像素横坐标差值和像素纵坐标差值,得到所述任一轮廓族对应的像素横坐标差值和像素纵坐标差值,所述像素横坐标差值和所述像素纵坐标差值不小于零;
计算所述任一第一图像中每个轮廓族对应的像素横坐标差值和像素纵坐标差值的和,得到所述任一第一图像中相应轮廓族对应的标定间距;
将所述任一第一图像中对应的标定间距与参考间距之间的差距超过间距阈值的轮廓族剔除,将剩余的每个轮廓族的中心确定为提取出的一个标定点,将剩余的每个轮廓族的中心像素坐标确定为提取出的一个标定点的像素坐标。
可选地,所述多对第一图像中的每对第一图像包括的可见光图像和热红外图像的分辨率统一;
所述对多对初始图像进行处理,得到成像规格统一的多对第一图像,包括:
根据所述热红外相机对应的上采样倍数,对所述多对初始图像中的热红外图像进行上采样,得到所述多对第一图像中的热红外图像;
根据所述可见光相机对应的裁剪区域,对所述多对初始图像中的可见光图像进行裁剪,得到所述多对第一图像中的可见光图像。
可选地,所述对多对初始图像进行处理,得到成像规格统一的多对第一图像之前,还包括:
根据同一热源在一对或多对第二图像中的像素尺寸关系,确定所述热红外相机对应的上采样倍数,所述一对或多对第二图像中的每对第二图像包括同一热源的可见光图像和热红外图像;
根据所述热红外相机的原分辨率和所述上采样倍数,以及同一热源在一对或多对第三图像中对应的像素坐标之间的差值关系,确定所述可见光相机对应的裁剪区域,所述一对或多对第三图像中的每对第三图像包括同一热源的可见光图像和热红外图像。
可选地,所述根据同一热源在一对或多对第二图像中的像素尺寸关系,确定所述热红外相机对应的上采样倍数,包括:
计算同一热源在所述一对或多对第二图像中的每对第二图像中的像素长度之间的比值,和/或,计算同一热源在所述一对或多对第二图像中的每对第二图像中的像素面积之间的比值,得到至少一个缩放比值;
根据所述至少一个缩放比值,确定所述热红外相机对应的上采样倍数。
可选地,所述根据所述热红外相机的分辨率和所述上采样倍数,以及同一热源在一对或多对第三图像中对应的像素坐标之间的差值关系,确定所述可见光相机对应的裁剪区域,包括:
对于所述一对或多对第三图像中的任一对第三图像,分别计算同一热源在所述任一对第三图像中的像素横坐标之间的差值和像素纵坐标之间的差值,得到所述任一对第三图像对应的横坐标差值和纵坐标差值;
根据所述一对或多对第三图像对应的横坐标差值,得到所述可见光相机和所述热红外相机之间的横坐标偏置,根据所述一对或多对第三图像对应的纵坐标差值,得到所述可见光相机与所述热红外相机之间的纵坐标偏置;
根据所述热红外相机的原水平分辨率和所述上采样倍数,确定统一后的水平分辨率,根据所述热红外相机的原竖直分辨率和所述上采样倍数,确定统一 后的竖直分辨率;
根据所述可见光相机与所述热红外相机之间的横坐标偏置和纵坐标偏置,以及所述统一后的水平分辨率和所述统一后的竖直分辨率,确定所述可见光相机对应的裁剪区域。
另一方面,提供了一种双目相机的立体标定装置,所述装置包括:
规格统一模块,用于对多对初始图像进行处理,得到成像规格统一的多对第一图像,所述多对初始图像中的每对初始图像包括同一物体的可见光图像和热红外图像;
标定点提取模块,用于对所述多对第一图像进行标定点提取,得到所述多对第一图像中提取出的标定点的像素坐标;
外参标定模块,用于根据所述多对第一图像中提取出的标定点的像素坐标,标定所述双目相机的外参,所述外参包括所述可见光相机和所述热红外相机之间的平移矩阵和旋转矩阵;
立体校正模块,用于将所述外参包括的平移矩阵中表示沿光轴方向的平移分量作减小处理,得到调整后的平移矩阵,根据所述旋转矩阵和所述调整后的平移矩阵,标定所述可见光相机和所述热红外相机各自的旋转量。
可选地,所述标定点提取模块包括:
轮廓提取单元,用于基于二值化处理的方式,提取所述多对第一图像中每个第一图像中的轮廓;
标定点确定单元,用于对于所述多对第一图像中的任一第一图像,根据所述任一第一图像中的轮廓的像素坐标,确定所述任一第一图像中提取出的标定点的像素坐标。
可选地,所述多对第一图像中的每个第一图像中待提取的标定点是等间距均匀分布的;
所述标定点确定单元包括:
第一处理子单元,用于将所述任一第一图像中相距在距离阈值之内,且数量不少于轮廓阈值的多个轮廓,组成轮廓族,得到所述任一第一图像中的多个轮廓族;
第二处理子单元,用于对于所述任一第一图像中的任一轮廓族,从所述任一第一图像中的多个轮廓族中,获取与所述任一轮廓族距离最近的目标轮廓族, 计算所述任一轮廓族的中心与所述目标轮廓族的中心之间的像素横坐标差值和像素纵坐标差值,得到所述任一轮廓族对应的像素横坐标差值和像素纵坐标差值,所述像素横坐标差值和所述像素纵坐标差值不小于零;
第三处理子单元,用于计算所述任一第一图像中每个轮廓族对应的像素横坐标差值和像素纵坐标差值的和,得到所述任一第一图像中相应轮廓族对应的标定间距;
第四处理子单元,用于将所述任一第一图像中对应的标定间距与参考间距之间的差距超过间距阈值的轮廓族剔除,将剩余的每个轮廓族的中心确定为提取出的一个标定点,将剩余的每个轮廓族的中心像素坐标确定为提取出的一个标定点的像素坐标。
可选地,所述多对第一图像中的每对第一图像包括的可见光图像和热红外图像的分辨率统一;
所述规格统一模块包括:
缩放单元,用于根据所述热红外相机对应的上采样倍数,对所述多对初始图像中的热红外图像进行上采样,得到所述多对第一图像中的热红外图像;
裁剪单元,用于根据所述可见光相机对应的裁剪区域,对所述多对初始图像中的可见光图像进行裁剪,得到所述多对第一图像中的可见光图像。
可选地,所述装置还包括:
缩放参数确定模块,用于根据同一热源在一对或多对第二图像中的像素尺寸关系,确定所述热红外相机对应的上采样倍数,所述一对或多对第二图像中的每对第二图像包括同一热源的可见光图像和热红外图像;
裁剪区域确定模块,用于根据所述热红外相机的原分辨率和所述上采样倍数,以及同一热源在一对或多对第三图像中对应的像素坐标之间的差值关系,确定所述可见光相机对应的裁剪区域,所述一对或多对第三图像中的每对第三图像包括同一热源的可见光图像和热红外图像。
可选地,所述缩放参数确定模块包括:
比值计算单元,用于计算同一热源在所述一对或多对第二图像中的每对第二图像中的像素长度之间的比值,和/或,计算同一热源在所述一对或多对第二图像中的每对第二图像中的像素面积之间的比值,得到至少一个缩放比值;
参数确定单元,用于根据所述至少一个缩放比值,确定所述热红外相机对应的上采样倍数。
可选地,所述裁剪区域确定模块包括:
坐标差值计算单元,用于对于所述一对或多对第三图像中的任一对第三图像,分别计算同一热源在所述任一对第三图像中的像素横坐标之间的差值和像素纵坐标之间的差值,得到所述任一对第三图像对应的横坐标差值和纵坐标差值;
偏置确定单元,用于根据所述一对或多对第三图像对应的横坐标差值,得到所述可见光相机和所述热红外相机之间的横坐标偏置,根据所述一对或多对第三图像对应的纵坐标差值,得到所述可见光相机与所述热红外相机之间的纵坐标偏置;
分辨率确定单元,用于根据所述热红外相机的原水平分辨率和所述上采样倍数,确定统一后的水平分辨率,根据所述热红外相机的原竖直分辨率和所述上采样倍数,确定统一后的竖直分辨率;
裁剪区域确定单元,用于根据所述可见光相机与所述热红外相机之间的横坐标偏置和纵坐标偏置,以及所述统一后的水平分辨率和所述统一后的竖直分辨率,确定所述可见光相机对应的裁剪区域。
另一方面,提供了一种标定装置,该标定装置用于实现上述所述的双目相机的立体标定方法中图像的采集;
所述标定装置包括标定板和补光补热装置;
所述标定板为金属板,所述标定板上等间距均匀分布有孔洞,所述孔洞的孔壁具有倾斜角度;
所述补光补热装置包括补光装置、反射板和补热装置,所述补光装置固定在所述标定板的背面,所述反射板固定在与所述标定板的背面相距一个散热距离的位置,所述补热装置固定在所述反射板的背面;
所述补光装置用于向所述反射板发出光线,所述补热装置用于发出热量,所述反射板的反射面为漫反射面,所述反射板用于通过所述漫反射面向所述标定板反射所述光线,以及向所述标定板传递热量。
可选地,所述标定装置为单板标定装置,所述单板标定装置包括一个标定板和一套补光补热装置;或者,
所述标定装置为第一组合板标定装置,所述第一组合板标定装置包括多个标定板和多套补光补热装置,所述多个标定板和所述多套补光补热装置一一对 应,所述多个标定板的位姿不同;或者,
所述标定装置为第二组合板标定装置,所述第二组合板标定装置包括所述多个标定板和一套补光补热装置,所述多个标定板的位姿不同。
另一方面,提供了一种双目相机的立体标定系统,所述立体标定系统包括所述双目相机和处理器;
所述双目相机包括可见光相机和热红外相机,所述可见光相机和所述热红外相机拍摄同一物体得到的可见光图像和热红外图像作为一对初始图像;
所述处理器,用于对多对初始图像进行处理,得到成像规格统一的多对第一图像;
所述处理器,还用于对所述多对第一图像进行标定点提取,得到所述多对第一图像中提取出的标定点的像素坐标;
所述处理器,还用于根据所述多对第一图像中提取出的标定点的像素坐标,标定所述双目相机的外参,所述外参包括所述可见光相机和所述热红外相机之间的平移矩阵和旋转矩阵;
所述处理器,还用于将所述外参包括的平移矩阵中表示沿光轴方向的平移分量作减小处理,得到调整后的平移矩阵,根据所述旋转矩阵和所述调整后的平移矩阵,标定所述可见光相机和所述热红外相机各自的旋转量。
另一方面,提供了一种双目相机,所述双目相机根据上述所述的立体标定方法标定得到。可选地,所述双目相机包括可见光相机和热红外相机。
另一方面,提供了一种计算机设备,所述计算机设备包括处理器、通信接口、存储器和通信总线,所述处理器、所述通信接口和所述存储器通过所述通信总线完成相互间的通信,所述存储器用于存放计算机程序,所述处理器用于执行所述存储器上所存放的程序,以实现上述所述双目相机的立体标定方法的步骤。
另一方面,提供了一种计算机可读存储介质,所述计算机可读存储介质内存储有计算机程序,所述计算机程序被处理器执行时实现上述所述双目相机的立体标定方法的步骤。
另一方面,提供了一种包含指令的计算机程序产品,当其在计算机上运行时,使得计算机执行上述所述的双目相机的立体标定方法的步骤。
本申请实施例提供的技术方案至少可以带来以下有益效果:
本申请实施例提供了一种标定装置,该标定装置通过反向补光、漫反射面均匀地反射光线和热量,且保证散热,使得标定板上的光线和热量均匀稳定,这样,双目相机拍摄得到的可见光图像和热红外图像清晰,图像质量很高。本申请实施例还提供了一种双目相机的立体标定方法,先将可见光相机和热红外相机的成像规格统一,这样在两台相机的成像规格统一的前提下进行后续的立体标定,立体标定能够准确有效。另外,在本方案中还考虑到可见光相机和热红外相机的物理结构存在较大差异,所以对沿光轴方向的平移分量作减小处理,在此基础上,确定两台相机各自的旋转量。这样,基于两台相机的旋转量旋转校正图像后,能够保留更多的图像,保证了图像的可用性,也即保证了立体标定的可靠性。在将该立体标定方法结合使用本方案设计的标定装置的情况下,立体标定的精准度更高。
附图说明
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是本申请实施例提供的一种标定板的板面示意图;
图2是本申请实施例提供的一种标定板的孔截面示意图;
图3是本申请实施例提供的一种标定装置的示意图;
图4是本申请实施例提供的一种组合板的板面示意图;
图5是本申请实施例提供的一种双目相机的立体标定方法的流程图;
图6是本申请实施例提供的另一种双目相机的立体标定方法的流程图;
图7是本申请实施例提供的一种双目相机的立体标定装置的结构示意图;
图8是本申请实施例提供的一种计算机设备的结构示意图。
具体实施方式
为使本申请实施例的目的、技术方案和优点更加清楚,下面将结合附图对本申请实施方式作进一步地详细描述。
为了便于理解本申请实施例提供的双目相机的立体标定方法,首先对本申请实施例中涉及的部分名词进行解释。
异源、异构双目:异源双目是指组成双目的两台相机的信号来源不同,异构双目是指组成双目的两台相机的分辨率、焦距、像元尺寸、视场角等物理结构不完全相同。例如,本申请实施例中介绍的由可见光相机和热红外相机组成的双目相机是一种异源、异构的双目相机。其中,可见光相机的信号来源是可见光,热红外相机的信号来源是热量,可见光相机和热红外相机的物理结构差异较大。
立体标定:例如标定两台相机的内参、外参,标定两台相机各自的旋转量等等。其中,内参包括光心、焦距、畸变矩阵等参数,外参包括两台相机之间的平移矩阵和旋转矩阵,外参用于表示两台相机之间的平移和旋转关系。立体标定的结果可以将两台相机的图像校正为极线对齐的状态,也即将两台相机的图像平面校正成共面行对准,便于进行后续的立体匹配。
当前,双目相机被广泛应用于测温测距等场景中,无论是用于测温还是测距,需要基于视差求距离公式来计算被拍摄的物体与双目相机之间的距离,视差求距离公式是在双目相机处于理想情况下推导出来的,也即两台相机的两个图像平面是共面行对准。其中,共面行对准是指:两台相机的图像平面在同一水平面上,且同一实物点投影到两个图像平面上时,应该在两个像素坐标系的同一行。而在实际的双目相机中,是不存在完全的共面行对准的两个图像平面的,所以需要对双目相机进行立体标定,立体标定的目标是将实际中非共面行对准的两个图像平面,校正成共面行对准,也即将实际的双目相机校正为理想的双目相机。
本申请实施例通过拍摄标定装置的图像,来进行可见光与热红外双目相机的立体标定。为了能够让可见光相机和热红外相机同时拍摄到清晰的图像,提高立体标定的精度,本申请实施例首先设计了一种标定装置,接下来对该标定装置进行介绍。
在本申请实施例中,标定装置包括标定板和补光补热装置。
首先对标定板进行介绍,在本申请实施例中,标定板为金属板,标定板上等间距均匀分布有孔洞,孔洞的孔壁具有倾斜角度。可选地,标定板上的孔洞为圆孔。可选地,孔壁的倾斜角度为0°至90°内的任一角度,例如15°、30°、45°、50°等。需要说明的是,本申请实施例不限定标定板的板厚、标定板上孔洞的数量、孔洞的形状以及孔洞的倾斜角度。
示例性地,图1是本申请实施例提供的一种标定板的板面示意图,图2是本申请实施例提供的一种标定板的孔截面示意图。假设图1和图2所示为同一个标定板,该标定板的板厚为5mm(毫米),参见图1,该标定板上均匀分布有9*9的圆孔,参见图2,该标定板上的圆孔具有倾斜角度,倾斜角度为15°。图1为标定板的正视图,表示标定板的正面,也即双目相机需要拍摄的一面,图2为标定板的右视的截面图,白色表示孔洞的截面,可以看出标定板的孔洞具有倾斜角度,标定板正面的孔径小于背面的孔径。
接下来对补光补热装置进行介绍,在本申请实施例中,补光补热装置包括补光装置、反射板和补热装置,补光装置固定在标定板的背面,反射板固定在与标定板的背面相距一个散热距离的位置,补热装置固定在反射板的背面。其中,补光装置用于向反射板发出光线,补热装置用于发出热量,反射板的反射面为漫反射面,反射板用于通过漫反射面向标定板反射光线,以及向标定板传递热量。
可选地,补光装置固定在标定板的背面靠近边缘的位置,也即固定在最外排孔洞与边缘之间的位置,以避免补光装置对光线和热量的影响,减少图像中的阴影。可选地,散热距离根据加热装置的加热功率设置,例如,加热功率较大,散热距离可以设置大一些,加热功率较小,散热距离可以设置小一些,使得传递到标定板的热量和标定板上散失的热量相对均衡。可选地,补光装置为补光灯,加热装置为加热贴片。
图3是本申请实施例提供的一种标定装置的示意图。参见图3,左侧的板为标定板,标定板的背面固定有补光灯,右侧的板为反射板,反射板的反射面加工成漫反射面,也即反射板左侧的面为漫反射面,漫反射面可以均匀地反射光线。反射板的背面固定有加热贴片,加热贴片散发热量,通过反射板均匀、高能量地向标定板传递热量,使标定板上的光线和热量均匀稳定。
需要说明的是,本申请实施例通过向后补光的设计,反射板可以均匀、高 能量地向标定板发出光线和热量,保证了可见光相机和热红外相机的成像清晰,且避免了同时向前补光和补热时,设备互相遮挡导致光线和热量不均匀的情况,保证了成像均匀。
在本申请实施例中,标定装置为单板标定装置,单板标定装置包括一个标定板和一套补光补热装置,如图3所示的标定装置。在通过拍摄单板标定装置的图像进行双目相机的立体标定时,需要通过调整该单板标定装置的位姿,以拍摄得到不同位姿的图像,用于立体标定。
为了进一步提高标定的效率,本申请实施例还设计了组合板标定装置,使用多个位姿不同的标定板排放在一起得到一个组合板,基于组合板设计出组合板标定装置。这样,一次拍摄即能获得多个处于不同位姿的标定板的图像,拍摄过程中无需调整标定装置的位姿,提高了效率。接下来介绍本申请实施例提供的两种组合板标定装置。
可选地,标定装置为第一组合板标定装置,第一组合板标定装置包括多个标定板和多套补光补热装置,该多个标定板和多套补光补热装置一一对应,该多个标定板的位姿不同。示例性地,将多个单板标定装置排放在一起,形成一个组合板标定装置,该多个单板标定装置的位姿不同,也即调整该多个单板标定装置的角度,使该多个单板标定装置彼此之间形成一定的夹角。
可选地,标定装置为第二组合板标定装置,第二组合板标定装置包括多个标定板和一套补光补热装置,多个标定板的位姿不同。也即是,将不同位姿的标定板排放在一起得到一个组合板,在该组合板的背面固定一套补光装置,距离组合板一个散热距离的位置固定一个反射板,反射板的反射板为漫反射面,反射板的背面固定一个加热装置。其中,补光装置固定在组合板上靠近边缘的位置,以避免补光装置对光线和热量的影响。组合板标定装置中反射板的尺寸与组合板的尺寸匹配,加热装置也要与组合标定板的尺寸匹配,例如加热贴片的尺寸与组合板的尺寸匹配。
需要说明的是,本申请实施例不限定组合标定装置包括的标定板的数量,标定板的数量可以根据标定板的尺寸、双目相机的视场角等调整。图4是本申请实施例提供的一种组合板的板面示意图。该组合板包括4个如图1所示的标定板,各个标定板之间形成有一定的夹角,也即各个标定板的位姿不同。
综上可知,本申请实施例提供了一种标定装置,该标定装置用于实现双目相机的立体标定方法中图像的采集。设计的标定装置通过反向补光、漫反射面 均匀、高能量的反射光线和热量,且保证散热,使得标定板上的光线和热量均匀稳定,这样,双目相机拍摄得到的可见光图像和热红外图像清晰,也即图像质量很高,有利于提高双目相机的标定准确度。
接下来对本申请实施例提供的双目相机的立体标定方法进行详细的解释说明。
图5是本申请实施例提供的一种双目相机的立体标定方法的流程图。该方法应用于标定设备,可选地,该标定设备为双目相机或计算机设备。请参考图5,该方法包括如下步骤。
步骤501:对多对初始图像进行处理,得到成像规格统一的多对第一图像,该多对初始图像中的每对初始图像包括同一物体的可见光图像和热红外图像。
本申请实施例中的双目相机包括可见光相机和热红外相机,该双目相机为异源、异构的双目相机,可见光相机与热红外相机的成像规格不同,例如两台相机成像的分辨率、焦距等存在不同,这种情况下不能直接进行立体标定,需要将两台相机的成像规格进行统一。
可选地,在申请实施例中,成像规格统一包括分辨率统一,也即每对第一图像包括的可见光图像和热红外图像的分辨率统一。另外,由于是根据同一物体的可见光图像和热红外图像,来统一两台相机成像的分辨率的,可以想象对于两台相机拍摄同一物体来说,分辨率统一意味着两台相机的焦距也统一,也即是,本方案在统一两台相机的分辨率的同时,其实也统一了两台相机的焦距。
在本申请实施例中,通过对图像的裁剪缩放,将可见光相机和热红外相机的成像规格转换为一致,也即统一分辨率和焦距,基于裁剪缩放后的图像进行后续的立体标定。
其中,该多对初始图像中的每对初始图像包括同一物体的可见光图像和热红外图像,每对初始图像为可见光相机和热红外相机同时拍摄同一物体的图像。双目相机的拍摄对象可以为前述介绍的标定装置,也即初始图像为标定装置的图像。也即是,在标定装置所包括的补光补热装置启用的情况下,双目相机通过拍摄标定装置所包括的标定板以得到初始图像。
需要说明的是,热红外相机的原分辨率通常小于可见光相机的原分辨率,可见光相机的视场角大于热红外相机的视场角,在这种情况下,本申请实施例通过将热红外图像进行上采样,以放大热红外图像,将可见光图像进行裁剪, 以缩小可见光图像,使得两台相机的分辨率和焦距统一。在另一些实施例中,也可以通过将可见光图像进行裁剪和下采样,以缩小可见光图像,使得两台相机的分辨率和焦距统一。在其他一些实施例中,如果热红外相机的原分辨率大于可见光相机的原分辨率,热红外相机的视场角大于可见光相机的视场角,这种情况下,可以通过将热红外图像进行裁剪和下采样,或者,将可见光图像进行上采样,以及将热红外图像进行裁剪,使得两台相机的分辨率和焦距统一。
可选地,在本申请实施例中,以放大热红外图像和裁剪可见光图像,使得两台相机的分辨率和焦距统一为例进行介绍。对多对初始图像进行裁剪缩放处理,得到成像规格统一的多对第一图像的一种实现方式为:根据热红外相机对应的上采样倍数,对该多对初始图像中的热红外图像进行上采样,得到该多对第一图像中的热红外图像;根据可见光相机对应的裁剪区域,对该多对初始图像中的可见光图像进行裁剪,得到该多对第一图像中的可见光图像。
示例性地,根据热红外相机对应的上采样倍数,对该多对初始图像中的热红外图像进行上采样的一种实现方式为:将该多对初始图像中的热红外图像的像素横坐标和像素纵坐标均乘以上采样倍数,以扩大热红外图像,使得到的多对第二图像中的热红外图像的分辨率为两台相机统一后的分辨率。例如,上采样倍数为scale,W ir和H ir分别指热红外相机的原水平分辨率和原竖直分辨率,W=W ir×scale和H=H ir×scale分别表示统一后的水平分辨率和竖直分辨率。
示例性地,在本申请实施例中,裁剪区域用裁剪矩形范围参数表示,裁剪矩形范围参数如((Δw,Δw+W),(Δh,Δh+H)),其中,(Δw,Δw+W)和(Δh,Δh+H)用于确定裁剪所需保留的矩形区域。也即是,保留像素横坐标在Δw到Δw+W之间的像素点,以及保留像素纵坐标在Δh到Δh+H之间的像素点。
需要说明的是,在对多对初始图像进行裁剪缩放处理之前,需要确定裁剪缩放参数。接下来对本申请实施例提供的一种裁剪缩放参数的确定方法进行介绍。其中,裁剪缩放参数包括热红外相机对应的上采样倍数和可见光相机对应的裁剪区域。
在本申请实施例中,确定裁剪缩放参数的一种实现方式为:根据同一热源在一对或多对第二图像中的像素尺寸关系,确定热红外相机对应的上采样倍数;根据热红外相机的原分辨率和上采样倍数,以及同一热源在一对或多对第三图像中对应的像素坐标之间的差值关系,确定可见光相机对应的裁剪区域。
其中,该一对或多对第二图像中的每对第二图像包括同一热源的可见光图 像和热红外图像,该一对或多对第三图像中的每对第三图像包括同一热源的可见光图像和热红外图像。可选地,第二图像和第三图像中的热源不同。或者,第二图像和第三图像中的热源相同,也即第二图像和第三图像为拍摄同一热源得到的图像,该热源可以为前述介绍的标定装置。
需要说明的是,对于第三图像,使用可见光相机和热红外相机同时拍摄放置于远处的同一热源,得到一对或多对第三图像,该热源与双目相机之间的距离,应尽量使两台相机的视差较小,例如小于视差阈值,视差阈值尽量设置接近于零。本申请实施例中,也无需设置视差阈值,将热源放置较远处即可,例如距离双目相机20米、25米或30米等均可。对于第二图像,使用可见光相机和热红外相机同时拍摄一个固定长度或固定面积的热源,保证能够通过拍摄得到的图像计算出该热源在图像中的像素长度或像素面积即可,而不需要将热源放置在较远处。
这里解释将热源放置在远处以拍摄得到第三图像的原因:由于根据公式(1)所示的视差求距离公式(可以称为测距公式),在两台相机的基线距baseline一定、焦距f相同的情况下,若将热源放置在无穷远处,也即距离depth为无穷大时,两台相机的视差disp接近于零。而由于两台相机的像元尺寸不同,每对第二图像中同一热源对应的像素横坐标之间的差值的非零的,这样得到的视差是非零的。因此,为了统一两台相机的分辨率和焦距,在保证可见光相机和热红外相机能够拍摄到热源的情况下,尽量使热源放置在较远处,此时记录该热源在两台相机拍摄的一对图像中对应的像素坐标之间的差值关系。该差值关系表征了两台相机的像素坐标之间的偏置,在焦距一致,拍摄对象放置在远处的前提下,根据偏置来处理可见光图像,使得处理后的成对的可见光图像和热红外图像的像素坐标之间的差值为零,以消除两台相机之间的视差,对于纵坐标做类似处理,使得可见光图像与热红外图像的水平分辨率和竖直分辨率均一致。
Figure PCTCN2021139325-appb-000001
接下来介绍确定热红外相机对应的上采样倍数的一种实现方式。基于前述介绍,在本申请实施例中,计算同一热源在该一对或多对第二图像中的每对第二图像中的像素长度之间的比值,和/或,计算同一热源在一对或多对第二图像中的每对第二图像中的像素面积之间的比值,得到至少一个缩放比值。之后,根据该至少一个缩放比值,确定热红外相机对应的上采样倍数。简单来说,本 申请实施例基于同一物体在可见光图像和热红外图像中像素长度和/或像素面积的比值,来确定热红外图像对应的上采样倍数。其中,像素长度之间的比值,和/或,像素面积之间的比值,均能够用于表示可见光图像和热红外图像对的像素尺寸关系。
示例性地,若以基于像素长度为例,需要热源具有一定的长度,对于任一对第二图像,计算该热源在可见光图像中的像素长度比上该热源在热红外图像中的像素长度,得到该对第二图像对应的一个缩放比值,这样,对于一对或多对第二图像,即可对应得到一个或多个缩放比值。若以基于像素面积为例,需要热源具有一定的长度和宽度,对于任一对第二图像,计算该热源在可见光图像中的像素面积比上该热源在热红外图像中的像素面积,得到该对第二图像对应的一个缩放比值,这样,对于一对或多对第二图像,也可对应得到一个或多个缩放比值。若以基于像素长度和像素面积为例,也需要热源具有一定的长度和宽度,对于任一对第二图像,计算该热源在可见光图像中的像素长度比上该热源在热红外图像中的像素长度,得到该对第二图像对应的一个缩放比值,计算该热源在可见光图像中的像素面积比上该热源在热红外图像中的像素面积,也得到该对第二图像对应的一个缩放比值,这样,一对第二图像对应得到两个缩放比值,可选地,取这两个缩放比值的平均值,作为该对图像最终对应的一个缩放比值。
在基于像素长度和/或像素面积计算比值之后,如果对于全部的第二图像,最终得到一个缩放比值,将该缩放比值确定为上采样倍数。如果对于全部的第二图像,最终得到多个缩放比值,可以将该多个缩放比值的平均值确定为上采样倍数。
接下来介绍确定可见光相机对应的裁剪区域的一种实现方式。基于前述介绍,在本申请实施例中,根据热红外相机的分辨率和上采样倍数,以及同一热源在一对或多对第三图像中对应的像素坐标之间的差值关系,确定可见光相机对应的裁剪区域的一种实现方式为:对于该一对或多对第三图像中的任一对第三图像,分别计算同一热源在该对第三图像中的像素横坐标之间的差值和像素纵坐标之间的差值,得到该对第三图像对应的横坐标差值和纵坐标差值。根据该一对或多对第三图像对应的横坐标差值,得到可见光相机和热红外相机之间的横坐标偏置,根据该一对或多对第三图像对应的纵坐标差值,得到可见光相机与热红外相机之间的纵坐标偏置。之后,根据热红外相机的原水平分辨率和 上采样倍数,确定统一后的水平分辨率,根据热红外相机的原竖直分辨率和上采样倍数,确定统一后的竖直分辨率。然后,根据可见光相机与热红外相机之间的横坐标偏置和纵坐标偏置,以及统一后的水平分辨率和统一后的竖直分辨率,确定可见光相机对应的裁剪区域。其中,以横坐标差值和纵坐标差值来表示差值关系。
简单来说,分别基于同一热源在两台相机中像素横坐标的差值、像素纵坐标的差值,确定两台相机在像素横坐标方向的偏置、在像素纵坐标方向上的偏置,即确定横坐标偏置和纵坐标偏置。之后,基于横坐标偏置和纵坐标偏置,以及统一后的分辨率,确定裁剪区域,使得根据该裁剪区域裁剪后保留的可见光图像的分辨率即为统一后的分辨率。而保证分辨率一致,也意味着焦距也一致,这样对于远处的一个热源,在该焦距下,两台相机计算出来的视差接近于零。
示例性地,以基于一对第三图像确定裁剪区域为例,计算同一热源在该对第三图像中的像素横坐标之间的差值,得到一个横坐标差值Δw,将该横坐标差值Δw确定为两台相机之间的横坐标偏置。同样地,计算该热源在该对第三图像中的像素纵坐标之间的差值,得到一个纵坐标差值Δh,将该纵坐标差值Δh确定为两台相机之间的纵坐标偏置。假设统一后的水平分辨率和竖直分辨率分别为W和H,那么将((Δw,Δw+W),(Δh,Δh+H))确定为可见光相机对应的裁剪区域。
需要说明的是,由于拍摄得到第三图像时,热源距离双目相机的距离较远,因此,该热源在第三图像中可以视为一个点,该热源的像素坐标也即可以用图像中该热源所在点的像素坐标表示。如果该热源在第三图像中仍存在一定的长度或体积,那么可以用该热源上的一个标定点的像素坐标作为该热源的像素坐标。
以上介绍了对多对初始图像进行裁剪缩放处理,得到成像规格统一的多对第一图像的实现方式。其中,初始图像可以为标定装置的图像,由前述对标定装置的介绍可知,标定装置为单板标定装置或组合板标定装置,如果标定装置为单板标定装置,则初始图像为包括一个标定板的图像,第一图像也为包括一个标定板的图像,这种情况下可以继续执行步骤502。
如果标定装置为组合板标定装置,则一个初始图像为包括多个标定板的图像,也即初始图像为相机直接拍摄得到的组合板图像,对初始图像处理后得到的第一图像也为包括多个标定板的图像。这种情况下,得到多对第一图像之后, 需要将每对第一图像包括的可见光图像和热红外图像分别进行裁剪,得到多个位姿下的标定板的可见光和热红外图像对,裁剪得到的每对图像对应多个标定板中的一个标定板。这样,将裁剪后得到的多对图像作为裁剪后的多对第一图像,此时的第一图像为单板图像,基于裁剪后的多对第一图像继续执行步骤502。示例性地,如图4所示的组合板,裁剪一对第一图像可以得到4对可见光和热红外图像对,这4对图像分别对应组合板包括的4个标定板,将这4对图像作为裁剪后的4对第一图像,基于这4对第一图像继续执行步骤502。
或者,如果标定装置为组合板标定装置,将两台相机拍摄直接得到的图像作为多对原始图像,也即原始图像为组合板图像。先对多对原始图像进行裁剪,得到多对初始图像,也即初始图像是已经基于组合板的图像裁剪得到的单板图像,一个初始图像为包括单个标定板的图像,对初始图像处理后得到的第一图像也为包括单个标定板的图像,也即第一图像已经为单板图像,这种情况下继续执行步骤502。示例性地,如图4所示的组合板,裁剪一对原始图像可以得到4对可见光和热红外图像对,这4对图像分别对应组合板包括的4个标定板,这4对图像作为4对初始图像,对这4对初始图像进行处理,得到4对第一图像,基于这4对第一图像继续执行步骤502。
也即是,对于组合板标定装置来说,对于两台相机直接拍摄得到的组合板图像,可以先对组合板图像进行裁剪,得到单板图像,再对单板图像进行处理,得到第一图像。或者,对于两台相机直接拍摄得到的组合板图像,先对组合板图像进行处理,得到包括多个标定板图像的第一图像,再对得到的第一图像进行裁剪,得到包括单个标定板图像的第一图像。简单来说,对组合板图像进行裁剪得到单板图像的步骤,在执行步骤502之前完成即可。
可选地,组合板标定装置包括的组合板上标注有辅助线,基于辅助线将各个标定板划分在不同的区域,如图4所示的虚线为辅助线,虚线将4个标定板划分在了4个区域。基于此,对组合板图像进行裁剪得到单板图像的一种实现方式为:提取组合板图像中的辅助线,根据提取出的辅助线对组合板图像进行裁剪,得到单板图像。需要说明的是,本申请实施例以这种实现方式为例,来说明对组合板图像的裁剪,但这种实现方式并不限定本申请实施例。
在本申请实施例中,对可见光相机和热红外相机的成像规格校正统一的过程,理解为对相机模型进行统一的过程。其中,相机模型可以理解为相机的分辨率、焦距等结构。
步骤502:对该多对第一图像进行标定点提取,得到该多对第一图像中提取出的标定点的像素坐标。
在本申请实施例中,在得到多对第一图像之后,对该多对第一图像进行标定点提取,得到该多对第一图像中提取出的标定点的像素坐标。
需要说明的是,本申请实施例中提供的标定装置并不构成对本申请实施例提供的双目相机的立体标定方法的限定,在双目相机的立体标定方法中,也可以采用其他的标定装置作为双目相机的拍摄对象。通常标定装置上会有较为明显的标定点,标定板的图像中标定点的像素值(灰度值)与标定点周围的像素值存在较为明显的差距,基于此,本申请实施例提供了一种标定点提取的方法,接下来将对此进行介绍。
在本申请实施例中,对该多对第一图像进行标定点提取,得到该多对第一图像中提取出的标定点的像素坐标的一种实现方式为:基于二值化处理的方式,提取该多对第一图像中每个第一图像中的轮廓,对于该多对第一图像中的任一第一图像,根据该任一第一图像中的轮廓的像素坐标,确定该任一第一图像中提取出的标定点的像素坐标。其中,需要提取的轮廓是指第一图像中目标对象的轮廓,待提取的标定点即代表图像中的目标对象。
示例性地,对于任一第一图像,基于二值化阈值对该图像作二值化处理,提取该图像的轮廓,对于本申请实施例提供的标定装置来说,孔洞作为目标对象,孔洞的中心为待提取的标定点,提取的轮廓为该图像中标定板上孔洞的轮廓。提取出该图像中的轮廓之后,对于该图像中的任一轮廓,基于组成该轮廓的多个像素点的像素坐标,计算该轮廓的中心像素坐标,得到该轮廓对应的标定点的像素坐标,也即得到该图像中提取出的一个标定点的像素坐标。
为了提高提取的标定点的准确性,可以基于多个二值化阈值提取第一图像中的轮廓,对于一个标定点,基于多个二值化阈值能够提取出不止一个轮廓,这些轮廓的中心一致,将这些轮廓组成一个轮廓族,计算这个轮廓族的中心像素坐标,即可得到该轮廓族对应的一个标定点的像素坐标。
示例性地,设置二值化阈值的初始值和终止值,设置灰度步长,对于任一第一图像,基于灰度步长将二值化阈值从初始值更新到终止值,也即逐步长地遍历该图像,对该图像进行二值化处理,然后提取轮廓。提取出该图像中的轮廓之后,对于任一轮廓,获取该图像中其他轮廓的中心与该轮廓的中心之间的距离在距离阈值之内的轮廓,与该轮廓组成轮廓族,也即合并距离在距离阈值 之内的轮廓,形成轮廓族。得到该图像中的轮廓族之后,计算任一轮廓族的中心像素坐标,即可得到该轮廓族对应的一个标定点的像素坐标。其中,距离阈值可以为5个像素点,灰度步长可以为10,二值化阈值的初始值和终止值可以分别为100和200。当然,距离阈值、灰度步长、二值化阈值的初始值和终止值也可以设置为其他数值。
在本申请实施例中,该多对第一图像中的每个第一图像中待提取的标定点是等间距均匀分布的,那么理想情况下,任一第一图像中的轮廓或轮廓族也应该是等间距均匀分布的。基于此,为了更进一步提高提取出的标定点的准确性,还可以对图像中的轮廓或轮廓族进行筛选,将明显有偏差的轮廓或轮廓族剔除,基于剩余的轮廓或轮廓族得到提取出的标定点的像素坐标。接下来以对图像中的轮廓族进行筛选为例对此进行介绍。
对于任一第一图像,根据该任一第一图像中的轮廓的像素坐标,确定该任一第一图像中提取出的标定点的像素坐标的一种实现方式为:将该任一第一图像中相距在距离阈值之内,且数量不少于轮廓阈值的多个轮廓,组成轮廓族,得到该任一第一图像中的多个轮廓族。对于该任一第一图像中的任一轮廓族,从该任一第一图像中的多个轮廓族中,获取与该任一轮廓族距离最近的目标轮廓族,计算该任一轮廓族与目标轮廓族之间的像素横坐标差值和像素纵坐标差值,得到该任一轮廓族对应的像素横坐标差值和像素纵坐标差值。计算该任一第一图像中每个轮廓族对应的像素横坐标差值和像素纵坐标差值的和,得到该任一第一图像中相应轮廓族对应的标定间距。将该任一第一图像中对应的标定间距与参考间距之间的差距超过间距阈值的轮廓族剔除,将剩余的每个轮廓族的中心确定为提取出的一个标定点,将剩余的每个轮廓族的中心像素坐标确定为提取出的标定点的像素坐标。其中,像素横坐标差值和像素纵坐标差值不小于零,也即计算的是像素坐标之间差值的绝对值。
示例性地,对于任一第一图像,基于灰度步长th step更新二值化阈值,提取该图像中的轮廓,将该图像中相距在距离阈值之内的轮廓,形成候选轮廓族,也即合并距离在距离阈值th dist之内的轮廓,形成候选轮廓族。遍历该图像中的候选轮廓族,将族内轮廓数量少于轮廓阈值th num的候选轮廓族剔除,将剩余的候选轮廓族确定为该图像中的轮廓族。遍历该图像中的轮廓族,对于任一轮廓族(为了便于描述,后续将该轮廓族称为第一轮廓族),从该图像内的其他轮廓族中找出与第一轮廓族距离最近的目标轮廓族,计算这两个轮廓族的中心对应的 像素横坐标差值和像素纵坐标差值,得到第一轮廓族对应的像素横坐标差值Δu和像素纵坐标差值Δv,Δu和Δv为绝对值。
由于标定点是等间距均匀分布的,如果轮廓族是准确的,无论标定板是何种姿态,各个轮廓族对应的像素横坐标差值和像素纵坐标差值的和总是接近的。也即是,Δu i+Δv i与Δu j+Δv j总是接近的,其中,以i和j分别表示一个图像中的任意两个轮廓族。基于此,计算该图像中每个轮廓族对应的像素横坐标差值Δu和像素纵坐标差值Δv的和,得到相应轮廓族对应的标定间距Δu+Δv。之后,将该图像中对应的标定间距与参考间距之间的差距超过间距阈值的轮廓族剔除,也即是,将对应的标定间距明显有偏差的轮廓族剔除。剩余的每个轮廓族的中心即为最终提取出的一个标定点,剩余的每个轮廓族的中心像素坐标即为最终提取出的一个标定点的像素坐标。
其中,遍历该图像中的轮廓族,每次判断一个轮廓族是否需要剔除。可选地,参考间距为该图像中的轮廓族对应的标定间距的平均值。或者,参考间距为该图像中除第二轮廓族之外的其他轮廓族的标定间距的平均值,第二轮廓族为本次需要判断是否要剔除的那个轮廓族。或者,参考间距为该图像中剩余的还未判断是否要剔除的轮廓族的标定间距的平均值。或者,参考间距为该图像中除第二轮廓族之外的剩余还未判断是否要剔除的轮廓族的标定间距的平均值,第二轮廓族为本次需要判断是否要剔除的那个轮廓族。
步骤503:根据该多对第一图像中提取出的标定点的像素坐标,标定双目相机的外参,外参包括可见光相机和热红外相机之间的平移矩阵和旋转矩阵。
经上述对多对第一图像进行标定点提取后,得到该多对第一图像中每个第一图像中标定点的像素坐标。之后,根据该多对第一图像中提取出的标定点的像素坐标,来标定双目相机的外参。可选地,还能够根据该多对第一图像中提取出的标定点的像素坐标,标定双目相机的内参,也即标定可见光相机和热红外相机各自的内参,当然,也可以根据其他方法标定两台相机各自的内参,本申请实施例对此不作限定。
在本申请实施例中,已知标定点的世界坐标,以及标定点之间的位置关系,例如标定点是等间距均匀分布的,基于此,能够确定每个第一图像中提取出的标定点的像素坐标与世界坐标的映射关系。之后,根据提取出的标定点的像素坐标和世界坐标的映射关系,求解出双目相机的内参和外参。
示例性地,假设一个第一图像中提取出的全部标定点的像素坐标的集合为{(u i,v i)},其中以i取不同的值来表示全部标定点。假设规定一个标定板上左上角的标定点的世界坐标为(0,0,0),那么对于等间距均匀分布的标定点,该标定 板上其他标定点的世界坐标也能够确定。这样,可以得到每个标定点在相应第一图像中的像素坐标与世界坐标的映射关系
Figure PCTCN2021139325-appb-000002
根据这些数据即可以求解出双目相机的内参和外参。
在本申请实施例中,内参求解采用张氏标定法。在采用张氏标定法的过程中,使用LM(Levenberg-Marquardt)方法优化结果,对于可见光相机和热红外相机均得到准确的内参。同时对于每对第一图像,得到两台相机各自的平移矩阵和旋转矩阵。其中,对于每对第一图像,可见光相机的平移矩阵和旋转矩阵分别记为R i l和T i l,热红外相机的平移矩阵和旋转矩阵记分别为R i r和T i r,其中以i取不同值表示不同的第一图像对。在其他一些实施例中,内参求解也可以采用其他方法。需要说明的是,在本申请实施例中,以下标l和r分别表示可见光相机和热红外相机的相关数据。
在本申请实施例中,外参求解的过程包括:对于每对第一图像,获取两台相机各自的平移矩阵和旋转矩阵,计算R i=R i r×(R i l) -1和T i=T i r-R i×T i l。然后,对R i和T i(i∈多对第一图像)分别求均值得到R和T,其中,R为两台相机之间的旋转矩阵,T为两台相机之间的平移矩阵。至此,标定出双目相机的外参R和T。
步骤504:将外参包括的平移矩阵中表示沿光轴方向的平移分量作减小处理,得到调整后的平移矩阵,根据旋转矩阵和调整后的平移矩阵,标定可见光相机和热红外相机各自的旋转量。
在本申请实施例中,标定出双目相机的外参之后,为了将两台相机的图像校正到极线对齐的状态,还需要计算两台相机各自的旋转量。
示例性地,基于Bouguet算法求解两台相机各自的旋转量。首先介绍标准的Bouguet算法的原理。
在给定两台相机之间的旋转矩阵R和平移矩阵T的情况下,求解可见光相机和热红外相机各自的旋转分量r l和r r。其中,r l和r r满足公式(2),这样,两台相机各自对应的旋转角度分别为R的一半,可以理解为将R分解得到r l和r r,分解的原则是使得两台相机的图像重投影造成的畸变最小,两台相机的视图的共同面积最大。
Figure PCTCN2021139325-appb-000003
在基于公式(2)求解出旋转分量r l和r r之后,给定两台相机之间的平移矩 阵T=[T x,T y,T z] T,考虑T中沿竖直方向的平移分量T y和沿光轴方向的平移分量T z,构造
Figure PCTCN2021139325-appb-000004
e 3=e 1×e 2,求出旋转调整矩阵
Figure PCTCN2021139325-appb-000005
计算可见光相机的校正矩阵为R l'=R rect×r l,得到可见光相机的旋转量R l',计算热红外相机的校正矩阵为R r'=R rect×r r,得到可见光相机的旋转量R r'。
可以理解为,可见光相机的校正矩阵是对步骤503中内参计算中同时得出的可见光相机的旋转矩阵的校正,校正后即得到可见光相机的旋转量。热红外相机的校正矩阵是对步骤503中内参计算中同时得出的热红外相机的旋转矩阵的校正,校正后即得到热红外相机的旋转量。R l'和R r'分别为标定得到的可见光相机的旋转量和热红外相机的旋转量。
可选地,根据两台相机各自的内参和校正矩阵,能够求解两台相机各自的投影矩阵。示例性地,可见光相机的投影矩阵P l=R l'×M l,热红外相机的投影矩阵P r=R r'×M r,其中,M l为可见光相机的内参,M r为热红外相机的内参。
以上对标准的Bouguet算法进行了相关介绍,标准的Bouguet算法能够实现双目相机的图像达到极线对齐的状态,并且保留两台相机的视图的共同面积最大。而对于可见光相机和热红外相机而言,两台相机的物理结构存在很大差异,两台相机的平移矩阵T中沿光轴方向的平移分量T z可能很大,基于T z计算旋转调整矩阵R rect,也即将T z作用于R rect,进而得到R l'和R r',这样会迫使两台相机在立体校正时,绕竖直轴方向上产生较大的旋转角,导致可见光图像和热红外图像对中大部分面积因旋转而损失掉,造成图像不可用的情况。为了解决这个问题,本申请实施例提出了一种保留光心偏置的立体校正方法,接下来将对此进行介绍。
在本申请实施例中,在基于步骤503得到双目相机的外参之后,将外参包括的平移矩阵中表示沿光轴方向的平移分量作减小处理,得到调整后的平移矩阵,根据外参包括的旋转矩阵和调整后的平移矩阵,确定可见光相机和热红外相机各自的旋转量。
可选地,将外参包括的平移矩阵中表示沿光轴方向的平移分量作减小处理的一种实现方式为:将外参包括的平移矩阵中表示沿光轴方向的平移分量置为零。当然,减小处理可以是指减小到比原值小,且不小于零的一个数值。
示例性地,在采用Bouguet算法计算旋转调整矩阵的过程中,将T中平移分 量T z置为零,之后计算旋转调整矩阵R rect,进而计算得到R l'和R r'。这样,在立体校正过程中不对沿光轴方向的偏移进行处理,校正后的图像可以保留更大的面积。
在本申请实施例中,将双目相机应用于计算图像深度时,是基于视差求距离公式进行的,而前述公式(1)所介绍的视差求距离公式的前提是,立体校正时考虑基于平移分量T z对沿光轴方向的偏移进行处理,而若在立体校正过程中未对沿光轴方向的偏移进行处理,也即若在立体标定时将T z置为零,那么需要重新确定视差求距离公式,也即重新确定视差与被拍摄物距离的映射关系。
也即是,在采用本申请实施例提供的保留光心偏置的立体校正方法的前提下,在标定出双目相机的内参和外参之后,根据标定的外参包括的平移矩阵,确定双目相机对应的视差与被拍摄物距离的映射关系。也即是,重新推导视差求距离公式,以重新推导出的视差求距离公式来表征视差与被拍摄物距离的映射关系。接下来将对重新推导视差求距离公式的过程进行介绍。
设有一个被测点(被拍摄物)A,被测点A在一台相机的相机坐标系下的坐标为(X,Y,Z),其中,Z即为被测点A到该相机的距离depth。根据小孔成像原理,被测点A在该相机中成像的像素横坐标公式为
Figure PCTCN2021139325-appb-000006
像素纵坐标公式为
Figure PCTCN2021139325-appb-000007
其中,f为该相机的焦距。
由于此时双目相机已经经过立体标定和校正,则此时被测点A在两台相机中的成像点的像素纵坐标是相等的,仅有横坐标不同,假设被测点A在可见光相机和热红外相机中的像素横坐标分别是u l和u r。另外,校正后两台相机的焦距f是相同的,光心也是相等的,设光心的坐标为(u 0,v 0)。再设标定出来的两台相机的平移矩阵为(T x,T y,T z) T,方向为从左相机(可见光相机)到右相机(热红外相机)。
如果未采用本申请实施例提供的保留光心偏置的立体校正方法,那么对于标定出来的相机,u l和u r的计算公式为
Figure PCTCN2021139325-appb-000008
两式相减,得到视差
Figure PCTCN2021139325-appb-000009
所以
Figure PCTCN2021139325-appb-000010
如果采用本申请实施例提供的保留光心偏置的立体校正方法,那么需要考虑 T z分量推导视差求距离公式,将u l和u r的计算公式调整为
Figure PCTCN2021139325-appb-000011
两式都把X移到等号的一边,得到
Figure PCTCN2021139325-appb-000012
然后移项和合并,得到(u l-u r)×Z=(u r-u 0)×T z-T x×f,所以有
Figure PCTCN2021139325-appb-000013
其中,u l-u r表示两台相机的视差disp,Z表示被拍摄物的距离depth。
接下来参照图6对本申请实施例提供的双目相机的立体标定方法再次进行简单的解释说明。图6是本申请实施例提供的另一种双目相机的立体标定方法的流程图,参见图6,输入标定图像(如前述实施例中的多对初始图像),对标定图像进行成像规格统一处理,也即进行相机模型统一。可选地,如果标定图像是组合板图像,需要对标定图像进行裁剪,得到单板图像。之后,对单板图像进行标定点提取,得到提取出的标定点的像素坐标,根据提取出的标定点的像素坐标,进行双目相机的内参和外参计算。之后,进行保留光心偏置的立体校正,确定两台相机各自的旋转量。这样,使用标定后的两台相机即可用于图像的立体校正,也即得到校正图像。
在本申请实施例中,使用立体标定后的双目相机来拍摄图像,立体标定后的双目相机可以用于需要计算图像深度的场景中,例如用于拍摄物体的距离测量、温度测量等场景中,距离测量和温度测量的准确度都能够很高。
综上所述,本申请实施例提供了一种双目相机的立体标定方法,先统一可见光相机和热红外相机的成像规格,这样在两台相机的成像规格统一的前提下进行后续的立体标定,立体标定能够准确有效。另外,在本方案中还考虑到可见光相机和热红外相机的物理结构存在较大差异,所以对沿光轴方向的平移分量作减小处理,在此基础上,确定两台相机各自的旋转量。这样,基于两台相机的旋转量旋转校正图像后,能够保留更多的图像,保证了图像的可用性,也即保证了立体标定的可靠性。本方法在结合本申请实施例提供的标定装置的情况下,立体标定的精度更高。
上述所有可选技术方案,均可按照任意结合形成本申请的可选实施例,本申请实施例对此不再一一赘述。
图7是本申请实施例提供的一种双目相机的立体标定装置的结构示意图,该双目相机的立体标定装置700可以由软件、硬件或者两者的结合实现成为计算机设备的部分或者全部。请参考图7,该装置700包括:规格统一模块701、标定点提取模块702、外参标定模块703和立体校正模块704。
规格统一模块701,用于对多对初始图像进行处理,得到成像规格统一的多对第一图像,该多对初始图像中的每对初始图像包括同一物体的可见光图像和热红外图像;
标定点提取模块702,用于对该多对第一图像进行标定点提取,得到该多对第一图像中提取出的标定点的像素坐标;
外参标定模块703,用于根据该多对第一图像中提取出的标定点的像素坐标,标定该双目相机的外参,外参包括该可见光相机和该热红外相机之间的平移矩阵和旋转矩阵;
立体校正模块704,用于将外参包括的平移矩阵中表示沿光轴方向的平移分量作减小处理,得到调整后的平移矩阵,根据该旋转矩阵和调整后的平移矩阵,标定该可见光相机和该热红外相机各自的旋转量。
可选地,标定点提取模块702包括:
轮廓提取单元,用于基于二值化处理的方式,提取该多对第一图像中每个第一图像中的轮廓;
标定点确定单元,用于对于该多对第一图像中的任一第一图像,根据该任一第一图像中的轮廓的像素坐标,确定该任一第一图像中提取出的标定点的像素坐标。
可选地,该多对第一图像中的每个第一图像中待提取的标定点是等间距均匀分布的;
标定点确定单元包括:
第一处理子单元,用于将该任一第一图像中相距在距离阈值之内,且数量不少于轮廓阈值的多个轮廓,组成轮廓族,得到该任一第一图像中的多个轮廓族;
第二处理子单元,用于对于该任一第一图像中的任一轮廓族,从该任一第一图像中的多个轮廓族中,获取与该任一轮廓族距离最近的目标轮廓族,计算该任一轮廓族的中心与该目标轮廓族的中心之间的像素横坐标差值和像素纵坐标差值,得到该任一轮廓族对应的像素横坐标差值和像素纵坐标差值,该像素 横坐标差值和该像素纵坐标差值不小于零;
第三处理子单元,用于计算该任一第一图像中每个轮廓族对应的像素横坐标差值和像素纵坐标差值的和,得到该任一第一图像中相应轮廓族对应的标定间距;
第四处理子单元,用于将该任一第一图像中对应的标定间距与参考间距之间的差距超过间距阈值的轮廓族剔除,将剩余的每个轮廓族的中心确定为提取出的一个标定点,将剩余的每个轮廓族的中心像素坐标确定为提取出的一个标定点的像素坐标。
可选地,该多对第一图像中的每对第一图像包括的可见光图像和热红外图像的分辨率统一;
规格统一模块包括:
缩放单元,用于根据该热红外相机对应的上采样倍数,对该多对初始图像中的热红外图像进行上采样,得到该多对第一图像中的热红外图像;
裁剪单元,用于根据该可见光相机对应的裁剪区域,对该多对初始图像中的可见光图像进行裁剪,得到该多对第一图像中的可见光图像。
可选地,该装置700还包括:
缩放参数确定模块,用于根据同一热源在一对或多对第二图像中的像素尺寸关系,确定该热红外相机对应的上采样倍数,该一对或多对第二图像中的每对第二图像包括同一热源的可见光图像和热红外图像;
裁剪区域确定模块,用于根据该热红外相机的原分辨率和该上采样倍数,以及同一热源在一对或多对第三图像中对应的像素坐标之间的差值关系,确定该可见光相机对应的裁剪区域,该一对或多对第三图像中的每对第三图像包括同一热源的可见光图像和热红外图像。
可选地,缩放参数确定模块包括:
比值计算单元,用于计算同一热源在该一对或多对第二图像中的每对第二图像中的像素长度之间的比值,和/或,计算同一热源在该一对或多对第二图像中的每对第二图像中的像素面积之间的比值,得到至少一个缩放比值;
参数确定单元,用于根据该至少一个缩放比值,确定该热红外相机对应的上采样倍数。
可选地,裁剪区域确定模块包括:
坐标差值计算单元,用于对于该一对或多对第三图像中的任一对第三图像, 分别计算同一热源在该任一对第三图像中的像素横坐标之间的差值和像素纵坐标之间的差值,得到该任一对第三图像对应的横坐标差值和纵坐标差值;
偏置确定单元,用于根据该一对或多对第三图像对应的横坐标差值,得到该可见光相机和该热红外相机之间的横坐标偏置,根据该一对或多对第三图像对应的纵坐标差值,得到该可见光相机与该热红外相机之间的纵坐标偏置;
分辨率确定单元,用于根据该热红外相机的原水平分辨率和该上采样倍数,确定统一后的水平分辨率,根据该热红外相机的原竖直分辨率和该上采样倍数,确定统一后的竖直分辨率;
裁剪区域确定单元,用于根据该可见光相机与该热红外相机之间的横坐标偏置和纵坐标偏置,以及统一后的水平分辨率和统一后的竖直分辨率,确定该可见光相机对应的裁剪区域。
综上所述,本申请实施例提供了一种双目相机的立体标定方法,先将可见光相机和热红外相机的成像规格校正统一,这样在两台相机的成像规格统一的前提下进行后续的立体标定,立体标定能够准确有效。另外,在本方案中还考虑到可见光相机和热红外相机的物理结构存在较大差异,所以对沿光轴方向的平移分量作减小处理,在此基础上,确定两台相机各自的旋转量。这样,基于两台相机的旋转量旋转校正图像后,能够保留更多的图像,保证了图像的可用性,也即保证了立体标定的可靠性。本方法在结合本申请实施例提供的标定装置的情况下,立体标定的精度更高。
需要说明的是:上述实施例提供的双目相机的立体标定装置在对双目相机进行立体标定时,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将装置的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。另外,上述实施例提供的双目相机的立体标定装置与双目相机的立体标定方法实施例属于同一构思,其具体实现过程详见方法实施例,这里不再赘述。
图8是本申请实施例提供的一种计算机设备的结构示意图。该计算机设备用于对双目相机进行立体标定,也即该计算机设备用于实现前述实施例中的双目相机的立体标定方法。具体来讲:
计算机设备800包括中央处理单元(CPU)801、包括随机存取存储器(RAM)802和只读存储器(ROM)803的系统存储器804,以及连接系统存储器804和 中央处理单元801的系统总线805。计算机设备800还包括帮助计算机内的各个器件之间传输信息的基本输入/输出系统(I/O系统)806,和用于存储操作系统813、应用程序814和其他程序模块815的大容量存储设备807。
基本输入/输出系统806包括有用于显示信息的显示器808和用于用户输入信息的诸如鼠标、键盘之类的输入设备809。其中显示器808和输入设备809都通过连接到系统总线805的输入输出控制器810连接到中央处理单元801。基本输入/输出系统806还可以包括输入输出控制器810以用于接收和处理来自键盘、鼠标、或电子触控笔等多个其他设备的输入。类似地,输入输出控制器810还提供输出到显示屏、打印机或其他类型的输出设备。
大容量存储设备807通过连接到系统总线805的大容量存储控制器(未示出)连接到中央处理单元801。大容量存储设备807及其相关联的计算机可读介质为计算机设备800提供非易失性存储。也就是说,大容量存储设备807可以包括诸如硬盘或者CD-ROM驱动器之类的计算机可读介质(未示出)。
不失一般性,计算机可读介质可以包括计算机存储介质和通信介质。计算机存储介质包括以用于存储诸如计算机可读指令、数据结构、程序模块或其他数据等信息的任何方法或技术实现的易失性和非易失性、可移动和不可移动介质。计算机存储介质包括RAM、ROM、EPROM、EEPROM、闪存或其他固态存储其技术,CD-ROM、DVD或其他光学存储、磁带盒、磁带、磁盘存储或其他磁性存储设备。当然,本领域技术人员可知计算机存储介质不局限于上述几种。上述的系统存储器804和大容量存储设备807可以统称为存储器。
根据本申请的各种实施例,计算机设备800还可以通过诸如因特网等网络连接到网络上的远程计算机运行。也即计算机设备800可以通过连接在系统总线805上的网络接口单元811连接到网络812,或者说,也可以使用网络接口单元811来连接到其他类型的网络或远程计算机系统(未示出)。
上述存储器还包括一个或者一个以上的程序,一个或者一个以上程序存储于存储器中,被配置由CPU执行。所述一个或者一个以上程序包含用于进行本申请实施例提供的双目相机的立体标定方法的指令。
在一些实施例中,还提供了一种计算机可读存储介质,该存储介质内存储有计算机程序,所述计算机程序被处理器执行时实现上述实施例中双目相机的立体标定方法的步骤。例如,所述计算机可读存储介质可以是ROM、RAM、 CD-ROM、磁带、软盘和光数据存储设备等。
值得注意的是,本申请实施例提到的计算机可读存储介质可以为非易失性存储介质,换句话说,可以是非瞬时性存储介质。
应当理解的是,实现上述实施例的全部或部分步骤可以通过软件、硬件、固件或者其任意结合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。所述计算机指令可以存储在上述计算机可读存储介质中。
也即是,在一些实施例中,还提供了一种包含指令的计算机程序产品,当其在计算机上运行时,使得计算机执行上述所述的双目相机的立体标定方法的步骤。
应当理解的是,本文提及的“至少一个”是指一个或多个,“多个”是指两个或两个以上。在本申请实施例的描述中,除非另有说明,“/”表示或的意思,例如,A/B可以表示A或B;本文中的“和/或”仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。另外,为了便于清楚描述本申请实施例的技术方案,在本申请的实施例中,采用了“第一”、“第二”等字样对功能和作用基本相同的相同项或相似项进行区分。本领域技术人员可以理解“第一”、“第二”等字样并不对数量和执行次序进行限定,并且“第一”、“第二”等字样也并不限定一定不同。
以上所述为本申请提供的实施例,并不用以限制本申请,凡在本申请的精神和原则之内,所作的任何修改、等同替换、改进等,均应包含在本申请的保护范围之内。

Claims (14)

  1. A stereo calibration method for a binocular camera, wherein the binocular camera comprises a visible light camera and a thermal infrared camera, and the method comprises:
    processing a plurality of pairs of initial images to obtain a plurality of pairs of first images with unified imaging specifications, wherein each pair of initial images in the plurality of pairs of initial images comprises a visible light image and a thermal infrared image of a same object;
    performing calibration point extraction on the plurality of pairs of first images to obtain pixel coordinates of calibration points extracted from the plurality of pairs of first images;
    calibrating extrinsic parameters of the binocular camera according to the pixel coordinates of the calibration points extracted from the plurality of pairs of first images, wherein the extrinsic parameters comprise a translation matrix and a rotation matrix between the visible light camera and the thermal infrared camera;
    reducing a translation component, representing translation along an optical axis direction, in the translation matrix comprised in the extrinsic parameters to obtain an adjusted translation matrix, and calibrating respective rotation amounts of the visible light camera and the thermal infrared camera according to the rotation matrix and the adjusted translation matrix.
  2. The method according to claim 1, wherein the performing calibration point extraction on the plurality of pairs of first images to obtain the pixel coordinates of the calibration points extracted from the plurality of pairs of first images comprises:
    extracting contours in each first image of the plurality of pairs of first images based on binarization processing;
    for any first image in the plurality of pairs of first images, determining the pixel coordinates of the calibration points extracted from the first image according to pixel coordinates of the contours in the first image.
  3. The method according to claim 2, wherein the calibration points to be extracted in each first image of the plurality of pairs of first images are uniformly distributed at equal intervals;
    the determining the pixel coordinates of the calibration points extracted from the first image according to the pixel coordinates of the contours in the first image comprises:
    grouping a plurality of contours in the first image that are within a distance threshold of one another and whose number is not less than a contour threshold into a contour family, to obtain a plurality of contour families in the first image;
    for any contour family in the first image, obtaining, from the plurality of contour families in the first image, a target contour family closest to the contour family, and calculating a pixel abscissa difference and a pixel ordinate difference between a center of the contour family and a center of the target contour family, to obtain the pixel abscissa difference and the pixel ordinate difference corresponding to the contour family, wherein the pixel abscissa difference and the pixel ordinate difference are not less than zero;
    calculating a sum of the pixel abscissa difference and the pixel ordinate difference corresponding to each contour family in the first image, to obtain a calibration spacing corresponding to the respective contour family in the first image;
    removing, from the first image, contour families whose corresponding calibration spacing differs from a reference spacing by more than a spacing threshold, determining a center of each remaining contour family as one extracted calibration point, and determining center pixel coordinates of each remaining contour family as pixel coordinates of one extracted calibration point.
  4. The method according to any one of claims 1-3, wherein the visible light image and the thermal infrared image comprised in each pair of first images of the plurality of pairs of first images have a unified resolution;
    the processing the plurality of pairs of initial images to obtain the plurality of pairs of first images with unified imaging specifications comprises:
    upsampling the thermal infrared images in the plurality of pairs of initial images according to an upsampling factor corresponding to the thermal infrared camera, to obtain the thermal infrared images in the plurality of pairs of first images;
    cropping the visible light images in the plurality of pairs of initial images according to a cropping region corresponding to the visible light camera, to obtain the visible light images in the plurality of pairs of first images.
  5. The method according to claim 4, wherein before the processing the plurality of pairs of initial images to obtain the plurality of pairs of first images with unified imaging specifications, the method further comprises:
    determining the upsampling factor corresponding to the thermal infrared camera according to a pixel size relationship of a same heat source in one or more pairs of second images, wherein each pair of second images in the one or more pairs of second images comprises a visible light image and a thermal infrared image of the same heat source;
    determining the cropping region corresponding to the visible light camera according to an original resolution of the thermal infrared camera, the upsampling factor, and a difference relationship between pixel coordinates corresponding to a same heat source in one or more pairs of third images, wherein each pair of third images in the one or more pairs of third images comprises a visible light image and a thermal infrared image of the same heat source.
  6. The method according to claim 5, wherein the determining the upsampling factor corresponding to the thermal infrared camera according to the pixel size relationship of the same heat source in the one or more pairs of second images comprises:
    calculating a ratio between pixel lengths of the same heat source in each pair of second images of the one or more pairs of second images, and/or calculating a ratio between pixel areas of the same heat source in each pair of second images of the one or more pairs of second images, to obtain at least one scaling ratio;
    determining the upsampling factor corresponding to the thermal infrared camera according to the at least one scaling ratio.
  7. The method according to claim 5, wherein the determining the cropping region corresponding to the visible light camera according to the resolution of the thermal infrared camera, the upsampling factor, and the difference relationship between the pixel coordinates corresponding to the same heat source in the one or more pairs of third images comprises:
    for any pair of third images in the one or more pairs of third images, calculating a difference between pixel abscissas and a difference between pixel ordinates of the same heat source in the pair of third images, to obtain an abscissa difference and an ordinate difference corresponding to the pair of third images;
    obtaining an abscissa offset between the visible light camera and the thermal infrared camera according to the abscissa differences corresponding to the one or more pairs of third images, and obtaining an ordinate offset between the visible light camera and the thermal infrared camera according to the ordinate differences corresponding to the one or more pairs of third images;
    determining a unified horizontal resolution according to an original horizontal resolution of the thermal infrared camera and the upsampling factor, and determining a unified vertical resolution according to an original vertical resolution of the thermal infrared camera and the upsampling factor;
    determining the cropping region corresponding to the visible light camera according to the abscissa offset and the ordinate offset between the visible light camera and the thermal infrared camera, and the unified horizontal resolution and the unified vertical resolution.
  8. A stereo calibration apparatus for a binocular camera, wherein the binocular camera comprises a visible light camera and a thermal infrared camera, and the apparatus comprises:
    a specification unification module, configured to process a plurality of pairs of initial images to obtain a plurality of pairs of first images with unified imaging specifications, wherein each pair of initial images in the plurality of pairs of initial images comprises a visible light image and a thermal infrared image of a same object;
    a calibration point extraction module, configured to perform calibration point extraction on the plurality of pairs of first images to obtain pixel coordinates of calibration points extracted from the plurality of pairs of first images;
    an extrinsic parameter calibration module, configured to calibrate extrinsic parameters of the binocular camera according to the pixel coordinates of the calibration points extracted from the plurality of pairs of first images, wherein the extrinsic parameters comprise a translation matrix and a rotation matrix between the visible light camera and the thermal infrared camera;
    a stereo rectification module, configured to reduce a translation component, representing translation along an optical axis direction, in the translation matrix comprised in the extrinsic parameters to obtain an adjusted translation matrix, and calibrate respective rotation amounts of the visible light camera and the thermal infrared camera according to the rotation matrix and the adjusted translation matrix.
  9. A calibration device, wherein the calibration device is configured to implement image acquisition in the stereo calibration method for a binocular camera according to any one of claims 1-7;
    the calibration device comprises a calibration board and a light and heat supplementing apparatus;
    the calibration board is a metal board on which holes are uniformly distributed at equal intervals, and hole walls of the holes have an inclination angle;
    the light and heat supplementing apparatus comprises a light supplementing device, a reflecting board and a heat supplementing device; the light supplementing device is fixed on a back of the calibration board, the reflecting board is fixed at a position spaced from the back of the calibration board by a heat dissipation distance, and the heat supplementing device is fixed on a back of the reflecting board;
    the light supplementing device is configured to emit light toward the reflecting board, the heat supplementing device is configured to emit heat, a reflecting surface of the reflecting board is a diffuse reflecting surface, and the reflecting board is configured to reflect the light toward the calibration board through the diffuse reflecting surface and to transfer heat to the calibration board.
  10. The calibration device according to claim 9, wherein the calibration device is a single-board calibration device, and the single-board calibration device comprises one calibration board and one set of the light and heat supplementing apparatus; or,
    the calibration device is a first combined-board calibration device, the first combined-board calibration device comprises a plurality of calibration boards and a plurality of sets of light and heat supplementing apparatuses, the plurality of calibration boards correspond to the plurality of sets of light and heat supplementing apparatuses one to one, and poses of the plurality of calibration boards are different; or,
    the calibration device is a second combined-board calibration device, the second combined-board calibration device comprises the plurality of calibration boards and one set of the light and heat supplementing apparatus, and the poses of the plurality of calibration boards are different.
  11. A binocular camera system, wherein the binocular camera system comprises a binocular camera and a processor;
    the binocular camera comprises a visible light camera and a thermal infrared camera, and a visible light image and a thermal infrared image obtained by the visible light camera and the thermal infrared camera capturing a same object serve as a pair of initial images;
    the processor is configured to process a plurality of pairs of initial images to obtain a plurality of pairs of first images with unified imaging specifications;
    the processor is further configured to perform calibration point extraction on the plurality of pairs of first images to obtain pixel coordinates of calibration points extracted from the plurality of pairs of first images;
    the processor is further configured to calibrate extrinsic parameters of the binocular camera according to the pixel coordinates of the calibration points extracted from the plurality of pairs of first images, wherein the extrinsic parameters comprise a translation matrix and a rotation matrix between the visible light camera and the thermal infrared camera;
    the processor is further configured to reduce a translation component, representing translation along an optical axis direction, in the translation matrix comprised in the extrinsic parameters to obtain an adjusted translation matrix, and calibrate respective rotation amounts of the visible light camera and the thermal infrared camera according to the rotation matrix and the adjusted translation matrix.
  12. A binocular camera, wherein the binocular camera comprises a visible light camera and a thermal infrared camera, and the visible light camera and the thermal infrared camera are stereo calibrated according to the method of any one of claims 1-7.
  13. A computer-readable storage medium, wherein a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, steps of the method according to any one of claims 1-7 are implemented.
  14. A computer program product, wherein the computer program product contains computer instructions, and when the computer instructions are run on a computer, steps of the method according to any one of claims 1-7 are implemented.
PCT/CN2021/139325 2020-12-18 2021-12-17 双目相机的立体标定方法、装置、系统及双目相机 WO2022127918A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011510044.0 2020-12-18
CN202011510044.0A CN112634374B (zh) 2020-12-18 2020-12-18 双目相机的立体标定方法、装置、系统及双目相机

Publications (1)

Publication Number Publication Date
WO2022127918A1 true WO2022127918A1 (zh) 2022-06-23

Family

ID=75317965

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/139325 WO2022127918A1 (zh) 2020-12-18 2021-12-17 双目相机的立体标定方法、装置、系统及双目相机

Country Status (2)

Country Link
CN (1) CN112634374B (zh)
WO (1) WO2022127918A1 (zh)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112634374B (zh) * 2020-12-18 2023-07-14 杭州海康威视数字技术股份有限公司 双目相机的立体标定方法、装置、系统及双目相机
CN113240749B (zh) * 2021-05-10 2024-03-29 南京航空航天大学 一种面向海上舰船平台无人机回收的远距离双目标定与测距方法
CN113470116B (zh) * 2021-06-16 2023-09-01 杭州海康威视数字技术股份有限公司 对摄像装置标定数据的验证方法、装置、设备及存储介质
CN113393383B (zh) * 2021-08-17 2021-11-16 常州市新创智能科技有限公司 一种双深度相机拍照图像的拼接方法
CN113763573B (zh) * 2021-09-17 2023-07-11 北京京航计算通讯研究所 一种三维物体数字化标注方法及装置
CN113808220A (zh) * 2021-09-24 2021-12-17 上海闻泰电子科技有限公司 双目摄像机的标定方法、系统、电子设备和存储介质
CN116503492B (zh) * 2023-06-27 2024-06-14 北京鉴智机器人科技有限公司 自动驾驶系统中双目相机模组标定方法及标定装置

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101806045B1 (ko) * 2016-10-17 2017-12-07 한국기초과학지원연구원 적외선 및 가시광 카메라의 실시간 이미지 합성 장치 및 그 제어 방법

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104374547A (zh) * 2014-11-17 2015-02-25 国家电网公司 可见光相机与红外热像仪相机参数联合标定的方法及装置
WO2018086348A1 (zh) * 2016-11-09 2018-05-17 人加智能机器人技术(北京)有限公司 双目立体视觉系统及深度测量方法
CN110969669A (zh) * 2019-11-22 2020-04-07 大连理工大学 基于互信息配准的可见光与红外相机联合标定方法
CN110969670A (zh) * 2019-11-22 2020-04-07 大连理工大学 基于显著特征的多光谱相机动态立体标定算法
CN112634374A (zh) * 2020-12-18 2021-04-09 杭州海康威视数字技术股份有限公司 双目相机的立体标定方法、装置、系统及双目相机

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024087982A1 (zh) * 2022-10-28 2024-05-02 华为技术有限公司 一种图像处理方法及电子设备
WO2024146026A1 (zh) * 2023-01-03 2024-07-11 宁德时代新能源科技股份有限公司 相机像素标定装置、卷绕设备
CN116091475A (zh) * 2023-02-17 2023-05-09 七海行(深圳)科技有限公司 一种喷洒效果的确认方法及装置
CN116091488A (zh) * 2023-03-07 2023-05-09 西安航天动力研究所 一种发动机摇摆试验的位移测试方法及位移测试系统
CN116091488B (zh) * 2023-03-07 2023-07-14 西安航天动力研究所 一种发动机摇摆试验的位移测试方法及位移测试系统
CN117061719A (zh) * 2023-08-11 2023-11-14 元橡科技(北京)有限公司 一种车载双目相机视差校正方法
CN117061719B (zh) * 2023-08-11 2024-03-08 元橡科技(北京)有限公司 一种车载双目相机视差校正方法

Also Published As

Publication number Publication date
CN112634374B (zh) 2023-07-14
CN112634374A (zh) 2021-04-09

Similar Documents

Publication Publication Date Title
WO2022127918A1 (zh) 双目相机的立体标定方法、装置、系统及双目相机
US10609282B2 (en) Wide-area image acquiring method and apparatus
WO2023045147A1 (zh) 双目摄像机的标定方法、系统、电子设备和存储介质
CN110809786B (zh) 校准装置、校准图表、图表图案生成装置和校准方法
WO2019100933A1 (zh) 用于三维测量的方法、装置以及系统
US8867827B2 (en) Systems and methods for 2D image and spatial data capture for 3D stereo imaging
CN101630406B (zh) 摄像机的标定方法及摄像机标定装置
US8447099B2 (en) Forming 3D models using two images
US11282232B2 (en) Camera calibration using depth data
WO2021139176A1 (zh) 基于双目摄像机标定的行人轨迹跟踪方法、装置、计算机设备及存储介质
WO2017076106A1 (zh) 图像的拼接方法和装置
CN112150528A (zh) 一种深度图像获取方法及终端、计算机可读存储介质
KR101903619B1 (ko) 구조화된 스테레오
WO2010028559A1 (zh) 图像拼接方法及装置
WO2020119467A1 (zh) 高精度稠密深度图像的生成方法和装置
CN109656033B (zh) 一种区分液晶显示屏灰尘和缺陷的方法及装置
CN111862224A (zh) 确定相机与激光雷达之间外参的方法和装置
WO2022142139A1 (zh) 投影面选取和投影图像校正方法、装置、投影仪及介质
WO2022100668A1 (zh) 温度测量方法、装置、系统、存储介质及程序产品
WO2022218161A1 (zh) 用于目标匹配的方法、装置、设备及存储介质
KR20240089161A (ko) 촬영 측정 방법, 장치, 기기 및 저장 매체
CN114299156A (zh) 无重叠区域下多相机的标定与坐标统一方法
JP7489253B2 (ja) デプスマップ生成装置及びそのプログラム、並びに、デプスマップ生成システム
DK3189493T3 (en) PERSPECTIVE CORRECTION OF DIGITAL PHOTOS USING DEPTH MAP
CN112970044A (zh) 根据广角图像的视差估计

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21905841

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21905841

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 23.01.2024)
