
CN114693760A - Image correction method, device and system and electronic equipment - Google Patents

Info

Publication number
CN114693760A
Authority
CN
China
Prior art keywords
image
correction
target
pair
alignment
Prior art date
Legal status
Pending
Application number
CN202011567624.3A
Other languages
Chinese (zh)
Inventor
刘梦晗
郁理
凤维刚
王进
Current Assignee
Rainbow Software Co ltd
Original Assignee
Rainbow Software Co ltd
Application filed by Rainbow Software Co ltd
Priority: CN202011567624.3A (publication CN114693760A)
Priority: PCT/CN2021/141355 (publication WO2022135588A1)
Priority: KR1020237021758A (publication KR20230110618A)
Publication of CN114693760A
Legal status: Pending

Classifications

    • G06T 7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/55: Depth or shape recovery from multiple images
    • G06F 18/22: Matching criteria, e.g. proximity measures
    • G06T 5/80: Geometric correction
    • G06T 7/97: Determining parameters from multiple pictures
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V 10/74: Image or video pattern matching; proximity measures in feature spaces
    • G06T 2207/20016: Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; pyramid transform


Abstract

The invention discloses an image correction method, an image correction apparatus, an image correction system, and an electronic device. The image correction method comprises the following steps: acquiring a visible light image and a depth image captured of a target object, and forming a basic image pair after transformation, wherein the basic image pair comprises a first image and a second image; performing correction processing on the basic image pair in a preset correction mode to obtain a plurality of correction parameters; and performing alignment correction on the basic image pair based on each correction parameter to obtain a target image pair. The invention solves the technical problems in the related art that dynamic correction between two different cameras cannot be achieved, adaptability to the environment is low, the image alignment effect is poor, and the user experience is easily degraded.

Description

Image correction method, device and system and electronic equipment
Technical Field
The invention relates to the technical field of image processing, in particular to an image correction method, an image correction device, an image correction system and electronic equipment.
Background
In the related art, with the continuous improvement of hardware technology, depth imaging has become increasingly accurate and applications based on depth information have developed rapidly. Current depth imaging approaches mainly fall into three categories: binocular stereo imaging, structured light imaging, and time-of-flight (ToF) imaging.
Binocular stereo imaging uses two RGB cameras that acquire images simultaneously; depth information is then obtained by triangulation after binocular matching. It offers low cost, low power consumption, and relatively high image resolution, but because the depth is computed entirely by algorithm, it demands substantial computing resources, has poor real-time performance, and is sensitive to the imaging environment.
Structured light imaging emits laser in a specific pattern (speckle or dot lattice) from the camera; when the measured object reflects the pattern, the camera captures the reflection, and the size of the speckles or dots in the reflected pattern is used to calculate the distance between the measured object and the camera. Its main advantage is that it is unaffected by object texture, but the laser speckle can be drowned out under strong light, so it is unsuitable for outdoor use.
ToF imaging obtains the depth of a measured point directly by computing the time difference between the emitted signal and the reflected signal. It offers high real-time performance and is unaffected by illumination changes and object texture, but the image resolution is low, the module is large, and the hardware cost is high.
When various terminal devices are used, the relative positions of the depth camera and the visible light camera need to remain aligned. Currently, to reduce jitter during shooting, many visible light cameras adopt Optical Image Stabilization (OIS) and Auto Focus (AF) to improve the definition of the captured image; these mechanisms cause the relative positions of the two cameras to change, and the focal length and optical center of the cameras to change as well. In addition, a device drop, frame asynchrony between the cameras, or different frame rates may also change the relative positions of the two cameras. When such changes occur, the intrinsic and extrinsic parameters of the two cameras change; if the originally calibrated parameters are still used, the alignment accuracy between the depth image and the visible light image suffers, which in turn degrades subsequent algorithms that depend on depth information. Often there is no environment in which to recalibrate, and dynamic correction cannot be performed, so the image alignment effect is poor and the user experience suffers.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiments of the invention provide an image correction method, an image correction apparatus, an image correction system, and an electronic device, to at least solve the technical problems in the related art that dynamic correction between two different cameras cannot be achieved, adaptability to the environment is low, the image alignment effect is poor, and the user experience is easily degraded.
According to an aspect of the embodiments of the present invention, there is provided an image correction method comprising: acquiring a visible light image and a depth image captured of a target object, and forming a basic image pair after transformation, wherein the basic image pair comprises a first image and a second image; performing correction processing on the basic image pair in a preset correction mode to obtain a plurality of correction parameters; and performing alignment correction on the basic image pair based on each correction parameter to obtain a target image pair.
Optionally, the step of performing correction processing on the basic image pair in a preset correction mode to obtain a plurality of correction parameters comprises: scaling the basic image pair to a preset resolution and performing pyramid correction processing to obtain the plurality of correction parameters.
Optionally, the step of acquiring a visible light image and a depth image of the target object and transforming them to form a basic image pair comprises: transforming the depth image into the image coordinate system of the visible light image based on preset calibration parameters, and adjusting it to obtain a preliminary alignment depth map having the same resolution as the visible light image, wherein the visible light image and the preliminary alignment depth map are combined to form the basic image pair, the first image being the visible light image and the second image being the preliminary alignment depth map.
Optionally, the step of performing correction processing on the basic image pair in a preset correction mode to obtain a plurality of correction parameters further comprises: determining a target translation parameter and a target scaling factor between the first image and the second image; and determining the plurality of correction parameters based on the target translation parameter and the target scaling factor.
Optionally, before performing a correction process on the base image pair by using a preset correction mode to obtain a plurality of correction parameters, the image correction method further includes: preprocessing the preliminary alignment depth map in the basic image pair to obtain the first image; and filtering the visible light image in the basic image pair to obtain the second image.
Optionally, the step of determining a target translation parameter and a target scaling factor between the first image and the second image comprises: calculating the target translation parameter of the first image relative to the second image, and translating the first image based on the target translation parameter to obtain a third image; selecting a plurality of scaling coefficients, scaling the third image by each scaling coefficient, and calculating the image matching score between the scaled third image and the second image; and taking the scaling coefficient corresponding to the minimum of the image matching scores as the target scaling coefficient.
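For illustration only, this translate-then-scale search can be sketched as follows. This is a minimal sketch rather than the patented implementation: `match_score` stands in for the image matching score defined further below, and the helper names, the center-anchored scaling, and the coefficient grid are all assumptions.

```python
import cv2
import numpy as np

def translate(img, t_xy):
    # Shift the image by the target translation parameter (tx, ty).
    M = np.float32([[1, 0, t_xy[0]], [0, 1, t_xy[1]]])
    return cv2.warpAffine(img, M, (img.shape[1], img.shape[0]))

def scale_about_center(img, s):
    # Scale the image about its center while keeping the canvas size fixed.
    M = cv2.getRotationMatrix2D((img.shape[1] / 2, img.shape[0] / 2), 0, s)
    return cv2.warpAffine(img, M, (img.shape[1], img.shape[0]))

def search_target_scale(first, second, t_xy, match_score,
                        coeffs=np.linspace(0.95, 1.05, 11)):
    # Translate the first image to obtain the third image, then take the
    # scaling coefficient whose matching score against the second image is
    # minimal as the target scaling coefficient.
    third = translate(first, t_xy)
    return min(coeffs, key=lambda s: match_score(scale_about_center(third, s),
                                                 second))
```

The variants described next differ only in the stopping rule (score change below a first threshold) or in searching scale and translation jointly; the same helpers apply.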
Optionally, the step of determining a target translation parameter and a target scaling factor between the first image and the second image comprises: calculating the target translation parameter of the first image relative to the second image, and translating the first image based on the target translation parameter to obtain a fourth image; selecting a plurality of scaling coefficients, scaling the fourth image by each scaling coefficient, and calculating the image matching score between the scaled fourth image and the second image; and adjusting the scaling coefficient until the change in the image matching score is smaller than a first threshold, then taking the scaling coefficient corresponding to that image matching score as the target scaling coefficient.
Optionally, the step of determining a target translation parameter and a target scaling factor between the first image and the second image comprises: selecting a plurality of scaling coefficients and scaling the first image by each scaling coefficient; sliding each scaled first image over the second image and calculating the image matching score between the first image and the second image; and taking the scaling coefficient and translation amount corresponding to the minimum of the image matching scores as the target scaling coefficient and the target translation parameter.
Optionally, the step of preprocessing the preliminary alignment depth map in the basic image pair to obtain the first image comprises: mapping the depth value of each pixel point in the preliminary alignment depth map to a preset pixel range; and/or adjusting the image contrast of the preliminary alignment depth map to obtain the first image.
Optionally, the step of determining a target translation parameter and a target scaling factor between the first image and the second image includes: extracting image features of the first image to obtain a first feature subset, wherein the first feature subset comprises a first distance image, a first boundary directional diagram and first mask information; extracting image features of the second image to obtain a second feature subset, wherein the second feature subset comprises a second distance image and a second boundary directional diagram; based on the first feature subset and the second feature subset, a target translation parameter of the first image relative to the second image is calculated.
Optionally, the step of extracting the image feature of the first image to obtain a first feature subset includes: extracting all boundary pixel points of each target object in the first image to obtain a first edge image; carrying out reverse color processing on the first edge image to obtain a second edge image; extracting a contour from the first edge image to obtain a first contour array, and calculating a pixel point direction corresponding to each pixel point based on the first contour array to obtain a first contour direction array; performing preset distance transformation processing on the second edge image based on a first preset distance threshold to obtain the first distance image; calculating a first boundary directional diagram corresponding to each target object boundary in the second edge image based on the first contour direction array; the first feature subset is determined based on the first range image and the first boundary directional diagram.
Optionally, the step of performing preset distance transformation processing on the second edge image based on a first preset distance threshold to obtain a first distance image includes: determining the first mask information based on the first preset distance threshold, wherein the first mask information is used for shielding partial edge information in the second image; adding the first mask information to the first subset of features.
Optionally, the step of extracting the image feature of the second image to obtain a second feature subset includes: extracting all boundary pixel points of each target object in the second image to obtain a third edge image; deleting the outline in the third edge image by using the first mask information; performing reverse color processing on the third edge image subjected to the deletion processing to obtain a fourth edge image; extracting the contour of the fourth edge image to obtain a second contour array, and calculating the pixel point direction corresponding to each pixel point based on the second contour array to obtain a second contour direction array; performing preset distance conversion processing on the fourth edge image based on a second preset distance threshold value to obtain a second distance image; calculating the second boundary directional diagram corresponding to each target object boundary in the fourth edge image based on the second contour direction array; and obtaining the second feature subset based on the second distance image and the second boundary directional diagram.
Optionally, the step of calculating a target translation parameter of the first image relative to the second image based on the first feature subset and the second feature subset comprises: extracting, under a first judgment condition, the contour pixel points whose pixel distance in the first distance image and the second distance image is smaller than a first distance threshold, to obtain a first contour pixel point set participating in image matching; extracting, under a second judgment condition, the contour pixel points whose pixel distance in the first boundary directional diagram and the second boundary directional diagram is smaller than a second distance threshold, to obtain a second contour pixel point set participating in image matching; determining a chamfer distance score, a directional diagram distance, and an image adjustment factor between the first image and the second image based on the first contour pixel point set and the second contour pixel point set, wherein the image adjustment factor adjusts the proportion between the chamfer distance score and the directional diagram distance; sliding the second image over the first image, and inputting the chamfer distance score, the directional diagram distance, and the image adjustment factor into a first preset formula to calculate an image sliding score; determining the target sliding position corresponding to the minimum of all image sliding scores; and determining the target translation parameter based on the target sliding position.
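The sliding match can be illustrated with a short sketch. The "first preset formula" is not reproduced at this point in the text, so the sketch assumes a simple weighted sum of the chamfer distance score and the directional diagram distance, with the weight `lam` standing in for the image adjustment factor; a full implementation would also restrict the computation to the first and second contour pixel point sets selected by the judgment conditions.

```python
import numpy as np

def chamfer_slide(DT1, OM1, edges2, OM2, lam=0.5, search=20):
    # Contour pixels of the second (query) image.
    ys, xs = np.nonzero(edges2)
    h, w = DT1.shape
    best_score, best_shift = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = ys + dy, xs + dx
            ok = (y >= 0) & (y < h) & (x >= 0) & (x < w)
            if not ok.any():
                continue
            # Chamfer distance score: mean distance from each shifted query
            # contour pixel to the nearest template edge.
            d_cd = DT1[y[ok], x[ok]].mean()
            # Directional diagram distance: mean angular difference between
            # template and query edge directions, taken modulo pi.
            diff = np.abs(OM1[y[ok], x[ok]] - OM2[ys[ok], xs[ok]]) % np.pi
            d_om = np.minimum(diff, np.pi - diff).mean()
            # Assumed form of the "first preset formula": a weighted sum.
            score = d_cd + lam * d_om
            if score < best_score:
                best_score, best_shift = score, (dx, dy)
    return best_shift  # target translation parameter (dx, dy)
```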
Optionally, the step of scaling the basic image pair to a preset resolution and performing pyramid correction processing to obtain the plurality of correction parameters comprises: acquiring the alignment precision value required by the terminal application, and determining a plurality of correction resolutions based on the alignment precision value and the resolution of the basic image pair, wherein the plurality of correction resolutions at least include a preset resolution, which is the minimum among them; and scaling the basic image pair to the preset resolution and performing pyramid correction processing until the alignment precision value is satisfied, to obtain the plurality of correction parameters.
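A minimal sketch of this coarse-to-fine loop follows; `estimate_params` stands in for the per-level translation and scale estimation described above, and the correction resolutions are assumed to be given as (width, height) pairs.

```python
import cv2

def pyramid_correction(first, second, estimate_params, resolutions):
    # Start at the preset (smallest) correction resolution and move up the
    # pyramid, collecting one set of correction parameters per level.
    correction_params = []
    for (w, h) in sorted(resolutions):
        f = cv2.resize(first, (w, h), interpolation=cv2.INTER_AREA)
        s = cv2.resize(second, (w, h), interpolation=cv2.INTER_AREA)
        correction_params.append(estimate_params(f, s))
    return correction_params
```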
Optionally, the image correction method further comprises: determining the image alignment accuracy required by the terminal application at a target image resolution; step S1, determining whether the current alignment accuracy of the target image pair meets the required image alignment accuracy at a first image resolution; step S2, if the current alignment accuracy of the target image pair does not reach the required image alignment accuracy, adjusting the image resolution to a second image resolution, wherein the second image resolution is higher than the first image resolution; step S3, performing correction processing on the basic image pair in the preset correction mode to obtain a plurality of correction parameters; step S4, performing alignment correction on the basic image pair based on each correction parameter to obtain a target image pair; and repeating steps S1 to S4 until the current alignment accuracy reaches the required image alignment accuracy.
Optionally, the image correction method further comprises: comparing the image resolution of the visible light image with that of the depth image to obtain the smaller of the two; calculating a correction count threshold based on that smaller image resolution and the initially set maximum correction processing resolution; and, during alignment correction, stopping the correction processing once the number of image corrections reaches the correction count threshold.
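The text does not give the formula for the correction count threshold, so the sketch below is purely an assumed reading: one correction pass is allowed per pyramid level between the smaller input resolution and the initially set maximum.

```python
import math

def correction_count_threshold(visible_h, depth_h, max_h):
    # Assumption: resolutions are compared by line count, and each pyramid
    # doubling between the smaller input and the maximum adds one pass.
    lo = min(visible_h, depth_h)
    return max(1, int(math.log2(max_h / lo)) + 1)

# Example: a VGA depth map (480 lines) with a 1920-line maximum allows
# log2(1920 / 480) + 1 = 3 correction passes at most.
```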
According to another aspect of the embodiments of the present invention, there is also provided an image correction apparatus comprising: an acquisition unit, configured to acquire a visible light image and a depth image captured of a target object and form a basic image pair after transformation, wherein the basic image pair comprises a first image and a second image; a first correction unit, configured to perform correction processing on the basic image pair in a preset correction mode to obtain a plurality of correction parameters; and a second correction unit, configured to perform alignment correction on the basic image pair based on each correction parameter to obtain a target image pair.
Optionally, the first correction unit includes: a first correction module, configured to scale the basic image pair to a preset resolution and perform pyramid correction processing to obtain the plurality of correction parameters.
Optionally, the obtaining unit includes: a first transformation module, configured to transform the depth image into an image coordinate system of the visible light image based on preset calibration parameters, and adjust the image coordinate system to obtain a preliminary alignment depth map having the same resolution as the visible light image, where the visible light image and the preliminary alignment depth map are combined to form the basic image pair, the first image is the visible light image, and the second image is the preliminary alignment depth map.
Optionally, the first correction unit further includes: a first determining module, configured to determine a target translation parameter and a target scaling factor between the first image and the second image; and the second determining module is used for determining a plurality of correction parameters based on the target translation parameter and the target scaling coefficient.
Optionally, the image correction apparatus further includes: a first processing unit, configured to perform preprocessing on a preliminary alignment depth map in the base image pair to obtain the first image before performing correction processing on the base image pair in a preset correction mode to obtain a plurality of correction parameters; and the second processing unit is used for carrying out filtering processing on the visible light image in the basic image pair to obtain the second image.
Optionally, the first determining module includes: a first calculating module, configured to calculate the target translation parameter of the first image relative to the second image, and translate the first image based on the target translation parameter to obtain a third image; a first scaling module, configured to select multiple scaling coefficients, scale the third image with each scaling coefficient, and calculate an image matching score between the third image and the second image; and the second determining module is used for taking the scaling coefficient corresponding to the minimum score in the image matching scores as the target scaling coefficient.
Optionally, the first determining module further includes: a second calculating module, configured to calculate the target translation parameter of the first image relative to the second image, and translate the first image based on the target translation parameter to obtain a fourth image; a second scaling module, configured to select multiple scaling coefficients, scale the fourth image with each scaling coefficient, and calculate an image matching score between the fourth image and the second image; and a third determining module, configured to adjust the scaling factor until a change in a score in the image matching score is smaller than a first threshold, where a scaling factor corresponding to the image matching score is used as a target scaling factor.
Optionally, the first determining module further includes: a third scaling module, configured to select multiple scaling coefficients, and scale the first image with each scaling coefficient; a third calculating module configured to slide the first image scaled based on each of the scaling coefficients on the second image, and calculate a score of image matching between the first image and the second image; a fourth determining module, configured to use a scaling factor and a translation amount corresponding to a minimum score among the plurality of image matching scores as the target scaling factor and the target translation parameter.
Optionally, the first processing unit includes: a first mapping module, configured to map a depth value of each pixel point in the preliminary alignment depth map in the basic image pair to a preset pixel range; and/or the first adjusting module is used for adjusting the image contrast of the preliminary alignment depth map to obtain the first image.
Optionally, the first determining module further includes: the first extraction module is used for extracting image features of the first image to obtain a first feature subset, wherein the first feature subset comprises a first distance image, a first boundary directional diagram and first mask information; a second extraction module, configured to extract image features of the second image to obtain a second feature subset, where the second feature subset includes a second distance image and a second boundary directional diagram; a fourth calculating module, configured to calculate a target translation parameter of the first image relative to the second image based on the first feature subset and the second feature subset.
Optionally, the first extracting module includes: the first extraction submodule is used for extracting all boundary pixel points of each target object in the first image to obtain a first edge image; the first reverse color submodule is used for performing reverse color processing on the first edge image to obtain a second edge image; the second extraction submodule is used for extracting the contour of the first edge image to obtain a first contour array, and calculating the pixel point direction corresponding to each pixel point based on the first contour array to obtain a first contour direction array; the first transformation submodule is used for carrying out preset distance transformation processing on the second edge image based on a first preset distance threshold value to obtain a first distance image; a first calculating submodule, configured to calculate, based on the first contour direction array, a first boundary directional diagram corresponding to each target object boundary in the second edge image; a first determining sub-module for determining a first feature subset based on the first range image and the first boundary directional diagram.
Optionally, the first transformation submodule includes: a second determining submodule, configured to determine first mask information based on the first preset distance threshold, where the first mask information is used to mask part of edge information in the second image; an adding submodule, configured to add the first mask information to the first feature subset.
Optionally, the second extraction module includes: the second extraction submodule is used for extracting all boundary pixel points of each target object in the second image to obtain a third edge image; a deletion submodule configured to perform deletion processing on the contour in the third edge image using the first mask information; the second inverse color submodule is used for performing inverse color processing on the deleted third edge image to obtain a fourth edge image; the second calculation submodule is used for extracting the contour of the fourth edge image to obtain a second contour array, and calculating the pixel point direction corresponding to each pixel point based on the second contour array to obtain a second contour direction array; the second transformation submodule is used for carrying out preset distance transformation processing on the fourth edge image based on a second preset distance threshold value to obtain a second distance image; a third calculation submodule, configured to calculate, based on the second contour direction array, a second boundary directional diagram corresponding to each target object boundary in the fourth edge image; and the third determining submodule is used for obtaining a second feature subset based on the second distance image and the second boundary directional diagram.
Optionally, the fourth calculating module includes: the third extraction submodule is used for extracting contour pixel points with the pixel distance smaller than a first distance threshold value in the first distance image and the second distance image by adopting a first judgment condition to obtain a first contour pixel point set participating in image matching; a fourth extraction submodule, configured to extract contour pixel points, of which pixel distances are smaller than a second distance threshold, in the first boundary directional diagram and the second boundary directional diagram by using a second judgment condition, so as to obtain a second contour pixel point set participating in image matching; a fifth determining submodule, configured to determine a chamfer distance score, a direction diagram distance, and an image adjustment factor between the first image and the second image based on the first contour pixel point set and the second contour pixel point set, where the image adjustment factor is used to adjust the chamfer distance score and the direction diagram distance proportion; a fourth calculation submodule configured to slide the second image on the first image, and input the chamfer distance score, the direction map distance, and the image adjustment factor to a first preset formula to calculate an image slide score; a sixth determining submodule, configured to determine a target sliding position corresponding to a minimum score among all the image sliding scores; and the seventh determining submodule is used for determining the target translation parameter based on the target sliding position.
Optionally, the first correction module includes: a first obtaining submodule, configured to acquire the alignment precision value required by the terminal application and determine a plurality of correction resolutions based on the alignment precision value and the resolution of the basic image pair, wherein the plurality of correction resolutions at least include a preset resolution, which is the minimum among them; and a first correction submodule, configured to scale the basic image pair to the preset resolution and perform pyramid correction processing until the alignment precision value is satisfied, to obtain the plurality of correction parameters.
Optionally, the image correction apparatus further includes: a determining unit, configured to determine the image alignment accuracy required by the terminal application at the target image resolution; a first determining unit, configured to execute step S1 of determining whether the current alignment accuracy of the target image pair meets the required image alignment accuracy at the first image resolution; a first adjusting unit, configured to execute step S2 of adjusting the image resolution to a second image resolution if the current alignment accuracy of the target image pair does not reach the required image alignment accuracy, wherein the second image resolution is higher than the first image resolution; a first execution unit, configured to execute step S3 of performing correction processing on the basic image pair in the preset correction mode to obtain a plurality of correction parameters; and a second execution unit, configured to execute step S4 of performing alignment correction on the basic image pair based on each correction parameter to obtain a target image pair; steps S1 to S4 are repeated until the current alignment accuracy reaches the required image alignment accuracy.
Optionally, the image correction apparatus further includes: a comparison unit, configured to compare the image resolution of the visible light image with that of the depth image to obtain the smaller of the two; a calculation unit, configured to calculate a correction count threshold based on that image resolution and the initially set maximum correction processing resolution; and a stopping unit, configured to stop the correction processing once the number of image corrections reaches the correction count threshold during alignment correction.
According to another aspect of the embodiments of the present invention, there is also provided an image correction system comprising: a first image capturing device for capturing a visible light image of a target object; a second image capturing device for capturing a depth image of the target object; a correction device for acquiring the visible light image and the depth image captured of the target object and forming a basic image pair after transformation, wherein the basic image pair comprises a first image and a second image, performing correction processing on the basic image pair in a preset correction mode to obtain a plurality of correction parameters, and performing alignment correction on the basic image pair based on each correction parameter to obtain a target image pair; and a result output device for outputting the aligned target image pair to a preset terminal display interface.
According to another aspect of the embodiments of the present invention, there is also provided an electronic device, including: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to execute any one of the image correction methods via execution of the executable instructions.
According to another aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium, where the computer-readable storage medium includes a stored computer program, and when the computer program runs, the apparatus where the computer-readable storage medium is located is controlled to execute any one of the above image correction methods.
In the embodiments of the invention, a visible light image and a depth image captured of a target object are acquired and transformed to form a basic image pair comprising a first image and a second image; correction processing is performed on the basic image pair in a preset correction mode to obtain a plurality of correction parameters; and alignment correction is performed on the basic image pair based on each correction parameter to obtain a target image pair. In these embodiments, images captured by a variety of cameras can be aligned and dynamic correction is achieved; the correction environment is simple, and alignment correction can be completed using images captured by the device itself, thereby solving the technical problems in the related art that dynamic correction between two different cameras cannot be achieved, adaptability to the environment is low, the image alignment effect is poor, and the user experience is easily degraded.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a flow chart of an alternative image correction method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an alternative preliminary alignment depth map in accordance with embodiments of the present invention;
FIG. 3 is an alternative overlay image of a registered depth image and a visible image in accordance with embodiments of the present invention;
FIG. 4 is a schematic diagram of an alternative image correction apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In order to facilitate understanding of the present invention by those skilled in the art, some terms involved in the embodiments of the present invention are explained below:
RGB, Red Green Blue, a color standard, also referred to herein as a typical color image;
RGB-D, Red Green Blue-Depth, color-Depth map;
ToF, Time of flight;
OIS, Optical Image Stabilization;
AF, Auto Focus;
FF, Fixed Focus;
VGA resolution: 640 x 480.
Scenarios to which the following embodiments of the invention may be applied include, but are not limited to: three-dimensional reconstruction, autonomous driving, face recognition, object measurement, 3D modeling, background blurring, fusion of visible light and infrared images, fusion of visible light and depth images, VR glasses, vehicle-mounted imaging equipment, and the like. To handle the image differences brought about by complex shooting environments and complete image alignment efficiently, alignment parameters are calculated from the images acquired by multiple cameras. The method suits image capture equipment carrying a depth camera, whether the equipment provides only visible light and infrared images, only visible light and depth images, or visible light, infrared, and depth images; no particular type of depth camera is required, and it may be a ToF camera, an infrared camera, a structured light camera, and/or a binocular depth camera.
The correction environment of the method is simple: no specific environment or specific target pattern is required, and dynamic correction can be achieved as long as the visible light image and the depth image have been preliminarily aligned according to the preset calibration parameters. For camera equipment that does not use OIS, the invention can keep images aligned by performing dynamic alignment correction only once in a while. In addition, for correction tasks with high alignment accuracy requirements but low real-time requirements, the method can perform alignment correction at high resolution, which can subsequently serve 3D modeling, background blurring, fusion of visible light and infrared images, and the like.
The invention can also be used for object detection and matching; common applications include gesture detection and pedestrian detection. Calibration errors of a mobile phone camera caused by external forces such as a drop can be corrected with the method automatically at regular intervals or after sale; for VR glasses and vehicle-mounted imaging equipment, alignment errors between the visible light image and the depth image caused by vibration can likewise be corrected. The invention is illustrated below with reference to various embodiments.
Example one
In accordance with an embodiment of the present invention, there is provided an image correction method embodiment, it is noted that the steps illustrated in the flowchart of the drawings may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowchart, in some cases, the steps illustrated or described may be performed in an order different than here.
The embodiment of the invention provides an image correction method that can be applied to an image correction system comprising: a first image capture device capable of capturing a depth image/infrared image (illustrated in the embodiments as a depth camera) and a second image capture device capable of capturing a visible light image (illustrated as a visible light camera). The OIS mechanism, the AF mechanism, device drops, and frame asynchrony or differing frame rates between the image capture devices all change the intrinsic and extrinsic parameters of the cameras, so aligning the images of the two cameras with the preset calibration parameters produces errors, mainly manifested as translation and scaling in the preliminarily aligned image pair; such errors are unacceptable for image processing technologies with strict accuracy requirements, such as three-dimensional reconstruction and background blurring. Therefore, when the relative position of the image capture devices or their parameters change, the visible light image and the depth image that were aligned using the preset calibration parameters need to be further corrected to reduce the alignment error.
The embodiments of the invention improve the practicality of image alignment and can align images captured by a variety of cameras. The correction parameters can be calculated from a visible light image and an infrared image, or from a visible light image and a depth image, so the method suits various devices carrying a ToF depth camera or a structured light depth camera. Generally, the texture difference between the images such cameras acquire and the visible light image is large, so common keypoint matching schemes are not feasible; the technical solution provided by the embodiments of the invention can nevertheless achieve an accurate alignment.
Fig. 1 is a flow chart of an alternative image correction method according to an embodiment of the present invention, as shown in fig. 1, the method comprising the steps of:
step S102, acquiring a visible light image and a depth image shot for a target object, and forming a basic image pair after transformation, wherein the basic image pair comprises a first image and a second image;
step S104, correcting the basic image pair by adopting a preset correction mode to obtain a plurality of correction parameters;
and step S106, carrying out alignment correction on the basic image pair based on each correction parameter to obtain a target image pair.
Through the above steps, a visible light image and a depth image captured of the target object are acquired and transformed to form a basic image pair comprising a first image and a second image; correction processing is performed on the basic image pair in a preset correction mode to obtain a plurality of correction parameters; and alignment correction is performed on the basic image pair based on each correction parameter to obtain a target image pair. In this embodiment, images captured by a variety of cameras can be aligned and dynamic correction is achieved; the correction environment is simple, and alignment correction can be completed using images captured by the device itself, thereby solving the technical problems in the related art that dynamic correction between two different cameras cannot be achieved, adaptability to the environment is low, the image alignment effect is poor, and the user experience is easily degraded.
The following is a detailed description of the above embodiments.
Step S102, a visible light image and a depth image shot for a target object are obtained, and a basic image pair is formed after transformation, wherein the basic image pair comprises a first image and a second image.
The image capture devices used by embodiments of the invention may include: a depth camera, which can produce a depth image and a corresponding infrared image, and a visible light camera, which produces a visible light image. In the embodiments of the invention the depth camera does not need to produce the depth image and the infrared image at the same time; the embodiments apply not only to equipment that can provide both infrared and depth images but also to equipment that can provide only a depth image or only an infrared image.
As an optional embodiment of the present invention, acquiring a visible light image and a depth image taken of a target object, and forming a base image pair after transformation includes: based on preset calibration parameters, the depth image is converted into an image coordinate system of the visible light image, and a preliminary alignment depth map with the same resolution as the visible light image is obtained through adjustment, wherein the visible light image and the preliminary alignment depth map are combined to form a basic image pair, the first image is the visible light image, and the second image is the preliminary alignment depth map.
The preset calibration parameters are the parameters determined during the initial calibration of the depth camera and the visible light camera, for example the factory calibration results. Generally, the resolution of the depth map is smaller than that of the visible light image; after the depth map is transformed into the visible light image coordinate system, it can be adjusted by a traditional interpolation algorithm or a deep-learning super-resolution model to obtain a preliminary alignment depth map with the same resolution as the visible light image, which facilitates the subsequent image alignment correction.
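The preliminary alignment can be sketched as a standard depth reprojection. This is an illustration under assumptions: pinhole intrinsics K_d and K_rgb and depth-to-RGB extrinsics (R, t) play the role of the preset calibration parameters, all names are hypothetical, and densification (interpolation or a super-resolution model) and occlusion handling are omitted.

```python
import numpy as np

def prealign_depth(depth, K_d, K_rgb, R, t, rgb_hw):
    h, w = depth.shape
    # Back-project every depth pixel into 3D in the depth camera frame.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float64)
    x = (u - K_d[0, 2]) * z / K_d[0, 0]
    y = (v - K_d[1, 2]) * z / K_d[1, 1]
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    # Transform into the visible-light camera frame and project with K_rgb.
    pts = pts @ R.T + t
    zr = pts[:, 2]
    zsafe = np.where(zr > 0, zr, np.inf)
    ur = np.round(pts[:, 0] * K_rgb[0, 0] / zsafe + K_rgb[0, 2]).astype(int)
    vr = np.round(pts[:, 1] * K_rgb[1, 1] / zsafe + K_rgb[1, 2]).astype(int)
    # Scatter depths onto a canvas the size of the visible light image.
    out = np.zeros(rgb_hw, dtype=np.float64)
    ok = (zr > 0) & (ur >= 0) & (ur < rgb_hw[1]) & (vr >= 0) & (vr < rgb_hw[0])
    out[vr[ok], ur[ok]] = zr[ok]  # preliminary alignment depth map (sparse)
    return out
```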
Optionally, before the base image pair is corrected by using a preset correction mode to obtain a plurality of correction parameters, the image correction method further includes: preprocessing the preliminary alignment depth map in the basic image pair to obtain a first image; and filtering the visible light image in the basic image pair to obtain a second image.
Optionally, the step of preprocessing the preliminary aligned depth map in the base image pair to obtain a first image includes: mapping the depth value of each pixel point in the preliminary alignment depth map in the basic image pair to a preset pixel range; and/or adjusting the image contrast of the preliminary alignment depth map to obtain the first image.
In the embodiment of the invention, a visible light image and a depth image are obtained from the visible light camera and the depth camera respectively, and the depth image is transformed into the visible light image coordinate system according to the preset calibration parameters to obtain a preliminary alignment depth map with the same resolution as the visible light image. The preliminary alignment depth map is then preprocessed: the depth value of each pixel point is mapped to the preset pixel range [0, 255], and the image contrast is adjusted to recover details lost to over-exposure and to strengthen texture information weakened by it, so that the subsequent alignment correction is more effective; the preprocessed preliminary alignment depth map is recorded as the first image, or template image. The visible light image is filtered, which helps remove the high-frequency noise introduced by resampling when the resolution of the visible light image is changed; the resulting image is recorded as the second image, or matching image.
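A minimal sketch of this preprocessing follows, with illustrative parameter values; CLAHE for the contrast adjustment and a Gaussian filter for the visible light image are assumptions, not operators mandated by the text.

```python
import cv2
import numpy as np

def preprocess_pair(prealigned_depth, visible):
    # Map each depth value of the preliminary alignment depth map into the
    # preset pixel range [0, 255].
    d = prealigned_depth.astype(np.float32)
    span = max(float(d.max() - d.min()), 1e-6)
    template = (255.0 * (d - d.min()) / span).astype(np.uint8)
    # Adjust contrast to recover detail weakened by over-exposure.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    template = clahe.apply(template)            # first image / template image
    # Low-pass filter the visible light image to suppress high-frequency
    # noise introduced when its resolution is changed.
    gray = cv2.cvtColor(visible, cv2.COLOR_BGR2GRAY)
    match = cv2.GaussianBlur(gray, (5, 5), 0)   # second image / matching image
    return template, match
```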
In practical applications such as three-dimensional reconstruction, the visible light image and the depth image usually need to be aligned, and the smaller the alignment error, the better for subsequent algorithms. One aim of embodiments of the invention is to reduce the error between the visible light image and the preliminary alignment depth map. The correction parameters can be calculated from the visible light image and the depth map, or from the visible light image and an infrared image; the implementation below takes the calculation from the visible light image and the depth image as an example, and applies equally to the visible light image and the infrared image.
Fig. 2 is a schematic diagram of an alternative preliminary alignment depth map according to an embodiment of the present invention, as shown in fig. 2, which contains the contour and edge information of the extracted main object.
Fig. 3 is an alternative superimposed image of the aligned depth image and visible light image according to an embodiment of the present invention. As shown in fig. 3, once the visible light image is overlaid, it must be aligned with the portions of the preliminary alignment depth map where the objects are shifted; in the further-aligned image of fig. 3 the residual shift is very small, and pixel-by-pixel alignment is essentially achieved.
The image correction scheme, or dynamic correction scheme, in the embodiments of the invention is suitable for calibrated equipment, that is, equipment whose initial camera intrinsics and extrinsics are known. Because factors such as the OIS mechanism, the AF mechanism, and device drops affect the camera intrinsics and the extrinsics between the cameras, alignment errors exist between the visible light image and a depth map aligned using the known calibration parameters, mainly manifested as translation and scaling in the preliminary alignment image pair.
And step S104, correcting the basic image pair by adopting a preset correction mode to obtain a plurality of correction parameters.
As an optional embodiment of the present invention, the performing a correction process on the base image pair by using a preset correction mode to obtain a plurality of correction parameters includes: determining a target translation parameter and a target scaling factor between the first image and the second image; based on the target translation parameter and the target scaling factor, a plurality of correction parameters are determined.
Optionally, the step of determining a target translation parameter and a target scaling factor between the first image and the second image includes: extracting image features of the first image to obtain a first feature subset, wherein the first feature subset comprises a first distance image, a first boundary directional diagram and first mask information; extracting image features of the second image to obtain a second feature subset, wherein the second feature subset comprises a second distance image and a second boundary directional diagram; based on the first subset of features and the second subset of features, a target translation parameter of the first image relative to the second image is calculated.
Feature extraction is performed separately on the preprocessed first image and the preprocessed second image.
First, in the embodiment of the present invention, an image feature of a first image is extracted to obtain a first feature subset.
Optionally, the step of extracting the image feature of the first image to obtain the first feature subset includes: extracting all boundary pixel points of each target object in the first image to obtain a first edge image; carrying out reverse color processing on the first edge image to obtain a second edge image; extracting a contour from the first edge image to obtain a first contour array, and calculating a pixel point direction corresponding to each pixel point based on the first contour array to obtain a first contour direction array; performing preset distance transformation processing on the second edge image based on the first preset distance threshold value to obtain a first distance image; calculating a first boundary directional diagram corresponding to each target object boundary in the second edge image based on the first contour direction array; a first subset of features is determined based on the first range image and the first boundary orientation map.
When the image features of the first image are extracted, all boundary pixel points of each target object in the first image are extracted to obtain a first edge image; that is, the main edges in the first image are extracted, where an edge means the boundary pixel points of at least one object in the image.
The contour of the first edge image is extracted to obtain a first contour array. The first contour array records, in a multi-dimensional array, how the contours are divided and the positional relations of the pixels each contour contains, so the direction corresponding to each pixel point can be calculated from it to obtain a first contour direction array. Calculating the first boundary directional diagram corresponding to each target object boundary in the second edge image based on the first contour direction array may proceed as follows: the first contour direction array records the contour division information and the direction value of each pixel point contained in each contour; this information is mapped onto the second edge image, the direction corresponding to each target object boundary in the second edge image is calculated, and the result is stored as the first boundary directional diagram.
In the embodiment of the present invention, the step of performing preset distance transformation processing on the second edge image based on the first preset distance threshold to obtain the first distance image includes: determining first mask information based on a first preset distance threshold, wherein the first mask information is used for shielding partial edge information in a second image; first mask information is added to the first subset of features.
Based on the first preset distance threshold, a Euclidean distance transform is performed on the second edge image to obtain the first distance image. At the same time, a mask is generated for subsequently removing redundant edge information in the second image/query image; the mask information is used to screen and shield regions of the second image (or the matched image) so that they do not participate in processing. By processing the second image, which has complex contours, with the mask information obtained from the distance transform of the simple-contour image, redundant contours can be removed and the amount of image-processing computation is reduced.
For example, extracting the first image/template image feature includes:
Step 1: extract the main edges of the first image, and record the obtained edge image as C1, where the gray value at edge pixel positions is 255 and the gray value at non-edge pixel positions is 0;
Step 2: perform reverse color processing on the edge image C1 to obtain an edge image E1, where the gray value at edge pixel positions is 0 and the gray value at non-edge pixel positions is 255;
Step 3: extract the contours in C1 and record them as V1;
Step 4: based on a preset distance threshold, perform a Euclidean distance transform on the edge image E1 to obtain a distance image DT1 and a mask for subsequently removing redundant edge information in the second image/query image;
Step 5: calculate the edge directional diagram OM1 in the edge image E1. First, calculate the direction of the pixel points on the contour V1 and record it as O1; then map O1 onto the edge image E1 to calculate the direction of the contours, and save the result as the directional diagram OM1.
Then, the embodiment of the present invention extracts the image feature of the second image to obtain a second feature subset.
Optionally, the step of extracting image features of the second image to obtain a second feature subset includes: extracting all boundary pixel points of each target object in the second image to obtain a third edge image; deleting the outline in the third edge image by adopting the first mask information; performing reverse color processing on the third edge image subjected to the deleting processing to obtain a fourth edge image; extracting the contour of the fourth edge image to obtain a second contour array, and calculating the pixel point direction corresponding to each pixel point based on the second contour array to obtain a second contour direction array; performing preset distance transformation processing on the fourth edge image based on a second preset distance threshold value to obtain a second distance image; calculating a second boundary directional diagram corresponding to each target object boundary in the fourth edge image based on the second contour direction array; a second feature subset is derived based on the second range image and the second boundary directional diagram.
And extracting the main edge of each object in the second image to obtain a third edge image, deleting the edge in the third edge image by using mask information, and performing reverse color processing on the processed edge image to obtain a fourth edge image or a main edge image. By processing the second image with a complex contour using the mask information obtained by the simple contour image distance transformation, redundant contours can be removed, and the amount of calculation for image processing is reduced.
For example, extracting the second image/matching image features includes:
Step 1: extract the main edges of the image, and record the obtained edge image as C2;
Step 2: use the mask information calculated above to delete edges in C2, and then perform reverse color processing on the processed edge image to obtain a main-edge image E2, where the gray value at edge pixel positions is 0 and the gray value at non-edge pixel positions is 255;
Step 3: extract the contours in the image C2 and record them as V2;
Step 4: perform a Euclidean distance transform on the edge image E2 to obtain a distance image DT2; a distance threshold is set to determine the DT2 values that actually participate in the calculation;
Step 5: calculate the edge directional diagram OM2 in the edge image E2. First, calculate the direction of the pixel points on the contour V2 and record it as O2; then map O2 onto the edge image E2 to calculate the direction of the contours, and save the result as the directional diagram OM2.
After the feature extraction operations of the first image and the second image are completed, an image target translation parameter and a target scaling factor may be calculated.
Optionally, the step of calculating a target translation parameter of the first image relative to the second image based on the first feature subset and the second feature subset includes: extracting contour pixel points with the pixel distance smaller than a first distance threshold value in the first distance image and the second distance image by adopting a first judgment condition to obtain a first contour pixel point set participating in image matching; extracting contour pixel points with the pixel distance smaller than a second distance threshold value in the first boundary directional diagram and the second boundary directional diagram by adopting a second judgment condition to obtain a second contour pixel point set participating in image matching; determining a chamfer distance score, a direction diagram distance and an image adjusting factor between the first image and the second image based on the first contour pixel point set and the second contour pixel point set, wherein the image adjusting factor is used for adjusting the chamfer distance score and the direction diagram distance proportion; sliding the second image on the first image, inputting the chamfer distance score, the direction graph distance and the image adjusting factor into a first preset formula, and calculating an image sliding score; determining a target sliding position corresponding to the minimum score in all image sliding scores; based on the target slide position, a target translation parameter is determined.
For example, when calculating the target translation parameter, the following steps are included:
Step 1: extract the positions of the contour pixel points participating in image matching; the judgment condition is:

‖DT1(i, j) − DT2(i, j)‖ < th1 and ‖OM1(i, j) − OM2(i, j)‖ < th2;

where th1 and th2 are two preset distance thresholds, ‖DT1(i, j) − DT2(i, j)‖ is the pixel distance at a given pixel point between the first distance image and the second distance image, and ‖OM1(i, j) − OM2(i, j)‖ is the pixel distance at a given pixel point between the first boundary directional diagram and the second boundary directional diagram.

Pixel positions that do not meet the condition do not participate in the calculation of the chamfer distance score, which reduces the amount of computation.

Step 2: calculate the chamfer distance score. By sliding the first image on the second image, an image sliding score is calculated for each sliding position; the first preset formula is:

score = EDT + s × ODT;

where score is the image sliding score at a given sliding position, EDT is the chamfer distance between the first distance image and the second distance image, ODT is the directional-diagram distance between the first boundary directional diagram and the second boundary directional diagram, and s is an image adjustment factor that adjusts the relative weight of EDT and ODT.

Step 3: select the translation values in the x and y directions. The target sliding position corresponding to the minimum score among all the image sliding scores is determined, and the translation amount corresponding to that position is the target translation parameter required for alignment, namely the x- and y-direction translation values dx and dy.
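Putting the screening condition and the first preset formula together, the translation search can be sketched as an exhaustive sliding match. In this sketch the second image's features are shifted against the first image's (equivalent up to the sign of dx, dy); the search range and default weights are illustrative assumptions:

```python
def find_translation(dt1, om1, dt2, om2, search=30, th1=5.0, th2=0.5, s=1.0):
    h, w = dt1.shape
    best = (np.inf, 0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            # shift the second image's features to this sliding position
            M = np.float32([[1, 0, dx], [0, 1, dy]])
            dt2s = cv2.warpAffine(dt2, M, (w, h))
            om2s = cv2.warpAffine(om2, M, (w, h))

            d_dt = np.abs(dt1 - dt2s)
            d_om = np.abs(om1 - om2s)
            # screening: only pixels close in both distance and direction
            # participate in the chamfer score, reducing the computation
            keep = (d_dt < th1) & (d_om < th2)
            if not keep.any():
                continue
            edt = d_dt[keep].mean()     # chamfer distance score (EDT)
            odt = d_om[keep].mean()     # directional-diagram distance (ODT)
            score = edt + s * odt       # first preset formula
            if score < best[0]:
                best = (score, dx, dy)
    return best  # minimum score and the target translation (dx, dy)
```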
In the embodiment of the invention, when the basic image pair is corrected, the order in which scaling and translation are executed is not limited: the translation correction value and the scaling correction value may be calculated first and alignment then realized based on both correction values, or the translation correction value may be calculated first and the translation correction completed, with the scaling correction then performed on that basis.
Several embodiments of determining the target translation parameter and the target scaling factor between the first image and the second image, respectively, are described below.
First, the step of determining a target translation parameter and a target scaling factor between a first image and a second image comprises: calculating a target translation parameter of the first image relative to the second image, and translating the first image based on the target translation parameter to obtain a third image; selecting a plurality of scaling coefficients, scaling the third image by each scaling coefficient respectively, and calculating an image matching score between the third image and the second image; and taking the scaling coefficient corresponding to the minimum score in the image matching scores as the target scaling coefficient.
First, the target translation parameter is determined, and the first image is translated using the target translation parameter to obtain a third image. The third image is then scaled: the scaling coefficient is adjusted and, for each coefficient, the image matching score between the third image and the second image is calculated. The scaling coefficient corresponding to the minimum image matching score is selected as the target scaling coefficient, and the third image is scaled with the target scaling coefficient, thereby realizing the correction processing between the two images.
The target scaling factors include, but are not limited to: a scaling width coefficient, a scaling length coefficient, a scaling factor, etc.
Secondly, the step of determining a target translation parameter and a target scaling factor between the first image and the second image comprises: calculating a target translation parameter of the first image relative to the second image, and translating the first image based on the target translation parameter to obtain a fourth image; selecting a plurality of scaling coefficients, scaling the fourth image by each scaling coefficient respectively, and calculating an image matching score between the fourth image and the second image; and based on the image matching score, adjusting the scaling coefficient until the score change in the image matching score is smaller than a first threshold value, and taking the scaling coefficient corresponding to the image matching score as a target scaling coefficient.
First, the translation parameter is determined, and the first image is translated using the target translation parameter to obtain a fourth image. The fourth image is then scaled, and the scaling coefficient is adjusted until the change in the image matching score between the fourth image and the second image under that coefficient is smaller than a first threshold; the scaling coefficient corresponding to that image matching score is taken as the target scaling coefficient. The first image is scaled with the target scaling coefficient to realize the correction processing between the two images.
Thirdly, the step of determining a target translation parameter and a target scaling factor between the first image and the second image comprises: selecting a plurality of scaling coefficients, and scaling the first image by each scaling coefficient; sliding the first image after being scaled based on each scaling factor on the second image, and calculating the score of image matching between the first image and the second image; and taking the scaling coefficient and the translation amount corresponding to the minimum score in the image matching scores as a target scaling coefficient and a target translation parameter.
When selecting the scaling coefficients, the first image scaled by each scaling coefficient may be slid on the second image, and the image matching score between the first image and the second image calculated; the scaling coefficient and translation amount corresponding to the minimum score among the image matching scores are taken as the target scaling coefficient and the target translation parameter, realizing the correction processing between the two images.
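As an illustration of the third variant, the joint search can reuse the sketches above: each candidate scaling coefficient is applied, features are recomputed, and the sliding match returns the best translation at that scale. The candidate scale set and the mask resizing are assumptions for illustration:

```python
def find_scale_and_translation(img1, img2, scales=(0.9, 0.95, 1.0, 1.05, 1.1)):
    best = (np.inf, 1.0, 0, 0)
    for scale in scales:
        scaled = cv2.resize(img1, None, fx=scale, fy=scale,
                            interpolation=cv2.INTER_LINEAR)
        dt1, om1, mask = template_features(scaled)
        # bring the mask to the query image's resolution before screening
        mask = cv2.resize(mask, (img2.shape[1], img2.shape[0]),
                          interpolation=cv2.INTER_NEAREST)
        dt2, om2 = query_features(img2, mask)
        score, dx, dy = find_translation(dt1, om1, dt2, om2)
        if score < best[0]:
            best = (score, scale, dx, dy)
    _, target_scale, dx, dy = best
    return target_scale, (dx, dy)   # target scaling coefficient, translation
```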
And step S106, carrying out alignment correction on the basic image pair based on each correction parameter to obtain a target image pair.
Specifically, according to the plurality of correction parameters obtained in the above steps, including the target translation parameter and the target scaling factor, the alignment correction is performed on the base image pair, so as to obtain a target image pair meeting the alignment requirement.
In the embodiment of the present invention, when the basic image pair is corrected by using the preset correction mode to obtain the plurality of correction parameters, the basic image pair may be scaled to the preset resolution, and subjected to the pyramid correction to obtain the plurality of correction parameters.
Although the chamfer distance matching is performed on the basis of image edge feature extraction, the edge pixel positions still need to be traversed, and the computation amount is huge. If the images to be aligned have high resolution and complex edge information, the real-time performance of image alignment is further affected, so that when the alignment correction is performed on the high-resolution basic image pair, the images need to be aligned from coarse to fine by using a multi-resolution dynamic correction method.
In the embodiment of the invention, in the process of correcting the image, dynamic correction is performed by first downsampling to a low resolution and then upsampling layer by layer; this is a pyramid algorithm, which reduces computation time. The running time at the lowest resolution is minimal, so a preliminary rough result is found there, and fine-tuning calculations are then performed based on the result at the lowest resolution, without recomputing everything. If the precision requirement is low, performing correction processing at low resolution can already meet the alignment precision requirement; if the precision requirement is high and cannot be met by correction processing at low resolution, the image is upsampled to a higher resolution for correction until the alignment precision requirement is met.
As an optional embodiment of the present invention, after performing alignment correction on the base image pair based on each correction parameter to obtain a target image pair, the image correction method further includes: determining the image alignment precision required by the terminal application at the target image resolution; step S1, judging whether the current alignment precision image corresponding to the target image pair reaches the required image alignment precision at the first image resolution; step S2, if it is determined that the current alignment precision image corresponding to the target image pair does not reach the required image alignment precision, adjusting the image resolution to a second image resolution, where the resolution value of the second image resolution is higher than the first image resolution; step S3, executing the step of correcting the basic image pair by adopting a preset correction mode to obtain a plurality of correction parameters; step S4, performing alignment correction on the base image pair based on each correction parameter to obtain a target image pair; steps S1 to S4 are repeatedly executed, ending when the current alignment precision image reaches the required image alignment precision.
For example, a visible light image and a depth map are obtained from the visible light camera and the depth camera respectively; the depth map is transformed into the visible light image coordinate system according to preset calibration parameters and adjusted to obtain an initially aligned depth map with the same resolution as the visible light image. The preliminarily aligned image pair P1 is scaled to a low resolution P for dynamic correction, correction parameters (dx_1, dy_1, scale_1) are obtained, and the input image pair is corrected to obtain a new image pair, recorded as the aligned image pair P2; the corrected aligned image pair is then dynamically corrected at resolution P × s (s is an amplification factor, generally 2), correction parameters (dx_2, dy_2, scale_2) are obtained, and the input image pair P2 is corrected to obtain a new image pair, recorded as the aligned image pair P3; the resolution used in the correction process is continuously increased and the dynamic correction process repeated until the alignment precision required by the application is met.
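The coarse-to-fine loop in this example can be sketched as follows; `apply_correction` (warping a pair with given parameters) and `precision_ok` (the application's accuracy test) are hypothetical helpers, and the accumulation of per-level parameters is simplified for illustration:

```python
def pyramid_align(pair, resolutions, precision_ok, apply_correction):
    """Dynamic correction from the lowest resolution upward."""
    params = []
    full_w = pair[0].shape[1]
    for res_w, res_h in resolutions:            # low to high: P, P*s, ...
        small = [cv2.resize(im, (res_w, res_h)) for im in pair]
        s_i, (dx_i, dy_i) = find_scale_and_translation(small[0], small[1])
        # express the translation found at this level at full resolution
        f = full_w / res_w
        pair = apply_correction(pair, s_i, dx_i * f, dy_i * f)
        params.append((dx_i, dy_i, s_i))
        if precision_ok(pair):                  # stop once the required
            break                               # alignment accuracy is met
    return pair, params
```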
The embodiment of the invention improves the accuracy of the image correction process: in various scenes, an error of 30 pixels at VGA resolution can be corrected to within 4 pixels, so the alignment accuracy is very high.
Optionally, the image correction method further includes: comparing the image resolution of the visible light image with the image resolution of the depth image to obtain the minimum of the two resolutions; calculating a correction-count threshold based on the image resolution obtained from the comparison and the initially set maximum correction-processing resolution; and in the alignment correction process, stopping correction processing if the number of image corrections reaches the correction-count threshold.
The resolution of the initial alignment and the number of dynamic corrections required may be determined based on the resolution of the input image. For example, let the smaller image resolution of the two images (depth image and visible light image) be Tw × Th; the visible light image resolution is usually much greater than the depth map resolution. The maximum resolution for initial alignment is set to Mw × Mh (usually Mw is 320), and the number of dynamic corrections t is calculated as:

t = ⌈log_s(Sa)⌉ + 1, where Sa = Tw/Mw,

⌈·⌉ denotes rounding up, and s denotes the single magnification, usually 2. From the number of dynamic corrections t, the initial width m0 = Tw/s^(t−1) is obtained. The resolution pyramid of the entire alignment process can be represented by the following formulas:

mn = Tw/s^(t−1−n), n = 0, 1, 2, …, t−1;
nn = Th/s^(t−1−n), n = 0, 1, 2, …, t−1.
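A quick numeric check of this schedule, under assumed input values (Tw × Th = 640 × 480, Mw = 320, s = 2):

```python
import math

Tw, Th = 640, 480        # smaller of the two input resolutions (assumed)
Mw = 320                 # maximum width for the initial alignment
s = 2                    # single magnification per level

Sa = Tw / Mw                             # = 2.0
t = math.ceil(math.log(Sa, s)) + 1       # = 2 dynamic corrections
widths  = [Tw / s ** (t - 1 - n) for n in range(t)]   # mn: [320.0, 640.0]
heights = [Th / s ** (t - 1 - n) for n in range(t)]   # nn: [240.0, 480.0]
```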
If the result at a certain low resolution (after t′ upsampling steps) already meets the precision requirement, upsampling is not continued; t is the maximum number of dynamic corrections, the optimization process can be terminated early according to actual needs, and the number of sampling steps satisfies t′ ≤ t.
With the above embodiment, the correction parameters (or alignment parameters) can be calculated from a visible light image and a depth map (or a visible light image and an infrared image). The method is applicable to devices with a depth camera that provide only a visible light image and an infrared image, only a visible light image and a depth image, or all three of visible light, infrared and depth images. It also applies when the picture contents acquired by the two cameras are similar but differ greatly in texture, and when matching cannot be performed on feature points. Meanwhile, the embodiment of the invention can also improve alignment errors in images acquired by binocular equipment caused by OIS, drops, frame asynchronism, frame-rate differences and the like of the camera. The correction environment is very simple: image correction processing can be completed quickly without a specific environment or specific shot patterns, so that images satisfying the user can be obtained.
The invention is described below in connection with an alternative embodiment.
Example two
An embodiment of the present invention provides an image correction apparatus, which includes a plurality of implementation units corresponding to the implementation steps in the first embodiment.
Fig. 4 is a schematic diagram of an alternative image correction apparatus according to an embodiment of the present invention, as shown in fig. 4, the image correction apparatus may include: an acquisition unit 41, a first correction unit 43, a second correction unit 45, wherein,
an acquiring unit 41, configured to acquire a visible light image and a depth image captured of a target object, and form a basic image pair after transformation, where the basic image pair includes a first image and a second image;
a first correction unit 43, configured to perform correction processing on the basic image pair by using a preset correction mode to obtain a plurality of correction parameters;
and a second correction unit 45 for performing alignment correction on the base image pair based on each correction parameter to obtain a target image pair.
The image correction device may acquire the visible light image and the depth image captured of the target object through the acquisition unit 41 and transform them to form a basic image pair, where the basic image pair includes a first image and a second image; perform correction processing on the basic image pair through the first correction unit 43 in a preset correction mode to obtain a plurality of correction parameters; and perform alignment correction on the basic image pair through the second correction unit 45 based on each correction parameter to obtain the target image pair. In this embodiment, the alignment operation can be performed on images shot by various cameras, dynamic correction is realized, the correction environment is simple, and alignment correction can be completed using images shot by the equipment itself, thereby solving the technical problems in the related art that dynamic correction between two different cameras cannot be realized, adaptability to the environment is low, the image alignment effect is poor, and the user's interest in use is easily affected.
Optionally, the first correction unit includes: and the first correction module is used for scaling the basic image pair to a preset resolution ratio and carrying out pyramid correction processing to obtain a plurality of correction parameters.
Optionally, the obtaining unit includes: the first transformation module is used for transforming the depth image to an image coordinate system of the visible light image based on preset calibration parameters, and adjusting to obtain a preliminary alignment depth map with the same resolution as the visible light image, wherein the visible light image and the preliminary alignment depth map are combined to form a basic image pair, the first image is the visible light image, and the second image is the preliminary alignment depth map.
Optionally, the first correction unit further includes: a first determining module for determining a target translation parameter and a target scaling factor between the first image and the second image; a second determination module to determine a plurality of correction parameters based on the target translation parameter and the target scaling factor.
Optionally, the image correction apparatus further includes: the first processing unit is used for preprocessing the preliminary alignment depth map in the basic image pair to obtain a first image before correcting the basic image pair by adopting a preset correction mode to obtain a plurality of correction parameters; and the second processing unit is used for carrying out filtering processing on the visible light image in the basic image pair to obtain a second image.
Optionally, the first determining module includes: the first calculation module is used for calculating a target translation parameter of the first image relative to the second image, and translating the first image based on the target translation parameter to obtain a third image; the first scaling module is used for selecting a plurality of scaling coefficients, scaling the third image by each scaling coefficient and calculating the image matching score between the third image and the second image; and the second determining module is used for taking the scaling coefficient corresponding to the minimum score in the image matching scores as the target scaling coefficient.
Optionally, the first determining module further includes: the second calculation module is used for calculating a target translation parameter of the first image relative to the second image, and translating the first image based on the target translation parameter to obtain a fourth image; the second scaling module is used for selecting a plurality of scaling coefficients, scaling the fourth image by each scaling coefficient and calculating the image matching score between the fourth image and the second image; and a third determining module, configured to adjust the scaling factor until a change in a score in the image matching score is smaller than a first threshold, where a scaling factor corresponding to the image matching score is used as a target scaling factor.
Optionally, the first determining module further includes: the third scaling module is used for selecting a plurality of scaling coefficients and scaling the first image by each scaling coefficient; a third calculation module, configured to slide the first image scaled based on each scaling factor on the second image, and calculate a score of image matching between the first image and the second image; and the fourth determining module is used for taking the scaling coefficient and the translation amount corresponding to the minimum score in the image matching scores as the target scaling coefficient and the target translation parameter.
Optionally, the first processing unit includes: the first mapping module is used for mapping the depth value of each pixel point in the preliminary alignment depth map in the basic image pair to a preset pixel range; and/or the first adjusting module is used for adjusting the image contrast of the preliminary alignment depth map to obtain the first image.
Optionally, the first determining module further includes: the first extraction module is used for extracting image features of a first image to obtain a first feature subset, wherein the first feature subset comprises a first distance image, a first boundary directional diagram and first mask information; the second extraction module is used for extracting image features of a second image to obtain a second feature subset, wherein the second feature subset comprises a second distance image and a second boundary directional diagram; and a fourth calculation module, configured to calculate a target translation parameter of the first image relative to the second image based on the first feature subset and the second feature subset.
Optionally, the first extraction module includes: the first extraction submodule is used for extracting all boundary pixel points of each target object in the first image to obtain a first edge image; the first reverse color submodule is used for performing reverse color processing on the first edge image to obtain a second edge image; the second extraction submodule is used for extracting the outline of the first edge image to obtain a first outline array, and calculating the pixel point direction corresponding to each pixel point based on the first outline array to obtain a first outline direction array; the first transformation submodule is used for carrying out preset distance transformation processing on the second edge image based on a first preset distance threshold value to obtain a first distance image; the first calculation submodule is used for calculating a first boundary directional diagram corresponding to each target object boundary in the second edge image based on the first contour direction array; a first determination submodule for determining a first subset of features based on the first range image and the first boundary directional diagram.
Optionally, the first transformation submodule includes: the second determining submodule is used for determining first mask information based on the first preset distance threshold, wherein the first mask information is used for shielding partial edge information in the second image; an adding submodule for adding the first mask information to the first feature subset.
Optionally, the second extraction module includes: the second extraction submodule is used for extracting all boundary pixel points of each target object in the second image to obtain a third edge image; the deleting submodule is used for deleting the outline in the third edge image by adopting the first mask information; the second inverse color submodule is used for performing inverse color processing on the deleted third edge image to obtain a fourth edge image; the second calculation submodule is used for extracting the contour of the fourth edge image to obtain a second contour array, and calculating the pixel point direction corresponding to each pixel point based on the second contour array to obtain a second contour direction array; the second transformation submodule is used for carrying out preset distance transformation processing on the fourth edge image based on a second preset distance threshold value to obtain a second distance image; the third calculation submodule is used for calculating a second boundary directional diagram corresponding to each target object boundary in the fourth edge image based on the second contour direction array; and the third determining submodule is used for obtaining a second feature subset based on the second distance image and the second boundary directional diagram.
Optionally, the fourth calculating module includes: the third extraction submodule is used for extracting contour pixel points with the pixel distance smaller than a first distance threshold value in the first distance image and the second distance image by adopting a first judgment condition to obtain a first contour pixel point set participating in image matching; the fourth extraction submodule is used for extracting contour pixel points of which the pixel distances between the first boundary directional diagram and the second boundary directional diagram are smaller than a second distance threshold value by adopting a second judgment condition to obtain a second contour pixel point set participating in image matching; the fifth determining submodule is used for determining a chamfer distance score, a direction diagram distance and an image adjusting factor between the first image and the second image based on the first contour pixel point set and the second contour pixel point set, wherein the image adjusting factor is used for adjusting the chamfer distance score and the directional diagram distance proportion; the fourth calculation submodule is used for sliding the second image on the first image, inputting the chamfer distance score, the direction graph distance and the image adjustment factor into the first preset formula and calculating the image sliding score; a sixth determining submodule, configured to determine a target sliding position corresponding to a minimum score among all the image sliding scores; and the seventh determining submodule is used for determining the target translation parameter based on the target sliding position.
Optionally, the first correction module includes: the first acquisition sub-module is used for acquiring an alignment precision value applied by a terminal, and determining a plurality of correction resolutions based on the alignment precision value and the resolution of the base image pair, wherein the plurality of correction resolutions at least comprise: presetting resolution, wherein the preset resolution is the minimum resolution in the plurality of correction resolutions; and the first correction submodule is used for scaling the basic image pair to a preset resolution ratio and carrying out pyramid correction processing until the alignment precision value is met to obtain a plurality of correction parameters.
Optionally, the image correction apparatus further includes: a determining unit for determining the image alignment precision required by the terminal application at the target image resolution; a first determining unit for executing step S1, judging whether the current alignment precision image corresponding to the target image pair reaches the required image alignment precision at the first image resolution; a first adjusting unit for executing step S2, adjusting the image resolution to a second image resolution if it is determined that the current alignment precision image corresponding to the target image pair does not reach the required image alignment precision, where the resolution value of the second image resolution is higher than the first image resolution; a first execution unit for executing step S3, in which the basic image pair is corrected in a preset correction mode to obtain a plurality of correction parameters; and a second execution unit for executing step S4, in which alignment correction is performed on the base image pair based on each correction parameter to obtain a target image pair; steps S1 to S4 are repeatedly executed, ending when the current alignment precision image reaches the required image alignment precision.
Optionally, the image correction apparatus further includes: a comparison unit for comparing the image resolution of the visible light image with the image resolution of the depth image to obtain the minimum of the two resolutions; a calculation unit for calculating a correction-count threshold based on the image resolution obtained from the comparison and the initially set maximum correction-processing resolution; and a stopping unit for stopping the correction processing if, during the alignment correction process, the number of image corrections reaches the correction-count threshold.
The image correction apparatus may further include a processor and a memory, the acquiring unit 41, the first correcting unit 43, the second correcting unit 45, and the like are stored in the memory as program units, and the processor executes the program units stored in the memory to implement corresponding functions.
The processor includes a kernel, and the kernel calls the corresponding program unit from the memory. One or more kernels can be set, and the kernel parameters are adjusted to perform alignment correction on the basic image pair based on each correction parameter, so as to obtain a target image pair.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or nonvolatile memory, such as read-only memory (ROM) or flash memory (flash RAM); the memory includes at least one memory chip.
According to another aspect of the embodiments of the present invention, there is also provided an image correction system including: a first image capturing means for taking a visible light image of a target object; a second image capturing means for taking a depth image of the target object; a correcting device for transforming the visible light image and the depth image taken of the target object to form a basic image pair, where the basic image pair includes a first image and a second image, correcting the basic image pair in a preset correction mode to obtain a plurality of correction parameters, and performing alignment correction on the basic image pair based on each correction parameter to obtain a target image pair; and a result output device for outputting the aligned target image pair to a preset terminal display interface.
According to another aspect of the embodiments of the present invention, there is also provided an electronic device, including: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform the image correction method of any one of the above via execution of the executable instructions.
According to another aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium, which includes a stored computer program, wherein when the computer program runs, the apparatus where the computer-readable storage medium is located is controlled to execute any one of the image correction methods.
The present application further provides a computer program product adapted to perform a program for initializing the following method steps when executed on a data processing device: acquiring a visible light image and a depth image which are shot for a target object, and forming a basic image pair after transformation, wherein the basic image pair comprises a first image and a second image; correcting the basic image pair by adopting a preset correction mode to obtain a plurality of correction parameters; and carrying out alignment correction on the basic image pair based on each correction parameter to obtain a target image pair.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative; for example, the above division of the units may be a logical division, and in actual implementation there may be another division: multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the shown or discussed coupling or direct coupling or communication connection between components may be an indirect coupling or communication connection through some interfaces, units or modules, and may be electrical or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit may be stored in a computer-readable storage medium if it is implemented in the form of a software functional unit and sold or used as a separate product. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the above methods according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, and other media capable of storing program code.
The foregoing is only a preferred embodiment of the present invention. It should be noted that, for those skilled in the art, various modifications and improvements can be made without departing from the principle of the present invention, and these modifications and improvements should also be regarded as falling within the protection scope of the present invention.

Claims (21)

1. An image correction method, characterized by comprising:
acquiring a visible light image and a depth image which are shot for a target object, and forming a basic image pair after transformation, wherein the basic image pair comprises a first image and a second image;
correcting the basic image pair by adopting a preset correction mode to obtain a plurality of correction parameters;
and carrying out alignment correction on the basic image pair based on each correction parameter to obtain a target image pair.
2. The image correction method according to claim 1, wherein the step of performing correction processing on the base image pair in a preset correction mode to obtain a plurality of correction parameters comprises:
and zooming the basic image pair to a preset resolution ratio, and carrying out pyramid correction processing to obtain the plurality of correction parameters.
3. The image correction method according to claim 1, wherein the step of obtaining the visible light image and the depth image of the target object, and transforming them to form a base image pair includes:
based on preset calibration parameters, transforming the depth image to an image coordinate system of the visible light image, and adjusting to obtain a preliminary alignment depth map having the same resolution as the visible light image, wherein the visible light image and the preliminary alignment depth map are combined to form the basic image pair, the first image is the visible light image, and the second image is the preliminary alignment depth map.
4. The image correction method according to claim 1, wherein the step of performing correction processing on the base image pair in a preset correction mode to obtain a plurality of correction parameters further comprises:
determining a target translation parameter and a target scaling factor between the first image and the second image;
determining a plurality of correction parameters based on the target translation parameter and the target scaling factor.
5. The image correction method according to claim 1, wherein before performing the correction processing on the base image pair using a preset correction mode to obtain a plurality of correction parameters, the image correction method further comprises:
preprocessing the preliminary alignment depth map in the basic image pair to obtain the first image;
and filtering the visible light image in the basic image pair to obtain the second image.
6. The image correction method according to claim 4, wherein the step of determining the target translation parameter and the target scaling factor between the first image and the second image comprises:
calculating the target translation parameter of the first image relative to the second image, and translating the first image based on the target translation parameter to obtain a third image;
selecting a plurality of scaling coefficients, scaling the third image by each scaling coefficient respectively, and calculating an image matching score between the third image and the second image;
and taking the scaling coefficient corresponding to the minimum score in the image matching scores as a target scaling coefficient.
7. The image correction method according to claim 4, wherein the step of determining the target translation parameter and the target scaling factor between the first image and the second image comprises:
calculating the target translation parameter of the first image relative to the second image, and translating the first image based on the target translation parameter to obtain a fourth image;
selecting a plurality of scaling coefficients, scaling the fourth image by each scaling coefficient respectively, and calculating an image matching score between the fourth image and the second image;
and adjusting the scaling coefficient until the change of the score in the image matching score is smaller than a first threshold value, and taking the scaling coefficient corresponding to the image matching score as a target scaling coefficient.
8. The image correction method according to claim 4, wherein the step of determining the target translation parameter and the target scaling factor between the first image and the second image comprises:
selecting a plurality of scaling coefficients, and scaling the first image by each scaling coefficient respectively;
sliding the first image after being scaled based on each scaling factor on the second image, and calculating a score of image matching between the first image and the second image;
and taking the scaling coefficient and the translation amount corresponding to the minimum score in the image matching scores as the target scaling coefficient and the target translation parameter.
9. The image correction method according to claim 5, wherein the step of preprocessing the preliminary alignment depth map in the base image pair to obtain the first image comprises:
mapping the depth value of each pixel point in the preliminary alignment depth map in the basic image pair to a preset pixel range; and/or,
and adjusting the image contrast of the preliminary alignment depth map to obtain the first image.
10. The image correction method according to claim 4, wherein the step of determining the target translation parameter and the target scaling factor between the first image and the second image comprises:
extracting image features of the first image to obtain a first feature subset, wherein the first feature subset comprises a first distance image, a first boundary directional diagram and first mask information;
extracting image features of the second image to obtain a second feature subset, wherein the second feature subset comprises a second distance image and a second boundary directional diagram;
based on the first subset of features and the second subset of features, a target translation parameter of the first image relative to the second image is calculated.
11. The method according to claim 10, wherein the step of extracting the image feature of the first image to obtain a first feature subset comprises:
extracting all boundary pixel points of each target object in the first image to obtain a first edge image;
carrying out reverse color processing on the first edge image to obtain a second edge image;
extracting contours from the first edge image to obtain a first contour array, and calculating a pixel point direction corresponding to each pixel point based on the first contour array to obtain a first contour direction array;
based on a first preset distance threshold value, carrying out preset distance transformation processing on the second edge image to obtain a first distance image;
calculating the first boundary directional diagram corresponding to each target object boundary in the second edge image based on the first contour direction array;
determining the first subset of features based on the first range image and the first boundary orientation map.
12. The image correction method according to claim 11, wherein the step of performing a preset distance transform process on the second edge image based on a first preset distance threshold to obtain the first distance image includes:
determining the first mask information based on the first preset distance threshold, wherein the first mask information is used for shielding partial edge information in the second image;
adding the first mask information to the first subset of features.
13. The method according to claim 12, wherein the step of extracting the image feature of the second image to obtain a second feature subset comprises:
extracting all boundary pixel points of each target object in the second image to obtain a third edge image;
deleting the outline in the third edge image by adopting the first mask information;
performing reverse color processing on the third edge image subjected to the deleting processing to obtain a fourth edge image;
extracting contours from the fourth edge image to obtain a second contour array, and calculating a pixel point direction corresponding to each pixel point based on the second contour array to obtain a second contour direction array;
performing preset distance transformation processing on the fourth edge image based on a second preset distance threshold value to obtain a second distance image;
calculating the second boundary directional diagram corresponding to each target object boundary in the fourth edge image based on the second contour direction array;
and obtaining the second feature subset based on the second distance image and the second boundary directional diagram.
14. The image correction method according to claim 10, wherein the step of calculating the target translation parameter of the first image relative to the second image based on the first feature subset and the second feature subset comprises:
extracting contour pixel points with the pixel distance smaller than a first distance threshold value in the first distance image and the second distance image by adopting a first judgment condition to obtain a first contour pixel point set participating in image matching;
extracting contour pixel points with the pixel distance smaller than a second distance threshold value in the first boundary directional diagram and the second boundary directional diagram by adopting a second judgment condition to obtain a second contour pixel point set participating in image matching;
determining a chamfer distance score, a directional diagram distance and an image adjustment factor between the first image and the second image based on the first contour pixel point set and the second contour pixel point set, wherein the image adjustment factor is used for adjusting the chamfer distance score and the directional diagram distance proportion;
sliding the second image on the first image, inputting the chamfer distance score, the direction diagram distance and the image adjustment factor into a first preset formula, and calculating an image sliding score;
determining a target sliding position corresponding to the minimum score in all image sliding scores;
based on the target sliding position, a target translation parameter is determined.
15. The method of claim 2, wherein the step of scaling the base image pair to a predetermined resolution and performing pyramid correction to obtain the plurality of correction parameters comprises:
acquiring an alignment precision value applied by a terminal, and determining a plurality of correction resolutions based on the alignment precision value and the resolution of the base image pair, wherein the plurality of correction resolutions at least comprise: a preset resolution which is the minimum resolution of the plurality of correction resolutions;
and scaling the basic image pair to the preset resolution, and carrying out pyramid correction processing until the alignment precision value is met to obtain the correction parameters.
16. The image correction method according to claim 15, characterized in that the image correction method further comprises:
determining the image alignment requirement precision of the terminal application under the target image resolution;
step S1, determining whether the current alignment accuracy image corresponding to the target image pair meets the image alignment requirement accuracy at the first image resolution;
step S2, if it is determined that the current alignment accuracy image corresponding to the target image pair does not reach the required image alignment accuracy, adjusting the image resolution to a second image resolution, where a resolution value of the second image resolution is higher than the first image resolution;
step S3, performing a correction process on the basic image pair using a preset correction mode to obtain a plurality of correction parameters;
step S4, performing alignment correction on the base image pair based on each correction parameter to obtain a target image pair;
and repeatedly executing the steps S1 to S4 until the current alignment precision image reaches the image alignment requirement precision.
17. The image correction method according to claim 1, characterized in that the image correction method further comprises:
comparing the image resolution of the visible light image with the image resolution of the depth image to obtain the minimum of the two resolutions;
calculating a correction-count threshold based on the image resolution obtained from the comparison and the initially set maximum correction-processing resolution;
and in the alignment correction process, stopping correction processing if the number of image corrections reaches the correction-count threshold.
18. An image correction apparatus characterized by comprising:
an acquisition unit, configured to acquire a visible light image and a depth image captured of a target object, and form a basic image pair after transformation, wherein the basic image pair comprises a first image and a second image;
the first correction unit is used for correcting the basic image pair by adopting a preset correction mode to obtain a plurality of correction parameters;
and the second correction unit is used for carrying out alignment correction on the basic image pair based on each correction parameter to obtain a target image pair.
19. An image correction system, comprising:
a first image capturing means for taking a visible light image of a target object;
a second image capturing means for taking a depth image of the target object;
the correcting device is used for transforming the visible light image and the depth image shot for the target object to form a basic image pair, wherein the basic image pair comprises a first image and a second image; correcting the basic image pair by adopting a preset correction mode to obtain a plurality of correction parameters; and carrying out alignment correction on the basic image pair based on each correction parameter to obtain a target image pair;
and the result output device is used for outputting the aligned target image pair to a preset terminal display interface.
20. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the image correction method of any of claims 1 to 17 via execution of the executable instructions.
21. A computer-readable storage medium, comprising a stored computer program, wherein when the computer program is run, the computer-readable storage medium controls an apparatus to execute the image correction method according to any one of claims 1 to 17.
CN202011567624.3A 2020-12-25 2020-12-25 Image correction method, device and system and electronic equipment Pending CN114693760A (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202011567624.3A CN114693760A (en) 2020-12-25 2020-12-25 Image correction method, device and system and electronic equipment
PCT/CN2021/141355 WO2022135588A1 (en) 2020-12-25 2021-12-24 Image correction method, apparatus and system, and electronic device
KR1020237021758A KR20230110618A (en) 2020-12-25 2021-12-24 Image correction method, device and system, electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011567624.3A CN114693760A (en) 2020-12-25 2020-12-25 Image correction method, device and system and electronic equipment

Publications (1)

Publication Number Publication Date
CN114693760A (en) 2022-07-01

Family

ID=82130825

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011567624.3A Pending CN114693760A (en) 2020-12-25 2020-12-25 Image correction method, device and system and electronic equipment

Country Status (3)

Country Link
KR (1) KR20230110618A (en)
CN (1) CN114693760A (en)
WO (1) WO2022135588A1 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115293971B (en) * 2022-09-16 2023-02-28 荣耀终端有限公司 Image splicing method and device
CN115861088B (en) * 2022-10-20 2023-06-20 国科天成科技股份有限公司 Real-time correction method and system for non-uniformity drift of infrared camera
CN117689813B (en) * 2023-12-08 2024-08-16 华北电力大学(保定) Infrared three-dimensional modeling method and system for high-precision power transformer of transformer substation
CN118015677B (en) * 2024-01-09 2024-07-16 深圳市中研安创科技发展有限公司 Dithering repair system for hand-held face recognition terminal
CN117670880B (en) * 2024-01-31 2024-05-07 中成空间(深圳)智能技术有限公司 Detection and correction method and system for flexible photovoltaic cells
CN118368527B (en) * 2024-06-20 2024-08-16 青岛珞宾通信有限公司 Multi-camera panoramic camera image calibration method and system
CN118657672A (en) * 2024-08-19 2024-09-17 阿米华晟数据科技(江苏)有限公司 Method and device for fusing double-camera visible light and infrared images

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107852456B (en) * 2015-04-23 2020-06-26 富士胶片株式会社 Image processing device, imaging device, image processing method, and program
CN109035193A (en) * 2018-08-29 2018-12-18 成都臻识科技发展有限公司 A kind of image processing method and imaging processing system based on binocular solid camera
CN111757086A (en) * 2019-03-28 2020-10-09 杭州海康威视数字技术股份有限公司 Active binocular camera, RGB-D image determination method and device
CN111741281B (en) * 2020-06-30 2022-10-21 Oppo广东移动通信有限公司 Image processing method, terminal and storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114842003A (en) * 2022-07-04 2022-08-02 杭州健培科技有限公司 Medical image follow-up target pairing method, device and application
CN114842003B (en) * 2022-07-04 2022-11-01 杭州健培科技有限公司 Medical image follow-up target pairing method, device and application
WO2024087982A1 (en) * 2022-10-28 2024-05-02 华为技术有限公司 Image processing method and electronic device

Also Published As

Publication number Publication date
KR20230110618A (en) 2023-07-24
WO2022135588A1 (en) 2022-06-30

Similar Documents

Publication Publication Date Title
CN114693760A (en) Image correction method, device and system and electronic equipment
CN110248096B (en) Focusing method and device, electronic equipment and computer readable storage medium
CN107705333B (en) Space positioning method and device based on binocular camera
CN105374019B (en) A kind of more depth map fusion methods and device
WO2019085792A1 (en) Image processing method and device, readable storage medium and electronic device
US9251589B2 (en) Depth measurement apparatus, image pickup apparatus, and depth measurement program
US10304164B2 (en) Image processing apparatus, image processing method, and storage medium for performing lighting processing for image data
CN107948517B (en) Preview picture blurring processing method, device and equipment
CN107316326B (en) Edge-based disparity map calculation method and device applied to binocular stereo vision
JPWO2008108071A1 (en) Image processing apparatus and method, image processing program, and image processor
CN106952247B (en) Double-camera terminal and image processing method and system thereof
KR20220017697A (en) calibration method and apparatus among mutiple sensors
JP7156624B2 (en) Depth map filtering device, depth map filtering method and program
JP2020095621A (en) Image processing device and image processing method
CN110188640B (en) Face recognition method, face recognition device, server and computer readable medium
CN116051736A (en) Three-dimensional reconstruction method, device, edge equipment and storage medium
US8472756B2 (en) Method for producing high resolution image
JP2015207090A (en) Image processor, and control method thereof
CN114119701A (en) Image processing method and device
CN117058183A (en) Image processing method and device based on double cameras, electronic equipment and storage medium
CN117152330B (en) Point cloud 3D model mapping method and device based on deep learning
CN111630569B (en) Binocular matching method, visual imaging device and device with storage function
CN117522803A (en) Bridge component accurate positioning method based on binocular vision and target detection
CN112102347A (en) Step detection and single-stage step height estimation method based on binocular vision
JP7491830B2 (en) Apparatus, method and program for extracting silhouette of subject

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination