CN113793266A - Multi-view machine vision image splicing method, system and storage medium - Google Patents
Multi-view machine vision image splicing method, system and storage medium
- Publication number
- CN113793266A (application CN202111089334.7A)
- Authority
- CN
- China
- Prior art keywords
- image
- parallax images
- images
- feature points
- points
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
Abstract
The invention discloses a multi-view machine vision image splicing method, device and storage medium, wherein the method comprises the following steps: collecting a plurality of groups of parallax images and preprocessing them; performing feature point detection and matching on the preprocessed groups of parallax images to obtain a plurality of matched feature points, and screening the matched feature points to obtain a plurality of candidate homography matrices; selecting, among the candidate homography matrices, the homography matrix whose corresponding interior points are relatively divergent as the optimized optimal homography matrix for parallax image registration; after quasi-densification of the registration errors of the sparse feature points, performing error compensation on the overlapping regions of the preprocessed groups of parallax images; and seamlessly splicing the groups of parallax images with an optimized suture line searching method. By selecting the optimal homography matrix through this optimization and splicing the groups of parallax images seamlessly with the optimized suture line searching method, the problem of poor image splicing effect is alleviated.
Description
Technical Field
The invention relates to the technical field of image splicing, and in particular to a multi-view machine vision image splicing method, device and storage medium.
Background
In recent years, image stitching technology has become an important research subject in computer vision, digital image processing, computer graphics and related fields. During image acquisition, a standard lens is limited by its field angle and cannot capture a scene image with a large field of view in a single shot. Choosing a short-focus lens captures more scene information but sacrifices image resolution; choosing a long-focus lens preserves resolution but cannot capture the complete scene, creating a dilemma. Image stitching technology resolves this trade-off and yields a wide-field, high-resolution scene image. Owing to these advantages, image stitching can be widely applied in machine vision detection, optical remote sensing, intelligent monitoring, virtual reality, medical imaging and many other fields.
The task of image stitching is to align multiple overlapping images in one global image frame to generate a seamless high-resolution panorama. In recent years, image stitching algorithms have been applied widely and are closely related to everyday life: panoramic cameras in smartphones, stitching of multiple camera views in the security field, and view assistance for intelligent vehicles.
In the field of remote sensing, imaging is limited by flight altitude, the focal length of the airborne camera and the field angle, so complete target image information cannot be acquired in one shot. Combined with image stitching technology, remote sensing can match and splice two or more adjacent images of the same area that share partially overlapping regions. After stitching is completed, a wide-field image of the area is accurately presented. Remote sensing of this kind is widely applied in military fields such as battlefield reconnaissance and in civil fields such as disaster monitoring.
In the field of intelligent monitoring, real-time monitoring for automatic driving and navigation is considered one of the most representative applications. Because the image information acquired by monocular vision is limited and cannot meet the requirement for wide-field information in vehicle visual navigation, binocular vision has clear advantages in field of view. By simulating binocular imaging to acquire image information and applying multi-view image stitching technology, the wide-baseline binocular images are stitched in real time after acquisition to obtain wide-field scene information in front of the vehicle. This resolves the limited field of view of the visual navigation system and provides solid technical support for automatic driving.
In the field of virtual reality, image stitching is used to obtain a wide-field or panoramic image for virtual reality scenes, giving the user an immersive visual experience. In a virtual reality system, a three-dimensional model is built by acquiring the depth of the panoramic image and the three-dimensional information of the scene, so that the three-dimensional scene is faithfully restored.
In the field of medical image processing, limited by the small field of view of a microscope or ultrasound probe, physicians cannot obtain a full picture of the pathological site and therefore cannot make a correct diagnosis based on the complete structural features. Medical microscopic image stitching accurately and effectively solves the problem that the small field of view under a high-power microscope restricts observation and prevents the overall structure of tissue cells from being seen clearly. Medical image stitching can also be applied in systems such as remote consultation. By the same principle, in industrial production, image stitching can be used for accurate data measurement of large manufactured parts: narrow-field images of local regions of a part are acquired with a standard lens and stitched into a wide-field image of the whole part.
Image stitching technology is widely applied in many fields, and wide-field imaging has become a future development direction and an inevitable trend. To acquire wide-field image information, a series of images of the same scene with overlapping areas can be captured with several standard lenses; after image preprocessing, registration and fusion splicing, a large, wide-angle, low-distortion, high-resolution image without obvious ghosting or seams is obtained.
Most existing mature splicing methods are suited to images with no parallax or weak parallax. When the parallax between the images to be spliced is large, the splicing result shows obvious ghosting, serious object mis-cutting and similar artifacts.
Thus, the prior art has yet to be improved and enhanced.
Disclosure of Invention
The invention mainly aims to provide a multi-view machine vision image splicing method, device and storage medium, to solve the problem that, in the prior art, when the parallax of the images to be spliced is large, the splicing result shows obvious ghosting or serious object mis-cutting, so that the image splicing effect is poor.
In order to achieve the purpose, the invention adopts the following technical scheme:
a multi-view machine vision image stitching method comprises the following steps:
acquiring a plurality of groups of parallax images, and preprocessing the plurality of groups of parallax images;
carrying out feature point detection and matching on the preprocessed multiple groups of parallax images to obtain a plurality of matched feature points, and screening the matched feature points to obtain a plurality of candidate homography matrixes;
selecting a homography matrix with relatively divergent corresponding interior points in the candidate homography matrices as an optimized optimal homography matrix in parallax image registration, wherein the interior points are formed by the matched feature points;
after quasi-densification is carried out on the registration error of the sparse feature points, error compensation is carried out on the overlapping area in the plurality of groups of preprocessed parallax images;
and seamlessly splicing the multiple groups of parallax images by the optimized suture line searching method.
In the method for stitching the multi-view machine vision images, the steps of detecting and matching feature points of the preprocessed multiple groups of parallax images to obtain a plurality of matched feature points, and screening the plurality of matched feature points to obtain a plurality of candidate homography matrixes specifically include:
detecting and matching the feature points in the plurality of groups of preprocessed parallax images by adopting an SIFT algorithm to obtain a plurality of matched feature points;
and screening the plurality of matched feature points by using a RANSAC algorithm to obtain a plurality of candidate homography matrixes.
In the method for stitching the multi-view machine vision images, the step of selecting the homography matrix with relatively divergent corresponding interior points from the multiple candidate homography matrices as the optimized optimal homography matrix in the parallax image registration specifically includes:
and selecting a homography matrix corresponding to the relative divergence of the inner points in the candidate homography matrixes as an optimized optimal homography matrix in the parallax image registration according to the distribution condition of the inner point set.
In the method for stitching the multi-view machine vision images, after quasi-densifying the registration error of the sparse feature points, the step of performing error compensation on the overlapping area in the plurality of groups of preprocessed parallax images specifically comprises:
and performing quasi-densification on the registration error of the sparse feature points in the interior point set by adopting an interpolation algorithm, and performing error compensation on the overlapped area in the plurality of groups of preprocessed parallax images.
In the multi-view machine vision image stitching method, the step of seamlessly stitching the multiple groups of parallax images by the optimized suture line searching method specifically includes:
optimizing a suture searching method by taking the distance between the inner point and the suture as the characteristic of weight constraint;
and seamlessly splicing the multiple groups of parallax images by adopting an optimized suture line searching method.
In the multi-view machine vision image stitching method, the step of seamlessly stitching the multiple groups of parallax images by the optimized suture line searching method comprises the following steps:
equally dividing the overlapping area in the seamlessly spliced multiple groups of parallax images into at least two parts, and respectively calculating the gray value of image pixel points in the overlapping area of the at least two parts so as to improve a gradual-in and gradual-out fusion algorithm;
and processing the seamlessly spliced pictures by adopting an improved gradually-in and gradually-out fusion algorithm.
In the method for stitching the multi-view machine vision images, the steps of acquiring a plurality of groups of parallax images and preprocessing the plurality of groups of parallax images specifically comprise:
acquiring a plurality of groups of parallax images by adopting at least two binocular vision cameras;
and denoising, distortion correction and image brightness unified processing are carried out on the multiple groups of parallax images to obtain the preprocessed multiple groups of parallax images.
In the multi-view machine vision image stitching method, the step of detecting and matching the feature points in the preprocessed multiple groups of parallax images by adopting an SIFT algorithm to obtain multiple matched feature points specifically comprises the following steps:
detecting extreme points of a scale space, wherein the scale space is obtained by multi-scale feature simulation of the feature points;
positioning the position and the scale of the key point;
determining the direction of the key point;
generating a feature point descriptor according to the key points;
and searching two correctly matched feature points according to the similarity between the feature point descriptors in different images to obtain a plurality of matched feature points.
In addition, to achieve the above object, the present invention further provides a multi-view machine vision image stitching apparatus, including: the preprocessing module is used for acquiring a plurality of groups of parallax images and preprocessing the plurality of groups of parallax images; the feature point detection and matching module is used for carrying out feature point detection and matching on the preprocessed multiple groups of parallax images to obtain a plurality of matched feature points, and screening the plurality of matched feature points to obtain a plurality of candidate homography matrixes; a selecting module, configured to select a homography matrix in which corresponding interior points in the plurality of candidate homography matrices are relatively divergent, as an optimized optimal homography matrix in parallax image registration; and the error compensation module is used for performing error compensation on the overlapping regions in the plurality of groups of preprocessed parallax images after quasi-densification is performed on the registration errors of the sparse feature points.
In addition, to achieve the above object, the present invention further provides a computer readable storage medium, where a multi-view machine vision image stitching program is stored, and when executed by a processor, the multi-view machine vision image stitching program implements the steps of the multi-view machine vision image stitching method as described above.
Compared with the prior art, the multi-view machine vision image stitching method, device and storage medium provided by the invention comprise the following steps: collecting a plurality of groups of parallax images and preprocessing them; performing feature point detection and matching on the preprocessed groups of parallax images to obtain a plurality of matched feature points, and screening the matched feature points to obtain a plurality of candidate homography matrices; selecting, among the candidate homography matrices, the homography matrix whose corresponding interior points are relatively divergent as the optimized optimal homography matrix for parallax image registration; after quasi-densification of the registration errors of the sparse feature points, performing error compensation on the overlapping regions of the preprocessed groups of parallax images; and seamlessly splicing the groups of parallax images with an optimized suture line searching method. By selecting the optimal homography matrix through this optimization and splicing the groups of parallax images seamlessly with the optimized suture line searching method, the problem of poor image splicing effect is alleviated.
Drawings
FIG. 1 is a flow chart of conventional image stitching provided by the present invention;
FIG. 2 is a flow chart of a multi-view machine vision image stitching method provided by the present invention;
FIG. 3 is a diagram illustrating the effect of distortion correction provided by the present invention;
FIG. 4 is a flowchart of step S20 in the multi-view machine vision image stitching method provided by the present invention;
FIG. 5 is a flowchart of step S21 in the method for stitching multi-view machine vision images according to the present invention;
FIG. 6 is a comparison graph of local extremum detection and 26 neighboring points provided by the present invention;
FIG. 7 is a main direction selection diagram of feature points according to the present invention;
FIG. 8 is a diagram of the 128-dimensional feature point descriptor provided by the present invention;
FIG. 9 is a diagram illustrating the image registration effect in the system according to the present invention;
FIG. 10 is a diagram of an optimal projection plane selection provided by the present invention;
FIG. 11 is a code diagram of the improved RANSAC algorithm based candidate homography screening provided by the present invention;
FIG. 12 is a diagram illustrating the effect of optimizing image registration provided by the present invention;
FIG. 13 is a diagram of a dynamic programming suture lookup provided by the present invention;
FIG. 14 is a graph of color difference intensity after error compensation optimized alignment provided by the present invention;
FIG. 15 is a flowchart of step S50 in the method for stitching multi-view machine vision images according to the present invention;
fig. 16 is a block diagram of a multi-view machine vision image stitching apparatus provided by the present invention.
Detailed Description
In order to make the objects, technical solutions and effects of the present invention clearer and clearer, the present invention is further described in detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Image stitching is a technique for acquiring a high-resolution wide-view image by combining a plurality of images having overlapping regions. At present, image stitching has many related theories and algorithms, and the stitching process is basically consistent: the main steps are the two stages of image registration and image fusion. In the prior art, the following steps are generally adopted to stitch multiple groups of parallax images, and the flow is shown in fig. 1.
1. Image acquisition:
Image acquisition is the first step of the image splicing process, and the quality of the acquired images influences the quality of later splicing. Acquisition of scene images containing overlapping portions is now generally accomplished using two dedicated image acquisition devices. In order to obtain more reliable and effective image scene information, the following basic requirements should be followed.
(1) Experiments were performed using image acquisition tools with completely identical basic parameters.
(2) In the process of image acquisition, in order to ensure that the acquired images have a certain overlapping area, a certain included angle is required between two devices, and the acquisition devices are arranged on the same horizontal plane as much as possible.
(3) The difference between the imaging of the device is reduced by ensuring that the acquisition device is at a certain distance from the object to be photographed.
During acquisition, the camera lens should be kept from shifting, and the group of images should be captured under a concentric constraint as far as possible, so that the overlapping area between adjacent source images is no less than 30%; this ensures that enough feature points can be extracted from the overlapping area for registration.
2. Image preprocessing:
due to the influence and limitation of factors such as shooting equipment, scenes and the like, an ideal image which can be directly tested is difficult to acquire. Therefore, a preprocessing operation must be performed on the acquired images before the image stitching operation is performed. In recent years, preprocessing operations such as image denoising and the like on images acquired by monitoring equipment are generally completed by a background computer. Image preprocessing generally also includes operations such as image brightness uniformity, image projection, and image de-dithering.
The image preprocessing mainly comprises the operations of filtering and denoising, image distortion correction, image brightness unification and the like. The image denoising is mainly used for eliminating the interference of noise points in the image on the feature detection and matching.
Image distortion is unavoidable because optical lenses are the core components of a camera lens, so distortion correction is required. The degree of distortion depends on the precision of the optics: a high-precision standard lens shows relatively little distortion, an ordinary standard lens shows slightly more, and a wide-angle lens shows considerably more.
Image distortion mainly includes radial distortion and tangential distortion. In a real lens, tangential distortion is usually negligible. Radial distortion grows outward along the radius: the closer to the image edge, the larger the distortion and the more serious the loss of image information. If the lens exhibits radial distortion, it must be calibrated. The distortion parameters obtained through lens calibration apply to all images collected with that lens. Distortion correction removes the distortion in the image and yields a standard scene image, eliminating the interference of distortion with subsequent image processing.
When acquiring images, a binocular camera is generally adopted; because the two lenses are of the same model with identical parameters, only one of them needs to be calibrated. The camera lens is calibrated by the chessboard calibration method, using the calibration toolbox in MATLAB and a rigid chessboard calibration board. The detailed steps are as follows (an equivalent OpenCV code sketch is given after the steps):
(1) Prepare the checkerboard image, print it on A4 paper, and fix it to cardboard as the calibration board. Fix the lens, change the chessboard calibration board into different poses, and collect about 30 groups of images; after eliminating roughly 10 groups of abnormal images, 20 groups of calibration grid images remain.
(2) Open the Camera Calibration Toolbox in the MATLAB menu and import all the acquired images into the toolbox. After clicking the calibration button, the relevant lens parameters are calculated and generated; the parameters are then exported to a file for storage, giving the complete calibration result. Later, distortion correction of images acquired with this lens only requires the parameters in the file, without repeating the calibration.
(3) Open the file to inspect the lens parameters directly. The radial distortion parameters are saved in the RadialDistortion variable, while the TangentialDistortion variable is 0, indicating that the lens has no tangential distortion and verifying that tangential distortion can be ignored in the lens distortion model.
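For illustration, the same chessboard calibration flow can be sketched with OpenCV instead of the MATLAB toolbox; the board size (9 x 6 inner corners), the 25 mm square size and the image folder are assumptions, not values from the description:

import glob
import cv2
import numpy as np

# Chessboard calibration flow equivalent to the MATLAB steps above.
pattern, square = (9, 6), 25.0
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, img_pts, size = [], [], None
for path in glob.glob('calib/*.png'):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)
        size = gray.shape[::-1]

# dist holds [k1, k2, p1, p2, k3]; near-zero p1 and p2 confirm that
# tangential distortion is negligible, matching step (3) above.
rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)

The camera matrix K and the distortion coefficients dist can then be reused for every image captured with the same lens, exactly as with the parameter file saved from MATLAB.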
When the camera captures images, the lens exhibits vignetting (dark corners): owing to lens manufacture and other optical factors, the captured image is bright in the middle and dark at the edges. To keep the brightness of the picture basically consistent, brightness unification can be applied to the image to eliminate the influence of vignetting. In the stitching process, the overlapping parts of adjacent images generally lie on opposite sides of the vignetting fall-off, which makes their brightness differ. To ensure that the final panoramic image looks natural, the images collected by the different cameras need to undergo brightness unification separately.
3. Image registration: feature extraction and matching are performed on the images with overlapping regions using a local feature extraction algorithm (such as SIFT), and the transformation parameters are estimated from the matching relations among the feature points, thereby determining the corresponding projection transformation model from the target image to the reference image.
Image registration is the most critical part of the image stitching operation. There are many ways to achieve image registration; feature-based registration is currently the most common. This registration approach places certain requirements on the illumination, noise and so on of the images. According to the features selected, common registration algorithms can be classified into the following three categories:
(1) Registration based on feature points. Feature points are points that uniquely identify a particular position in the image and carry important information: they contain a large amount of information and show characteristics distinct from other pixels, or are points where the pixel value changes abruptly. An algorithm that matches images based on the information contained in these special points is a feature-point-based registration algorithm.
(2) Registration based on feature regions. A feature region is, in effect, an enlarged feature point: a single pixel is a feature point, while a collection of neighbouring pixels is called a region. A region gathers pixels with the same characteristics, and the region as a whole exhibits a certain common feature, so it can identify a specific position in the image. If regions containing the same feature are extracted from two images, the two images are judged to be positionally correlated, and the corresponding positions between the two matched images can be solved by fully exploiting the correspondence between the feature regions.
(3) Registration based on edge features. The edges of an image outline its content and are well suited to identifying its directional features. The edge information of images with the same content necessarily remains consistent, so a matching algorithm based on edge-feature association can realize image stitching well. During stitching with this algorithm, the edge data of the image must be stored in a specific data structure; however, since no suitable representation currently exists, the application range of this algorithm is limited.
4. Image fusion: in order to optimize the splicing effect and improve the quality of the stitched image, after image registration the overlapping areas of the different images are mapped into the same coordinate system through rigid transformation, and the differences between pixel points are eliminated. This step is usually the last link in the stitching of parallax-free or weak-parallax images, and it restores the scene information more faithfully.
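A weighted gradual-in/gradual-out fusion of an aligned overlap strip can be sketched as follows; this is a minimal illustration of the classic fade fusion, with a linear per-column weighting assumed (it is not the improved fusion algorithm of the claims):

import numpy as np

def feather_blend(left, right, overlap_width):
    # left, right: H x overlap_width x C crops of the registered images over the overlap.
    # The weight of the left image falls linearly from 1 to 0 across the overlap,
    # while the weight of the right image rises from 0 to 1.
    w = np.linspace(1.0, 0.0, overlap_width)[None, :, None]
    blended = w * left.astype(np.float32) + (1.0 - w) * right.astype(np.float32)
    return blended.astype(np.uint8)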
According to the multi-view machine vision image splicing method, device and storage medium of the invention, feature point detection and matching are performed on the preprocessed groups of parallax images to obtain a plurality of matched feature points, the optimal homography matrix is selected according to the distribution of the matched feature points, and finally the groups of parallax images are seamlessly spliced by the optimized suture line searching method. This effectively avoids obvious ghosting, serious object mis-cutting and similar artifacts when the parallax of the images to be spliced is large, and alleviates the problem of poor image splicing effect.
The following describes a design scheme of a multi-purpose machine vision image stitching method by using specific exemplary embodiments, and it should be noted that the following embodiments are only used for explaining technical solutions of the invention, and are not specifically limited:
referring to fig. 2, the method for stitching multiple machine vision images according to the present invention includes:
s10, collecting multiple groups of parallax images and preprocessing the multiple groups of parallax images;
s20, carrying out feature point detection and matching on the preprocessed multiple groups of parallax images to obtain a plurality of matched feature points, and screening the plurality of matched feature points to obtain a plurality of candidate homography matrixes;
s30, selecting a homography matrix with relatively divergent corresponding interior points in the candidate homography matrices as an optimized optimal homography matrix in parallax image registration, wherein the interior points are formed by the matched feature points;
s40, after quasi-densification is carried out on the registration error of the sparse feature points, error compensation is carried out on the overlapping area in the plurality of groups of preprocessed parallax images;
and S50, seamlessly splicing the multiple groups of parallax images by the optimized suture line searching method.
Specifically, in the image acquisition stage, multiple lenses are used to simulate parallax imaging conditions and acquire multiple groups of parallax images, and preprocessing such as filtering, denoising and image distortion correction is performed on the overlapping parts of the groups of parallax images. This effectively eliminates the interference of image noise with feature detection and matching and the interference of lens imaging distortion with the parallax images.
Secondly, in an image registration stage, feature point detection and matching are carried out on a plurality of groups of preprocessed parallax images to obtain a plurality of matched feature points, transformation parameters among the feature points are estimated by combining the matching relations among the plurality of matched feature points, and therefore a projection transformation model corresponding to a target image and a reference image is determined, namely the plurality of matched feature points are screened to obtain a plurality of candidate homography matrixes, wherein the homography matrixes are the projection transformation model.
Then, forming an interior point by the plurality of matched feature points, and selecting a homography matrix in which corresponding interior points in the plurality of candidate homography matrices are relatively diverged as an optimized optimal homography matrix in parallax image registration; after quasi-densification is carried out on the registration errors of the sparse feature points, error compensation is carried out on the overlapping regions in the multiple groups of preprocessed parallax images, so that image alignment is optimized, and the local alignment regions of the images are increased; and finally, seamlessly splicing the multiple groups of parallax images by an optimized suture line searching method, thereby completing image splicing.
According to the invention, the characteristic points of the preprocessed multiple groups of parallax images are detected and matched to obtain multiple matched characteristic points, homography matrixes with relatively divergent corresponding inner points in multiple candidate homography matrixes are selected, the optimal homography matrix is selected, and the multiple groups of parallax images are seamlessly spliced by adopting an optimized suture line searching method, so that the phenomena of obvious double images or serious object miscut and the like in the splicing effect when the parallax of the images to be spliced is large are effectively avoided, and the problem of poor image splicing effect is improved.
Further, the step of acquiring multiple groups of parallax images and preprocessing the multiple groups of parallax images specifically includes:
acquiring a plurality of groups of parallax images by adopting at least two binocular vision cameras;
and denoising, distortion correction and image brightness unified processing are carried out on the multiple groups of parallax images to obtain the preprocessed multiple groups of parallax images.
Specifically, multiple lenses first simulate parallax imaging conditions and acquire multiple groups of parallax images. The groups of parallax images are then denoised, which effectively reduces the influence of noise that may be present in digital images (for example when the images to be spliced are downloaded from other image data sets) on registration and splicing. Next, distortion correction is applied to the denoised groups of parallax images; as shown in fig. 3, the left side is the distorted source image and the right side is the corrected image, and a slight distortion of the window outlines is clearly visible around the building in the distorted source image. Finally, brightness unification is applied to the distortion-corrected groups of parallax images, giving the preprocessed groups of parallax images.
In this way, multiple groups of parallax images are collected with at least two binocular vision cameras, and denoising, distortion correction and brightness unification are applied to obtain the preprocessed groups of parallax images. This effectively reduces the influence of noise on registration and splicing, eliminates the influence of vignetting, and makes the final stitched image more natural with a better splicing effect.
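A minimal preprocessing sketch for one parallax image is given below; the Gaussian filter, the mean-brightness target and the use of OpenCV's undistort are assumptions, since the description only names the three operations of denoising, distortion correction and brightness unification:

import cv2

def preprocess(img, camera_matrix, dist_coeffs, target_mean=128.0):
    # Denoising: suppress noise points that would disturb feature detection.
    denoised = cv2.GaussianBlur(img, (3, 3), 0)
    # Distortion correction with the parameters obtained from lens calibration.
    undistorted = cv2.undistort(denoised, camera_matrix, dist_coeffs)
    # Brightness unification: scale the image toward a common mean gray level.
    gray_mean = cv2.cvtColor(undistorted, cv2.COLOR_BGR2GRAY).mean()
    gain = target_mean / max(gray_mean, 1e-6)
    return cv2.convertScaleAbs(undistorted, alpha=gain, beta=0)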
Further, referring to fig. 4, the step of performing feature point detection and matching on the preprocessed multiple groups of parallax images to obtain multiple matched feature points, and screening the multiple matched feature points to obtain multiple candidate homography matrices specifically includes:
s21, detecting and matching the feature points in the preprocessed multiple groups of parallax images by adopting an SIFT algorithm to obtain a plurality of matched feature points;
s22, screening the matched feature points by using a RANSAC algorithm to obtain a plurality of candidate homography matrixes.
Further, referring to fig. 5, the step of detecting and matching the feature points in the preprocessed multiple groups of parallax images by using the SIFT algorithm to obtain multiple matched feature points specifically includes:
s211, detecting extreme points of a scale space, wherein the scale space is obtained by multi-scale feature simulation of the feature points;
s212, positioning the positions and the scales of the key points;
s213, determining the direction of the key point;
s214, generating a feature point descriptor according to the key points;
s215, two correctly matched feature points are searched according to the similarity between the feature point descriptors in different images, and a plurality of matched feature points are obtained.
Specifically, after the multiple groups of parallax images are preprocessed, an image registration operation is performed, and first, feature points in the preprocessed multiple groups of parallax images are detected and matched by adopting an SIFT algorithm to obtain multiple matched feature points.
When SIFT feature point detection is carried out on feature points in a plurality of groups of preprocessed parallax images, the method specifically comprises the following steps:
Firstly: detect the extreme points of the scale space. The scale space aims to simulate the multi-scale features of the preprocessed groups of parallax images, and the SIFT algorithm uses the Difference of Gaussians (DoG) operator to detect candidate feature points that remain stable under scale scaling and rotation. A large scale captures the contour features of the image, while a small scale focuses more on the detail features. To detect stable key points in the scale space, a difference-of-Gaussians scale space is constructed, and each detected point is compared with its 26 surrounding points, as shown in fig. 6, to ensure that extreme points are extracted in both scale space and two-dimensional image space (a small code sketch of this 26-neighbour test is given after the four steps below).
Secondly: locate the position and scale of the key points. The position and scale of each key point are precisely located by fitting a three-dimensional quadratic function, reaching sub-pixel accuracy; at the same time, key points with low contrast and unstable edge-response points are discarded, which strengthens the stability of feature matching and improves noise resistance.
Thirdly: determine the direction of the key points. The direction of a SIFT feature point is determined by the gradient-direction distribution of its neighbourhood pixels; a neighbourhood window centred on the feature point is sampled, and the gradient directions of the pixels in the neighbourhood are accumulated in a histogram. The direction corresponding to the highest bin of the histogram is taken as the main direction of the feature point, and if another bin reaches 80% of the height of the highest bin, its direction is kept as an auxiliary direction, which improves the stability of the feature point. The calculation of the main direction is shown in fig. 7.
Fourthly: generate the feature point descriptor from the key points. The SIFT descriptor has good stability and distinctiveness, and describes the feature point with a group of vectors: a 16 x 16 neighbourhood around the feature point is divided into 4 x 4 sub-regions, each described by an 8-bin gradient-orientation histogram, so that the descriptor is a 128-dimensional vector, as shown in fig. 8.
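The 26-neighbour extremum test of the first step can be sketched as follows; representing the DoG pyramid as one (num_scales, H, W) array per octave is an assumption:

import numpy as np

def is_scale_space_extremum(dog_octave, s, y, x):
    # dog_octave: (num_scales, H, W) array of DoG responses for one octave.
    # The sample is compared with its 8 neighbours in the same scale and the
    # 9 neighbours in each of the two adjacent scales (26 points in total).
    cube = dog_octave[s - 1:s + 2, y - 1:y + 2, x - 1:x + 2]
    value = dog_octave[s, y, x]
    return value >= cube.max() or value <= cube.min()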
Then, SIFT feature point matching is performed between the groups of parallax images. Feature point matching benefits greatly from the distinctiveness and uniqueness of the feature point descriptors, and it involves two techniques: descriptor similarity measurement and matching search. After feature detection and extraction with the SIFT algorithm, the images to be registered yield a large number of sparse feature points; correctly matched pairs of feature points are found through the similarity of the descriptors between different images, giving a plurality of matched feature points.
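This detection-and-matching stage can be sketched with OpenCV as follows; the brute-force matcher and the 0.75 threshold of Lowe's ratio test are assumptions, since the description does not fix the similarity-search details:

import cv2

def sift_match(img_l, img_r, ratio=0.75):
    sift = cv2.SIFT_create()
    kp_l, des_l = sift.detectAndCompute(img_l, None)   # detection + 128-D descriptors
    kp_r, des_r = sift.detectAndCompute(img_r, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des_l, des_r, k=2)          # two nearest descriptors per point
    good = [m for m, n in knn if m.distance < ratio * n.distance]
    pts_l = [kp_l[m.queryIdx].pt for m in good]
    pts_r = [kp_r[m.trainIdx].pt for m in good]
    return pts_l, pts_r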
However, because of the repetitiveness and complexity of the content of the groups of parallax images, mismatches occur with a certain probability during the matching search, and a sufficient number of mismatched point pairs will affect the direct calculation of the image transformation parameters. It is therefore necessary to consider how to exclude the mismatched pairs and calculate the transformation parameters using only the exactly matched pairs of feature points.
Therefore, the RANSAC algorithm is used to screen the plurality of matched feature points and obtain a plurality of candidate homography matrices. The correctly matched feature point pairs are screened out from the mismatched points and used to calculate the image transformation parameters, i.e. to estimate a mathematical model function. In other words, the RANSAC algorithm estimates, from the matched feature points, a planar scene transformation model that fits the images to be registered, from which a plurality of candidate homography matrices can be obtained.
The RANSAC algorithm, the most effective and widely used mismatch-removal algorithm, estimates from a set of original data containing abnormal data the mathematical model function that the correct data should satisfy, rejects the abnormal points, and stores the correct matching points as the interior point set. The traditional method uses the RANSAC algorithm to screen out a single optimal homography matrix from the matched feature points: 4 pairs of matching points are randomly selected to calculate a planar scene image transformation model, the model is then recalculated and updated with the remaining matching points, and after many iterations the coordinate transformation model satisfied by the most matching point pairs, i.e. the optimal homography matrix, is obtained. Correspondingly, all matching points in the target image that satisfy this homography matrix are mapped by the two-dimensional projection transformation onto one plane in space, called the optimal projection plane. However, the transformation model screened by the RANSAC algorithm by maximizing the number of matching points has a serious disadvantage, because it considers neither how the feature points are distributed in the image nor whether the alignment of the image after the two-dimensional mapping transformation is optimal. If the feature point set is distributed too densely, then even if a large number of feature points are roughly aligned after registration, only the regions where the feature points are concentrated are aligned, and severe ghosting appears in most of the remaining regions.
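For reference, the conventional screening criticised here amounts to a single RANSAC-based homography estimate; a minimal OpenCV sketch is shown below, where the 3-pixel reprojection threshold is an assumption:

import cv2
import numpy as np

def ransac_homography(pts_l, pts_r, reproj_thresh=3.0):
    # pts_l, pts_r: matched (x, y) coordinates from the SIFT matching stage.
    src = np.float32(pts_l).reshape(-1, 1, 2)
    dst = np.float32(pts_r).reshape(-1, 1, 2)
    # The conventional criterion: keep the single model with the most inliers.
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, reproj_thresh)
    return H, inlier_mask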
The calculation of the homography matrix comprises the following steps:
In image registration, two adjacent images I_l and I_r have corresponding overlapping regions I_lo and I_ro. To measure the similarity of I_lo and I_ro in image content, i.e. S(I_lo, I_ro), the spatial transformation H between I_lo and I_ro is estimated from a series of feature point matching relations between them, so as to maximize S(H(I_lo), I_ro). The whole process of obtaining the optimal H is image registration, and this is an image registration algorithm based on regional similarity measurement.
Since the appearance of point-feature-based image registration algorithms, matching has mainly been searched adaptively according to the uniqueness of the point feature descriptors, and the RANSAC algorithm estimates the mathematical model of the image transformation, i.e. the projection transformation relation, from the matched point pairs; the corresponding mathematical model is the homography matrix.
The homography matrix is a 3 x 3 matrix, as shown by the formula:

H = [ h1 h2 h3 ; h4 h5 h6 ; h7 h8 1 ]

In the formula, h1–h8 refer to the 8 variables in the homography matrix H, i.e. its 8 degrees of freedom.
The 8 degrees of freedom of the homography matrix H mean that at least 4 pairs of feature points are required to solve the 8 variables. In actual image registration, the number of matched feature point pairs is far more than 4, so continuous fitting and optimization directly improve the accuracy of the calculated homography matrix, even to the sub-pixel level. The optimal homography matrix is calculated from the matched coordinates (x_l, y_l) and (x_r, y_r), as shown by the formula:

s · [x_r, y_r, 1]^T = H · [x_l, y_l, 1]^T

In the formula, (x_l, y_l) are the coordinates of a matching point in the image to be registered, (x_r, y_r) are the coordinates of the corresponding matching point in the reference image, and s is a scale factor.
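As a purely illustrative example (the numeric values below are assumptions), a point of the image to be registered is mapped through H in homogeneous coordinates and then divided by the scale factor:

import numpy as np

H = np.array([[1.02, 0.01, 15.3],
              [0.00, 0.98,  4.7],
              [1e-5, 2e-5,  1.0]])
p_l = np.array([120.0, 80.0, 1.0])      # (x_l, y_l, 1)
p_r = H @ p_l                           # s * (x_r, y_r, 1)
x_r, y_r = p_r[0] / p_r[2], p_r[1] / p_r[2]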
Traditional image registration extracts feature points with the SIFT algorithm and then estimates the projection transformation model with the RANSAC algorithm and dynamic programming; the effect after traditional image registration and fusion splicing can be displayed separately in the system, as shown in FIG. 9.
Further, the invention optimizes the selection of the optimal homography matrix as follows:
and S31, selecting a homography matrix corresponding to the relative divergence of the inner points in the candidate homography matrixes as the optimized optimal homography matrix in the parallax image registration according to the distribution condition of the inner point set.
Specifically, when the RANSAC algorithm estimates a planar scene transformation model that fits the images to be registered from the matched feature points, the distribution of the interior point set is taken into account, and the image transformation model that satisfies a sufficient number of matching point pairs while having a more divergently distributed interior point set is selected as the optimal homography matrix. This effectively ensures coarse alignment over more regions of the registered parallax images, as shown in fig. 10, where the Z axis represents the depth-of-field direction and line 1 represents the actual scene depth information. Line 3 represents the projection plane screened by the conventional RANSAC algorithm: registering according to that plane only guarantees coarse alignment of region B, while regions A and C far from B suffer severe ghosting and poor relative alignment. Line 2 represents the projection plane corresponding to the RANSAC algorithm modified in the present invention: region B is sacrificed so that regions A and C are well aligned.
Each time the RANSAC algorithm iteratively estimates a homography matrix, the number of feature points in the interior point set is used to screen out suitable candidate homography matrices; the specific code is shown in fig. 11, where the value of η can be set as required.
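A sketch of this screening under stated assumptions: several RANSAC estimates are collected, candidates whose inlier count reaches η times the best count are kept, and the candidate whose interior points are most spread out is selected. Using the standard deviation of the inlier coordinates as the divergence measure is an assumption, since the exact metric is only given in fig. 11:

import cv2
import numpy as np

def select_divergent_homography(pts_l, pts_r, trials=20, eta=0.9, reproj_thresh=3.0):
    src = np.float32(pts_l).reshape(-1, 1, 2)
    dst = np.float32(pts_r).reshape(-1, 1, 2)
    candidates = []
    for _ in range(trials):
        # Each RANSAC run may return a different candidate model.
        H, mask = cv2.findHomography(src, dst, cv2.RANSAC, reproj_thresh)
        if H is None:
            continue
        inliers = np.float32(pts_l)[mask.ravel() == 1]
        candidates.append((H, inliers))
    best_count = max(len(inl) for _, inl in candidates)
    # Keep candidates that still satisfy enough matching point pairs (eta threshold).
    kept = [(H, inl) for H, inl in candidates if len(inl) >= eta * best_count]
    # Prefer the candidate whose interior points are the most divergently distributed.
    return max(kept, key=lambda c: c[1].std(axis=0).sum())[0]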
In image projection transformation registration under parallax imaging, even if corresponding interior points all meet the projection model, the corresponding interior points are not necessarily completely located on a projection plane, but only within a certain error allowable range. Therefore, at this time, it is necessary to quasi-densify the registration error of the sparse feature points, and compensate the error of the overlapping region in the entire preprocessed multiple groups of parallax images. The method comprises the following specific steps:
and S41, performing quasi-densification on the registration errors of the sparse feature points in the interior point set by adopting an interpolation algorithm (such as a TPS function), and performing error compensation on the overlapped regions in the plurality of groups of preprocessed parallax images. After the registration error of the sparse feature points is subjected to quasi-densification by the TPS function, the image overlapping region is subjected to registration error compensation optimization alignment, and accurate alignment of all pixel points with smaller errors is realized, so that the alignment region of the image is increased, and a foundation is laid for later-stage suture line searching.
The optimal registration algorithm of the invention considers the characteristic of distribution of the inner point set, and obtains the optimal homography matrix suitable for the scene through screening. Next, after quasi-densification of the registration error by using the TPS function, error compensation is performed to optimally align the image micro-deformation, and a local alignment area of the image is increased, as shown in fig. 12.
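A sketch of the quasi-densification step under stated assumptions: the sparse per-inlier registration errors are interpolated over the whole overlap with a thin-plate-spline interpolant (SciPy's RBFInterpolator is used here as an assumed stand-in for the TPS function named in the description), and the resulting dense field can then be used to compensate every pixel:

import numpy as np
from scipy.interpolate import RBFInterpolator

def densify_registration_error(inlier_pts, errors, overlap_shape):
    # inlier_pts: (N, 2) inlier coordinates; errors: (N, 2) residual (dx, dy)
    # after projective registration; overlap_shape: (height, width) of the overlap.
    tps = RBFInterpolator(inlier_pts, errors, kernel='thin_plate_spline')
    h, w = overlap_shape
    ys, xs = np.mgrid[0:h, 0:w]
    grid = np.column_stack([xs.ravel(), ys.ravel()]).astype(np.float64)
    dense = tps(grid)                     # per-pixel (dx, dy) compensation field
    return dense.reshape(h, w, 2)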
And after the error compensation is finished, entering an image fusion stage.
During image fusion, when a moving object exists in the overlapping area, the stitched image easily shows ghosting and similar artifacts; in this case the optimal suture line algorithm is an effective remedy. If a seam exists in the difference image of the overlapping region along which the color and structural difference intensities are smallest, that seam is called the ideal suture line.
Suture line search algorithm based on dynamic programming:
The suture line search strategy must satisfy two conditions: first, the color difference value or brightness difference value of the two registered images along the suture line is the smallest; second, the geometric structures of the regions adjacent to the suture line in the two images differ as little as possible.
Dynamic programming, an optimization method for multi-stage decision problems, has been used to compute the optimal suture line search.
The optimal suture line searching algorithm mainly comprises the following steps:
(1) Starting from the first row of pixel points in the image overlapping region, each pixel point is taken as the start of an initial suture line, as shown in fig. 13; the initial suture lines are p1 to p7. In the figure, the dots represent pixel points, the solid lines are the suture line search paths, and the dotted lines correspond to the candidate pixel point paths below p4 that will be considered for insertion into the suture line.
(2) For the pixel point directly below the current pixel point and its left and right neighbours, the sums of the color difference value and the geometric structure difference value are compared, and the point with the smallest sum is appended to the current suture line as its latest pixel point, completing one expansion of the current suture line.
(3) The suture line set is updated every time a row of pixel points is added, and the operation in step (2) is repeated until the last row of the overlapping area is reached.
(4) And selecting the suture line with the minimum cost function value from all the suture line sets, wherein the corresponding suture line path is the optimal suture line.
Clearly, the optimal suture line can eventually be found with this dynamic programming method, but the entire overlapping area must be traversed every time a suture line is searched, so the efficiency is low. Moreover, since the method is based on greedy search and has no criterion for escaping a local optimum, a globally optimal solution cannot be guaranteed. On the whole, the suture line searching algorithm based on the dynamic programming strategy has shortcomings in stability and accuracy.
After global registration and error-compensated alignment of the parallax images, the color intensity or brightness of the pixel points in most aligned regions is almost identical. Of course, because of image parallax, different scene depths produce different imaging orders on the plane or directly cause occlusion; the corresponding regions then show large color differences between pixel points, and large clusters of such pixels appear directly as ghosting. These are the regions a suture line should avoid, as shown in fig. 14.
The invention uses the dynamic programming idea to find an optimal suture line, with the following steps:
(1) starting from row 1 of the overlapping region of the multiple groups of parallax images, a suture line is computed with each pixel point of the row as its starting point; the criterion value of that pixel is taken as the suture line's intensity value, and its column index as the current point of the suture line;
(2) determining the expansion direction of the suture line: the criterion values of the 3 points in the next row adjacent to the current point of each suture line and of the two points to the left and right of the current point are compared; the point with the smallest criterion value is the expansion point, and if that point already lies on the suture line, the next smallest point is selected;
(3) if the current point of the suture line lies in the last row of the overlapping image, go to step (4); otherwise return to step (2) and continue the next expansion;
(4) among all the suture lines, the one with the smallest criterion value is the optimal suture line.
Wherein the optimal suture line criterion is as follows:
E(x,y) = E_color²(x,y) + E_geometry(x,y)    (1);
in the formula: E_color(x,y) represents the image color difference intensity value; E_geometry(x,y) represents the image geometric-structure difference intensity value, and is solved as:
E_geometry(x,y) = [S_x * (I1(x,y) - I2(x,y))]² + [S_y * (I1(x,y) - I2(x,y))]²;
in the formula: S_x and S_y are the x-direction and y-direction templates of the 3 × 3 Sobel operator, respectively (the Sobel operator is a discrete differential operator used to approximate the gradient of the image gray scale; the larger the gradient, the more likely the pixel lies on an edge).
Further, referring to fig. 15, the method for finding a suture according to the present invention is optimized as follows: the step of seamlessly stitching the multiple groups of parallax images by the optimized suture line searching method specifically comprises:
S51, optimizing the suture line search method by taking the distance from the interior points to the suture line as the weight-constraint feature;
and S52, seamlessly splicing the multiple groups of parallax images by adopting an optimized suture line searching method.
Specifically, after coarse registration is completed by projective transformation of the parallax images, some feature points with larger registration error values are screened out, which shrinks the interior point set. After error compensation, the interior point set can be aligned precisely, and precise alignment of these feature points naturally means that their surrounding regions are also well aligned. Therefore, by minimizing the geometric distance from the precisely aligned feature points to the suture line, the suture line is kept as close as possible to the aligned feature points, i.e. to the aligned region, thereby achieving seamless stitching of the parallax images.
Ideally, all pixels on an ideal suture line and their neighborhoods are perfectly aligned, which means the suture line can pass through a continuous aligned region. However, the conventional suture line search algorithm cannot guarantee that continuous aligned regions exist on the basis of rigid-transformation registration, so it can only search for an optimal suture line by minimizing the color or brightness difference value and the geometric-structure difference value of the pixel points along the suture line.
After the aligned region has been enlarged by the optimized registration algorithm, the relationship between the precisely aligned interior points and the suture line is considered: the geometric distance from each interior point to the suture line is computed, and the registration error value of that interior point under rigid registration is taken as its influence weight on the pixel points of the suture line. The weight calculation function of an interior point is therefore designed as:
s_i = max(exp(-||X - X_i||² / σ²), δ);
in the formula: X_i is the coordinate of the i-th feature point in the interior point set S_inlier; σ is a scale parameter; δ is a constant with a small value between 0 and 1.
The weight constraint of the interior points on the suture line is then computed by accumulating, over the N feature points of the interior point set S_inlier, each point's weight s_i multiplied by its error value in the coarse registration.
Therefore, the suture line search algorithm combines the two major features of color difference and geometric-structure difference with the geometric distance from the precisely aligned feature points to the suture line as a weight constraint, thereby optimizing the stitching of large-parallax images. The optimized suture line finding method has the following three main characteristics:
(1) color difference: after the parallax images are registered, the sum of the color difference intensity values of the pixel points near the suture line in the two registered images is minimal;
(2) geometric-structure difference: the structure difference intensity values of the pixel points in the suture line neighborhood of the two registered images are as similar as possible, i.e. the geometric-structure difference value between the two registered images is minimal;
(3) weight constraint: the geometric distance from the precisely aligned feature points in the interior point set to the suture line points is used as the weight constraint of the interior points on the suture line, and the sum of the geometric distances from the interior point set to the suture line is minimized.
Wherein the cost function used to calculate the optimized suture line is:
E(x,y) = α·E_c(x,y) + β·E_g(x,y) + γ·E_w(x,y);
in the formula: α is the influence factor of the color difference term; β is the influence factor of the geometric-structure difference term; γ is the influence factor of the weight constraint term.
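A minimal Python sketch of the interior-point weight s_i, an assumed accumulation of the weight constraint term, and the combined cost follows; σ, δ, α, β, γ and the averaging used for the constraint term are illustrative assumptions rather than values fixed by the invention.

```python
# Hedged sketch: interior-point weight, assumed weight-constraint term, and the
# combined seam cost E = alpha*Ec + beta*Eg + gamma*Ew.
import numpy as np

def inlier_weights(pixel_xy, inlier_xy, sigma=10.0, delta=0.05):
    """s_i = max(exp(-||X - X_i||^2 / sigma^2), delta) for one seam pixel X."""
    d2 = np.sum((inlier_xy - np.asarray(pixel_xy, dtype=float)) ** 2, axis=1)
    return np.maximum(np.exp(-d2 / sigma ** 2), delta)

def weight_constraint_term(pixel_xy, inlier_xy, inlier_err):
    """Assumed form: mean over S_inlier of s_i times the coarse-registration error."""
    s = inlier_weights(pixel_xy, inlier_xy)
    return float(np.mean(s * inlier_err))

def combined_cost(Ec, Eg, Ew, alpha=1.0, beta=1.0, gamma=1.0):
    """Per-pixel maps of equal shape: E(x, y) = alpha*Ec + beta*Eg + gamma*Ew."""
    return alpha * Ec + beta * Eg + gamma * Ew
```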
The invention can present, on its own, the stitching effect of the parallax images obtained with the optimized suture line search, and at the same time provides an implementation of the optimal suture line stitching algorithm for presenting the parallax-image stitching effect. Moreover, for the same group of input parallax source images, the stitching effects of the two suture line search algorithms can be shown simultaneously, the optimal suture line path can be drawn selectively, and the stitching effects of the two algorithms can be compared and analyzed conveniently and visually.
After the image stitching is finished, the gray values of the pixels in the overlapped region in the stitched image are generally processed by adopting a gradual-in and gradual-out fusion algorithm.
The gradual-in and gradual-out fusion algorithm processes the gray values of the pixel points in the overlapping region according to:
f(x,y) = w1·f1(x,y) + w2·f2(x,y);
in the formula: f(x,y) represents the gray value of the fused image pixel point; f1(x,y) represents the gray value of the pixel point of the left image to be stitched; f2(x,y) represents the gray value of the pixel point of the right image to be stitched; w1 and w2 are the corresponding weights, with w1 + w2 = 1, 0 < w1 < 1, 0 < w2 < 1.
According to the gradual-in and gradual-out fusion algorithm, w1 and w2 are calculated as:
w1 = (x_r - x_i)/(x_r - x_l), w2 = (x_i - x_l)/(x_r - x_l);
in the formula: x_i is the abscissa of the current pixel point; x_l is the left boundary of the overlapping region; x_r is the right boundary of the overlapping region.
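The following sketch shows this standard gradual-in and gradual-out blending over the overlap columns, with the weights w1 and w2 as defined above; the column-wise overlap layout, the grayscale inputs and the function name are assumptions made for illustration.

```python
# Hedged sketch: gradual-in and gradual-out (fade) blending of the overlap
# columns [x_l, x_r) of two registered grayscale images.
import numpy as np

def fade_blend(f1, f2, x_l, x_r):
    """f1, f2: registered images (H x W, float32); columns [x_l, x_r) overlap."""
    out = f1.copy()
    xi = np.arange(x_l, x_r, dtype=np.float32)
    w1 = (x_r - xi) / (x_r - x_l)                  # weight of the left image
    w2 = 1.0 - w1                                  # weight of the right image (w1 + w2 = 1)
    out[:, x_l:x_r] = w1 * f1[:, x_l:x_r] + w2 * f2[:, x_l:x_r]
    out[:, x_r:] = f2[:, x_r:]                     # right of the overlap: right image only
    return out
```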
Further, the invention improves the gradual-in and gradual-out fusion algorithm as follows: after the step of seamlessly stitching the plurality of groups of parallax images by the optimized suture line finding method, the method further comprises:
S60, equally dividing the overlapping area in the seamlessly spliced multiple groups of parallax images into at least two parts, and respectively calculating the gray value of image pixel points in the overlapping area of the at least two parts so as to improve the gradual-in and gradual-out fusion algorithm;
and S70, processing the seamlessly spliced picture by adopting an improved gradually-in and gradually-out fusion algorithm.
Specifically, after the two images are spliced along the optimal suture line, the gray value of the pixel point in the overlapped area of the fused images is determined by utilizing an improved gradual-in and gradual-out fusion algorithm rule.
Aiming at the overlapping region of the seamlessly stitched images, in order to achieve a better fusion effect, the overlapping region is equally divided into n parts; considering algorithm time and practical effect, n = 3 is preferred, i.e. the overlapping region is divided evenly, from left to right, into three parts P1, P2 and P3. For the P1 and P3 parts, the gray value f(x,y) of the fused image pixel point is determined by comparing the absolute differences between f1(x,y) (respectively f2(x,y)) and f_mean with a set threshold H; for the P2 part, f_mean is taken directly as the gray value of the fused image pixel point.
The specific implementation process is as follows:
The threshold H is set based on empirical values.
f_mean is given by: f_mean = w1·f1(x,y) + w2·f2(x,y);
in the P1 part, the gray value f(x,y) of the fused image pixel point is determined by comparing |f1(x,y) - f_mean| with the threshold H;
in the P2 part, the gray value of the fused image pixel point is f(x,y) = f_mean;
in the P3 part, the gray value f(x,y) of the fused image pixel point is determined by comparing |f2(x,y) - f_mean| with the threshold H.
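A sketch of this improved fusion over an overlap split into P1, P2 and P3 follows; because the empirical threshold H and the exact piecewise rule in P1 and P3 are not reproduced above, the comparison rule used here (keep the source pixel where it disagrees strongly with f_mean, take f_mean otherwise) and the value of H are assumptions made for illustration.

```python
# Hedged sketch: improved gradual-in and gradual-out fusion with the overlap
# split into three equal parts P1 | P2 | P3. The threshold H and the P1/P3
# decision rule are assumptions.
import numpy as np

def improved_fade_blend(f1, f2, x_l, x_r, H=20.0):
    """f1, f2: registered grayscale images (H x W, float32); [x_l, x_r) overlap."""
    out = f1.astype(np.float32).copy()
    n = x_r - x_l
    xi = np.arange(x_l, x_r, dtype=np.float32)
    w1 = (x_r - xi) / (x_r - x_l)
    f_mean = w1 * f1[:, x_l:x_r] + (1.0 - w1) * f2[:, x_l:x_r]

    b1, b2 = x_l + n // 3, x_l + 2 * n // 3        # boundaries of P1 | P2 | P3
    blend = f_mean.copy()
    # P1: keep f1 where it differs strongly from f_mean (assumed rule)
    p1 = slice(0, b1 - x_l)
    m1 = f_mean[:, p1]
    blend[:, p1] = np.where(np.abs(f1[:, x_l:b1] - m1) > H, f1[:, x_l:b1], m1)
    # P3: keep f2 where it differs strongly from f_mean (assumed rule)
    p3 = slice(b2 - x_l, n)
    m3 = f_mean[:, p3]
    blend[:, p3] = np.where(np.abs(f2[:, b2:x_r] - m3) > H, f2[:, b2:x_r], m3)
    # P2: f_mean is used directly (already the case)

    out[:, x_l:x_r] = blend
    out[:, x_r:] = f2[:, x_r:]
    return out
```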
finally, the invention proposes a stitching algorithm combining an optimal suture with an improved fade-in fade-out fusion algorithm:
in order to solve the problems of splicing seams, double images and the like caused by obvious interference or exposure difference of moving objects, a splicing algorithm combining an optimal suture line and an improved gradually-in and gradually-out fusion algorithm is provided. Therefore, moving objects are effectively avoided, the gray level transition on the two sides of the image is uniform, double images and splicing seams are avoided, and the quality of the fused image is improved to a certain extent.
The algorithm comprises the following steps (a combined sketch follows the list):
(1) firstly, correcting the image to be spliced to avoid the interference of image distortion;
(2) searching for the optimal suture line in the overlapping region of the images to be stitched according to the dynamic programming idea;
(3) splicing the two images to be spliced along the optimal suture line;
(4) and finally, fusing by adopting the improved gradually-in and gradually-out fusion method.
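Assuming the sketches above (seam_criterion, find_best_seam and improved_fade_blend), the four steps can be combined roughly as follows; the fixed overlap bounds, the grayscale inputs and the choice to apply the improved fusion over the whole overlap rather than only a band around the suture line are simplifications made for illustration.

```python
# Hedged sketch: the four steps of the combined algorithm, reusing the earlier
# sketches for the seam search and the improved fusion.
import numpy as np
import cv2

def stitch_pair(g1, g2, x_l, x_r, camera_matrix=None, dist_coeffs=None):
    """g1, g2: registered grayscale images (float32, same size);
    columns [x_l, x_r) are the overlap."""
    # (1) distortion correction, if calibration data is available
    if camera_matrix is not None and dist_coeffs is not None:
        g1 = cv2.undistort(g1, camera_matrix, dist_coeffs)
        g2 = cv2.undistort(g2, camera_matrix, dist_coeffs)

    # (2) optimal suture line in the overlap region (dynamic programming idea)
    E = seam_criterion(g1[:, x_l:x_r], g2[:, x_l:x_r])
    seam = find_best_seam(E)                  # seam column (overlap-relative) per row

    # (3) stitch along the suture line: left of the seam from g1, right from g2
    stitched = g1.copy()
    for row, col in enumerate(seam):
        stitched[row, x_l + col:] = g2[row, x_l + col:]

    # (4) improved gradual-in and gradual-out fusion applied over the overlap
    stitched[:, x_l:x_r] = improved_fade_blend(g1, g2, x_l, x_r)[:, x_l:x_r]
    return stitched
```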
Further, referring to fig. 16, the present invention also provides a multi-view machine vision image stitching apparatus, including:
The preprocessing module 100 is configured to acquire multiple groups of parallax images and preprocess the multiple groups of parallax images;
the feature point detection and matching module 200 is configured to perform feature point detection and matching on the preprocessed multiple groups of parallax images to obtain a plurality of matched feature points, and to screen the matched feature points to obtain a plurality of candidate homography matrices;
the selecting module 300 is configured to select, from the candidate homography matrices, the homography matrix whose corresponding interior points are relatively divergent as the optimized optimal homography matrix in parallax image registration, the interior points being formed by the matched feature points;
the error compensation module 400 is configured to perform error compensation on the overlapping region in the preprocessed multiple groups of parallax images after quasi-densification of the registration errors of the sparse feature points;
the gray value calculation module 500 is configured to equally divide the overlapping region in the seamlessly stitched multiple groups of parallax images into at least two parts and to calculate the gray values of the image pixel points in each part, so as to improve the gradual-in and gradual-out fusion algorithm;
and the image fusion module 600 is configured to process the seamlessly stitched image with the improved gradual-in and gradual-out fusion algorithm.
In addition, in order to achieve the above object, the present invention further provides a mobile terminal, which includes a memory, a processor, and a multi-view machine vision image stitching program stored in the memory and executable on the processor; when the processor executes the multi-view machine vision image stitching program, the steps of the multi-view machine vision image stitching method described above are implemented.
In addition, to achieve the above object, the present invention further provides a computer-readable storage medium storing a multi-view machine vision image stitching program; when executed by a processor, the multi-view machine vision image stitching program implements the steps of the multi-view machine vision image stitching method described above.
Compared with the prior art, the multi-view machine vision image stitching method, mobile terminal and storage medium provided by the invention comprise the following steps: acquiring multiple groups of parallax images and preprocessing the multiple groups of parallax images; performing feature point detection and matching on the preprocessed multiple groups of parallax images to obtain a plurality of matched feature points, and screening the matched feature points to obtain a plurality of candidate homography matrices; selecting the homography matrix whose corresponding interior points are relatively divergent from the candidate homography matrices as the optimized optimal homography matrix in parallax image registration; after quasi-densification of the registration errors of the sparse feature points, performing error compensation on the overlapping region in the preprocessed multiple groups of parallax images; and seamlessly stitching the multiple groups of parallax images with the optimized suture line finding method. By selecting the optimal homography matrix through optimization and seamlessly stitching the multiple groups of parallax images with the optimized suture line finding method, the problem of poor image stitching quality is solved.
It should be understood that equivalents and modifications of the technical solution and inventive concept thereof may occur to those skilled in the art, and all such modifications and alterations should fall within the scope of the appended claims.
Claims (10)
1. A multi-view machine vision image splicing method is characterized by comprising the following steps:
acquiring a plurality of groups of parallax images, and preprocessing the plurality of groups of parallax images;
carrying out feature point detection and matching on the preprocessed multiple groups of parallax images to obtain a plurality of matched feature points, and screening the matched feature points to obtain a plurality of candidate homography matrixes;
selecting a homography matrix with relatively divergent corresponding interior points in the candidate homography matrices as an optimized optimal homography matrix in parallax image registration, wherein the interior points are formed by the matched feature points;
after quasi-densification is carried out on the registration error of the sparse feature points, error compensation is carried out on the overlapping area in the plurality of groups of preprocessed parallax images;
the optimized suture line searching method seamlessly splices the multiple groups of parallax images.
2. The method for stitching multi-view machine vision images according to claim 1, wherein the step of performing feature point detection and matching on the plurality of groups of preprocessed parallax images to obtain a plurality of matched feature points, and the step of screening the plurality of matched feature points to obtain a plurality of candidate homography matrices specifically comprises:
detecting and matching the feature points in the plurality of groups of preprocessed parallax images by adopting an SIFT algorithm to obtain a plurality of matched feature points;
and screening the plurality of matched feature points by using a RANSAC algorithm to obtain a plurality of candidate homography matrixes.
3. The method for stitching multi-view machine vision images according to claim 1, wherein the step of selecting the homography matrix with relatively divergent corresponding interior points from the plurality of candidate homography matrices as the optimized optimal homography matrix in the parallax image registration specifically comprises:
and selecting a homography matrix corresponding to the relative divergence of the inner points in the candidate homography matrixes as an optimized optimal homography matrix in the parallax image registration according to the distribution condition of the inner point set.
4. The method for stitching multi-view machine vision images according to claim 1, wherein after quasi-densifying the registration error of the sparse feature points, the step of performing error compensation on the overlapped regions in the entire preprocessed multi-group parallax images specifically comprises:
and performing quasi-densification on the registration error of the sparse feature points in the interior point set by adopting an interpolation algorithm, and performing error compensation on the overlapped area in the plurality of groups of preprocessed parallax images.
5. The method for stitching multi-view machine vision images according to claim 1, wherein the step of seamlessly stitching the plurality of groups of parallax images by the optimized suture line finding method specifically comprises:
optimizing a suture searching method by taking the distance between the inner point and the suture as the characteristic of weight constraint;
and seamlessly splicing the multiple groups of parallax images by adopting an optimized suture line searching method.
6. The multi-view machine vision image splicing method according to claim 1, wherein after the step of seamlessly stitching the plurality of groups of parallax images by the optimized suture line finding method, the method further comprises:
equally dividing the overlapping area in the seamlessly spliced multiple groups of parallax images into at least two parts, and respectively calculating the gray value of image pixel points in the overlapping area of the at least two parts so as to improve a gradual-in and gradual-out fusion algorithm;
and processing the seamlessly spliced pictures by adopting an improved gradually-in and gradually-out fusion algorithm.
7. The method for stitching multi-view machine vision images according to claim 1, wherein the step of acquiring a plurality of sets of parallax images and preprocessing the plurality of sets of parallax images specifically comprises:
acquiring a plurality of groups of parallax images by adopting at least two binocular vision cameras;
and denoising, distortion correction and image brightness unified processing are carried out on the multiple groups of parallax images to obtain the preprocessed multiple groups of parallax images.
8. The method for stitching multi-view machine vision images according to claim 2, wherein the step of detecting and matching the feature points in the plurality of groups of preprocessed parallax images by using an SIFT algorithm to obtain a plurality of matched feature points specifically comprises:
detecting extreme points of a scale space, wherein the scale space is obtained by multi-scale feature simulation of the feature points;
positioning the position and the scale of the key point;
determining the direction of the key point;
generating a feature point descriptor according to the key points;
and searching two correctly matched feature points according to the similarity between the feature point descriptors in different images to obtain a plurality of matched feature points.
9. A multi-view machine vision image stitching apparatus, comprising:
the preprocessing module is used for acquiring a plurality of groups of parallax images and preprocessing the plurality of groups of parallax images;
the feature point detection and matching module is used for carrying out feature point detection and matching on the preprocessed multiple groups of parallax images to obtain a plurality of matched feature points, and screening the plurality of matched feature points to obtain a plurality of candidate homography matrixes;
a selecting module, configured to select a homography matrix in which corresponding interior points in the multiple candidate homography matrices are relatively divergent, as an optimized optimal homography matrix in parallax image registration, where the interior points are formed by the multiple matched feature points;
the error compensation module is used for performing error compensation on an overlapping region in the plurality of groups of preprocessed parallax images after quasi-densification is performed on the registration errors of the sparse feature points;
the gray value calculation module is used for equally dividing the overlapping area in the seamlessly spliced multiple groups of parallax images into at least two parts, and respectively calculating the gray value of the image pixel points in the overlapping area of the at least two parts so as to improve a gradual-in and gradual-out fusion algorithm;
and the image fusion module is used for processing the seamlessly spliced images by adopting an improved gradually-in and gradually-out fusion algorithm.
10. A computer-readable storage medium, having stored thereon a multi-view machine vision image stitching program which, when executed by a processor, performs the steps of the multi-view machine vision image stitching method according to any one of claims 1 to 8.