CN108460724B - Adaptive image fusion method and system based on Mahalanobis distance discrimination
- Publication number: CN108460724B
- Application number: CN201810114537.9A
- Authority: CN (China)
- Legal status: Active
Classifications
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T2207/20221—Image fusion; Image merging
Abstract
The invention relates to an adaptive image fusion method and system based on Mahalanobis distance discrimination. During fusion the target image is divided into three parts: the leftmost part is taken entirely from img1, the rightmost part is taken entirely from the image obtained by transforming img2, and the overlapping region is filled by evaluating and selecting among the candidate pixels of each camera, so that a suitable set of lens imaging points is automatically selected and planned. The fused image is therefore more realistic, its details are more prominent, and system detection errors caused by detail loss are greatly reduced. Introducing Mahalanobis distance discriminant analysis into image fusion technology effectively overcomes noise interference among pixels at the overlap-region coordinates; the image formed from the evaluated pixel points is smoother overall, the real scene is restored with high fidelity, and the accuracy of image fusion is greatly improved.
Description
Technical Field
The invention belongs to the field of image processing, and particularly relates to a self-adaptive image fusion method and system based on Mahalanobis distance discrimination.
Background
Machine vision has become a hot topic of industrial research. Introducing vision into industrial production improves tasks such as workpiece identification, detection, packaging, and transport. When acquiring workpiece visual information, however, some production tasks require a field of view larger than a single camera can cover well, so a multi-camera imaging system with a parallel optical-axis structure (as shown in fig. 1) is usually adopted, and the required image data is finally obtained with an image stitching algorithm.
The core of an image stitching algorithm is image registration and image fusion. Image registration has matured after years of research by many researchers, but in the image fusion stage the complex industrial environment (illumination, temperature, and the like) and the assembly errors of the imaging system cause the cameras' imaging results to differ. Traditional image synthesis methods perform poorly under these conditions, and small detail errors can cause major errors in tasks such as workpiece detection. It is therefore critical to evaluate and select among the pixels of the overlapping region of a multi-camera imaging system.
Existing image fusion methods mainly fall into two types. The simple method transforms the right image img2 in fig. 2 into a new image through a transformation matrix H and then pastes the left image img1 directly into it, which leaves a very obvious stitching seam. The other method also transforms img2 through H but divides the target image into three parts during fusion: the leftmost part is taken entirely from img1, the rightmost part is taken entirely from the transformed img2, and the middle overlapping part is a weighted average of the two images, which loses detail in the overlapping region and can still leave seams (a minimal sketch of this weighted-average scheme is given below). Neither method is therefore suitable for workpiece inspection.
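For concreteness, here is a minimal sketch of the conventional three-part weighted-average fusion described above. It assumes grayscale images already placed in a common composite frame, and all names (feather_blend, overlap_x0, overlap_x1) are illustrative rather than taken from any cited method:

```python
import numpy as np

def feather_blend(img1, img2_warped, overlap_x0, overlap_x1):
    """Conventional three-part fusion: left from img1, right from the
    transformed img2, middle a weighted average (which blurs fine detail)."""
    h, w = img1.shape
    out = np.zeros((h, w), dtype=np.float64)
    out[:, :overlap_x0] = img1[:, :overlap_x0]           # left part: img1 only
    out[:, overlap_x1:] = img2_warped[:, overlap_x1:]    # right part: warped img2 only
    # middle part: linearly varying weights across the overlap columns
    alpha = np.linspace(1.0, 0.0, overlap_x1 - overlap_x0)
    out[:, overlap_x0:overlap_x1] = (
        alpha * img1[:, overlap_x0:overlap_x1]
        + (1.0 - alpha) * img2_warped[:, overlap_x0:overlap_x1]
    )
    return out.astype(img1.dtype)
```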
Disclosure of Invention
The invention evaluates and selects among the overlapping-region pixels of the cameras and automatically selects and plans a suitable set of lens imaging points, so that the fused image is more realistic, its details are more prominent, and system detection errors caused by detail loss are greatly reduced.
In order to achieve the above object, the technical solution provided by the present invention is an adaptive image fusion method based on Mahalanobis distance discrimination, comprising the following steps:

Step 1, determining the left and right parts of the composite image: for the overlapping region of the two registered images img1 and img2, calculate the transformation matrix H from the matched feature points, construct the mapping relation of the two images in the overlapping region, and transform img2 into a new image through H; establish the frame coordinates of the composite image based on the coordinate position of img1, define the pixel information Kl of img1 excluding the overlapping area as the left part of the composite image and the pixel information Kr of the transformed img2 excluding the overlapping area as the right part, and map Kr and Kl one by one into the frame coordinates of the composite image;

Step 2, determining the pixels of the overlapping area of the composite image: define the set of pixel information of img1 in the overlap region as C1 and the set of pixel information of the transformed img2 in the overlap region as C2, so that the pixel information set of the overlapping area of the whole image is Km = {C1, C2}; any point in the overlap-region coordinates corresponds to two candidate pixel points, namely Ki = {C1i, C2i}, i = 1, 2, ..., n, where n is the number of pixel positions to be selected in the overlapping area; for a point i to be filled, C1i denotes the pixel information of img1 at this coordinate position and C2i the pixel information of the transformed img2 at this position;
Step 3, obtaining the pixel mean μ of C1 and C2 and calculating the pixel mean variance σ² of the image overlapping region:
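The formula images for step 3 are not reproduced in this text. A plausible reconstruction from the surrounding definitions, under the assumption that the statistics pool the overlap pixels of both images, is:

```latex
\mu = \frac{1}{2n}\sum_{i=1}^{n}\left(C_{1i}+C_{2i}\right),
\qquad
\sigma^{2} = \frac{1}{2n}\sum_{i=1}^{n}\left[\left(C_{1i}-\mu\right)^{2}+\left(C_{2i}-\mu\right)^{2}\right]
```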
Step 4, respectively obtaining the Mahalanobis distances d1i and d2i of C1i and C2i within Ki to the overlap-region pixel mean variance, i.e., the similarity differences, for i = 1, 2, 3, ..., n:
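The distance formulas are likewise reconstructed here rather than taken from the original drawings: for vector-valued (e.g., RGB) pixels the Mahalanobis distance would use the covariance matrix Σ of the overlap pixels, and for scalar grayscale pixels it reduces to a normalized absolute deviation:

```latex
d_{1i} = \sqrt{\left(C_{1i}-\mu\right)^{\top}\Sigma^{-1}\left(C_{1i}-\mu\right)},
\qquad
d_{2i} = \sqrt{\left(C_{2i}-\mu\right)^{\top}\Sigma^{-1}\left(C_{2i}-\mu\right)},
\qquad i = 1, 2, \dots, n
```

In the scalar case this becomes d1i = |C1i − μ|/σ and d2i = |C2i − μ|/σ.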
Step 5, comparing the similarity differences: (1) if d1i < d2i, it is determined that within the overlapping coordinate point Ki, C1i is superior to C2i, i.e., the pixel point of img1 is automatically selected during image fusion;
(2) if d1i > d2i, it is determined that within Ki, C2i is superior to C1i, i.e., the pixel point of the transformed img2 is automatically selected during image fusion;
(3) if d1i = d2i, one of the two pixel points at Ki is selected at random for image fusion (a sketch of this selection rule follows);
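A minimal sketch of steps 3 through 5 on grayscale pixels, under the pooled-statistics assumption above (the function name select_overlap_pixels and the use of NumPy are illustrative, not from the patent):

```python
import numpy as np

def select_overlap_pixels(C1, C2, rng=None):
    """For each overlap position i, keep the candidate pixel whose
    Mahalanobis distance to the overlap-region statistics is smaller;
    ties are broken at random, as in case (3)."""
    rng = np.random.default_rng() if rng is None else rng
    C1 = np.asarray(C1, dtype=np.float64)
    C2 = np.asarray(C2, dtype=np.float64)
    pooled = np.concatenate([C1, C2])
    mu, sigma = pooled.mean(), pooled.std() + 1e-12  # step 3 (epsilon avoids /0)
    d1 = np.abs(C1 - mu) / sigma                     # step 4: similarity differences
    d2 = np.abs(C2 - mu) / sigma
    out = np.where(d1 < d2, C1, C2)                  # step 5, cases (1) and (2)
    ties = d1 == d2                                  # case (3): random choice
    out[ties] = np.where(rng.random(ties.sum()) < 0.5, C1[ties], C2[ties])
    return out
```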
Step 6, integrating all selected pixel points into the set Km* and mapping them one by one into the corresponding overlap-region coordinate system, which is determined as the overlapping area of the composite image.
Further, the image registration in step 1 is realized by the SIFT algorithm.
Further, the transformation matrix H in step 1 is calculated as follows:
Let T be any point in the overlap region, imaged as T1 in img1 and as T2 in img2. From the mapping relationship of the images, H1 × T = T1 and H2 × T = T2, so T1 = (H1 × H2^-1) × T2; the matrix H represents the mapping relation of the same feature points between img1 and img2 and is derived as H = H1 × H2^-1.
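In practice H is usually estimated directly from the matched feature points rather than from explicit per-camera matrices H1 and H2. A hedged sketch using standard OpenCV calls (the 0.75 ratio-test threshold and the RANSAC reprojection threshold of 5.0 are conventional defaults, not values from the patent):

```python
import cv2
import numpy as np

def estimate_H(img1, img2):
    """Estimate the 3x3 homography H mapping img2 into img1's frame,
    playing the role of H = H1 * H2^-1 in the derivation above."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    # Lowe's ratio test on 2-nearest-neighbour descriptor matches
    matches = cv2.BFMatcher().knnMatch(des2, des1, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    src = np.float32([kp2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H  # img2 can then be warped with cv2.warpPerspective
```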
The invention also provides a self-adaptive image fusion system based on Mahalanobis distance discrimination, which comprises the following modules:
the left and right part acquisition module of the composite image, used for calculating, for the overlapping area of the two registered images img1 and img2, the transformation matrix H from the matched feature points, constructing the mapping relation of the two images in the overlapping region, and transforming img2 into a new image through H; establishing the frame coordinates of the composite image based on the coordinate position of img1, defining the pixel information Kl of img1 excluding the overlapping area as the left part of the composite image and the pixel information Kr of the transformed img2 excluding the overlapping area as the right part, and mapping Kr and Kl one by one into the frame coordinates of the composite image;
a composite image overlap region pixel acquisition module, used for defining the set of pixel information of img1 in the overlap region as C1 and the set of pixel information of the transformed img2 in the overlap region as C2, so that the pixel information set of the overlapping area of the whole image is Km = {C1, C2}; any point in the overlap-region coordinates corresponds to two candidate pixel points, namely Ki = {C1i, C2i}, i = 1, 2, ..., n, where n is the number of pixel positions to be selected in the overlapping area; for a point i to be filled, C1i denotes the pixel information of img1 at this coordinate position and C2i the pixel information of the transformed img2 at this position;
an overlap region pixel mean variance calculation module, used for obtaining the pixel mean μ of C1 and C2 and calculating the pixel mean variance σ² of the image overlapping region;
a similarity difference calculation module, used for respectively calculating the Mahalanobis distances d1i and d2i of C1i and C2i within Ki to the overlap-region pixel mean variance, i.e., the similarity differences, for i = 1, 2, 3, ..., n;
a similarity difference comparison module, used for comparing the magnitudes of the similarity differences d1i and d2i within Ki;
(1) if d1i < d2i, it is determined that within the overlapping coordinate point Ki, C1i is superior to C2i, i.e., the pixel point of img1 is automatically selected during image fusion;
(2) if d1i > d2i, it is determined that within Ki, C2i is superior to C1i, i.e., the pixel point of the transformed img2 is automatically selected during image fusion;
(3) if d1i = d2i, one of the two pixel points at Ki is selected at random for image fusion;
a composite image overlap region determining module, used for integrating all selected pixel points into the set Km* and mapping them one by one into the corresponding overlap-region coordinate system, which is determined as the overlapping area of the composite image.
Further, the image registration in the left and right part acquisition module of the composite image is realized by the SIFT algorithm.
Further, the calculation method of the transformation matrix H in the left and right part acquisition modules of the composite image is as follows:
Let T be any point in the overlap region, imaged as T1 in img1 and as T2 in img2. From the mapping relationship of the images, H1 × T = T1 and H2 × T = T2, so T1 = (H1 × H2^-1) × T2; the matrix H represents the mapping relation of the same feature points between img1 and img2 and is derived as H = H1 × H2^-1.
Compared with the prior art, the invention has the following advantages and beneficial effects: the Mahalanobis distance discriminant analysis method is introduced into image fusion technology, which effectively overcomes noise interference among pixels at the overlap-region coordinates and automatically selects and plans suitable camera imaging point sets; the image formed from the evaluated pixel points is smoother overall, the real scene is restored with high fidelity, and the accuracy of image fusion is greatly improved.
Drawings
Fig. 1 is a multi-camera imaging system with a parallel optical axis configuration.
Fig. 2 is a schematic view of an imaging region in image fusion.
Fig. 3 is a schematic diagram of a pixel point corresponding to a region coordinate in the embodiment of the present invention.
FIG. 4 is a diagram illustrating a distribution of pixel information of an image synthesized according to an embodiment of the present invention.
FIG. 5 is a flow chart of an embodiment of the present invention.
Detailed Description
The technical solution of the present invention is further explained with reference to the drawings and the embodiments.
Step 1, determining the left and right parts of the composite image: the two images img1 and img2 are registered by the SIFT algorithm [1], and the transformation matrix H is calculated as follows. Let T be any point in the overlap region, imaged as T1 in img1 and as T2 in img2; from the mapping of the images, H1 × T = T1 and H2 × T = T2, so T1 = (H1 × H2^-1) × T2; the matrix H represents the mapping relation of the same feature points of img1 and img2 and is derived as H = H1 × H2^-1; H is a 3 × 3 matrix.
[1] Wenweast, Zhang. Research on panoramic image stitching technology based on SIFT algorithm [J]. Computer Systems & Applications, 2017, 26(07): 227-231.
Step 2, determining pixels of the overlapping area of the composite image: defining the set of pixel information of img1 in the overlap region as C1Img2 transformed overlapping area imageThe element information set is C2Set of pixel information K for the overlapping area of the entire imagem={C1,C2Any point in the image overlapping area coordinates contains two pixel points, namelyn represents the number of pixel points needing to be selected in the overlapping area, the coordinate position of the overlapping area is known, namely the coordinate information of the pixel points needing to be filled in the overlapping area is known, the img1 and the img2 have corresponding pixel information at each position needing to be filled, and a certain point needing to be filled is assumed to be i, C1iIt represents the pixel information corresponding to this point coordinate position in img1, similarly to C2iThe pixel information corresponding to this point coordinate position in img2 is represented; as shown in fig. 3;
Step 3, obtaining the pixel mean μ of C1 and C2 and calculating the pixel mean variance σ² of the image overlapping region;
Step 4, respectively obtaining the Mahalanobis distances d1i and d2i of C1i and C2i within Ki to the overlap-region pixel mean variance (i.e., the similarity differences), for i = 1, 2, 3, ..., n;
Step 5, comparing the similarity differences: (1) if d1i < d2i, it is determined that within the overlapping coordinate point Ki, C1i is superior to C2i, i.e., the pixel point of img1 is automatically selected during image fusion.
(2) If d1i > d2i, it is determined that within Ki, C2i is superior to C1i, i.e., the pixel point of the transformed img2 is automatically selected during image fusion.
(3) If d1i = d2i, one of the two pixel points at Ki is selected at random for image fusion.
Step 6, integrating all selected pixel points into the set Km* and mapping them one by one into the corresponding overlap-region coordinate system, which is determined as the overlapping area of the composite image. The pixel points in Kr, Km*, and Kl are mapped one by one into the coordinate frame of the image to be synthesized (see the end-to-end sketch below).
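Putting the embodiment together, a minimal end-to-end sketch under the same assumptions (img1_canvas and img2_canvas are grayscale images already mapped into the composite frame, with zeros where a camera sees nothing; overlap_mask marks the overlap region; all names are illustrative):

```python
import numpy as np

def fuse(img1_canvas, img2_canvas, overlap_mask, rng=None):
    """Assemble Kl from img1, Kr from the warped img2, and Km* chosen per
    overlap pixel by Mahalanobis-distance discrimination."""
    rng = np.random.default_rng() if rng is None else rng
    out = img1_canvas.astype(np.float64)                 # Kl (overlap replaced below)
    right = ~overlap_mask & (img2_canvas > 0)
    out[right] = img2_canvas[right]                      # Kr

    C1 = img1_canvas[overlap_mask].astype(np.float64)   # candidates from img1
    C2 = img2_canvas[overlap_mask].astype(np.float64)   # candidates from warped img2
    pooled = np.concatenate([C1, C2])
    mu, sigma = pooled.mean(), pooled.std() + 1e-12
    d1, d2 = np.abs(C1 - mu) / sigma, np.abs(C2 - mu) / sigma
    km = np.where(d1 < d2, C1, C2)                       # Km*: preferred pixels
    ties = d1 == d2
    km[ties] = np.where(rng.random(ties.sum()) < 0.5, C1[ties], C2[ties])
    out[overlap_mask] = km                               # map Km* into the frame
    return out.astype(img1_canvas.dtype)
```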
The invention also provides a self-adaptive image fusion system based on Mahalanobis distance discrimination, which comprises the following modules:
the left and right part acquisition module of the composite image, used for calculating, for the overlapping area of the two registered images img1 and img2, the transformation matrix H from the matched feature points, constructing the mapping relation of the two images in the overlapping region, and transforming img2 into a new image through H; establishing the frame coordinates of the composite image based on the coordinate position of img1, defining the pixel information Kl of img1 excluding the overlapping area as the left part of the composite image and the pixel information Kr of the transformed img2 excluding the overlapping area as the right part, and mapping Kr and Kl one by one into the frame coordinates of the composite image;
a composite image overlap region pixel acquisition module, used for defining the set of pixel information of img1 in the overlap region as C1 and the set of pixel information of the transformed img2 in the overlap region as C2, so that the pixel information set of the overlapping area of the whole image is Km = {C1, C2}; any point in the overlap-region coordinates corresponds to two candidate pixel points, namely Ki = {C1i, C2i}, i = 1, 2, ..., n, where n is the number of pixel positions to be selected in the overlapping area; for a point i to be filled, C1i denotes the pixel information of img1 at this coordinate position and C2i the pixel information of the transformed img2 at this position;
an overlap region pixel mean variance calculation module, used for obtaining the pixel mean μ of C1 and C2 and calculating the pixel mean variance σ² of the image overlapping region;
a similarity difference calculation module, used for respectively calculating the Mahalanobis distances d1i and d2i of C1i and C2i within Ki to the overlap-region pixel mean variance, i.e., the similarity differences, for i = 1, 2, 3, ..., n;
a similarity difference comparison module, used for comparing the magnitudes of the similarity differences d1i and d2i within Ki;
(1) if d1i < d2i, it is determined that within the overlapping coordinate point Ki, C1i is superior to C2i, i.e., the pixel point of img1 is automatically selected during image fusion;
(2) if d1i > d2i, it is determined that within Ki, C2i is superior to C1i, i.e., the pixel point of the transformed img2 is automatically selected during image fusion;
(3) if d1i = d2i, one of the two pixel points at Ki is selected at random for image fusion;
a composite image overlap region determining module, used for integrating all selected pixel points into the set Km* and mapping them one by one into the corresponding overlap-region coordinate system, which is determined as the overlapping area of the composite image.
The image registration in the left and right part acquisition module of the composite image is realized by the SIFT algorithm; the transformation matrix H in this module is calculated as follows:
Let T be any point in the overlap region, imaged as T1 in img1 and as T2 in img2. From the mapping relationship of the images, H1 × T = T1 and H2 × T = T2, so T1 = (H1 × H2^-1) × T2; the matrix H represents the mapping relation of the same feature points between img1 and img2 and is derived as H = H1 × H2^-1; H is a 3 × 3 matrix.
The specific embodiments described herein are merely illustrative of the spirit of the invention. Various modifications, additions, or substitutions may be made to the described embodiments by those skilled in the art without departing from the spirit or scope of the invention as defined in the appended claims.
Claims (6)
1. An adaptive image fusion method based on Mahalanobis distance discrimination, characterized by comprising the following steps:
Step 1, determining the left and right parts of the composite image: for the overlapping region of the two registered images img1 and img2, a transformation matrix H is calculated from the matched feature points, the mapping relation of the two images in the overlapping region is constructed, and img2 is transformed into a new image through H; the frame coordinates of the composite image are established based on the coordinate position of img1, the pixel information Kl of img1 excluding the overlapping area is defined as the left part of the composite image and the pixel information Kr of the transformed img2 excluding the overlapping area as the right part, and Kr and Kl are mapped one by one into the frame coordinates of the composite image;
Step 2, determining the pixels of the overlapping area of the composite image: define the set of pixel information of img1 in the overlap region as C1 and the set of pixel information of the transformed img2 in the overlap region as C2, so that the pixel information set of the overlapping area of the whole image is Km = {C1, C2}; any point in the overlap-region coordinates corresponds to two candidate pixel points, namely Ki = {C1i, C2i}, i = 1, 2, ..., n, where n is the number of pixel positions to be selected in the overlapping area; for a point i to be filled, C1i denotes the pixel information of img1 at this coordinate position and C2i the pixel information of the transformed img2 at this position;
Step 3, obtaining the pixel mean μ of C1 and C2 and calculating the pixel mean variance σ² of the image overlapping region;
Step 4, respectively obtaining the Mahalanobis distances d1i and d2i of C1i and C2i within Ki to the overlap-region pixel mean variance, i.e., the similarity differences, for i = 1, 2, 3, ..., n;
(1) if d1i < d2i, it is determined that within the overlapping coordinate point Ki, C1i is superior to C2i, i.e., the pixel point of img1 is automatically selected during image fusion;
(2) if d1i > d2i, it is determined that within Ki, C2i is superior to C1i, i.e., the pixel point of the transformed img2 is automatically selected during image fusion;
(3) if d1i = d2i, one of the two pixel points at Ki is selected at random for image fusion; and all selected pixel points are integrated into the set Km* and mapped one by one into the corresponding overlap-region coordinate system, which is determined as the overlapping area of the composite image.
2. The adaptive image fusion method based on Mahalanobis distance discrimination as claimed in claim 1, wherein: the image registration in step 1 is realized by the SIFT algorithm.
3. The adaptive image fusion method based on Mahalanobis distance discrimination as claimed in claim 1, wherein: the transformation matrix H in step 1 is calculated as follows:
let T be any point in the overlap region, imaged as T1 in img1 and as T2 in img2; from the mapping relationship of the images, H1 × T = T1 and H2 × T = T2, so T1 = (H1 × H2^-1) × T2; the matrix H represents the mapping relation of the same feature points between img1 and img2 and is derived as H = H1 × H2^-1.
4. An adaptive image fusion system based on Mahalanobis distance discrimination, characterized by comprising the following modules:
the left and right part acquisition module of the composite image, used for calculating, for the overlapping area of the two registered images img1 and img2, the transformation matrix H from the matched feature points, constructing the mapping relation of the two images in the overlapping region, and transforming img2 into a new image through H; establishing the frame coordinates of the composite image based on the coordinate position of img1, defining the pixel information Kl of img1 excluding the overlapping area as the left part of the composite image and the pixel information Kr of the transformed img2 excluding the overlapping area as the right part, and mapping Kr and Kl one by one into the frame coordinates of the composite image;
composite image overlap region pixel acquisitionA fetching module for defining a set of pixel information of img1 in the overlap region as C1The pixel information set of the overlapped region after img2 transformation is C2Set of pixel information K for the overlapping area of the entire imagem={C1,C2Any point in the image overlapping area coordinates contains two pixel points, namelyWherein n represents the number of pixel points needing to be selected in the overlapping area, and a certain point needing to be filled is set as i, C1iIndicating the pixel information, C, in img1 corresponding to this point coordinate position2iPixel information representing the coordinate position of this point in img 2;
an overlap region pixel mean variance calculation module, used for obtaining the pixel mean μ of C1 and C2 and calculating the pixel mean variance σ² of the image overlapping region;
a similarity difference calculation module, used for respectively calculating the Mahalanobis distances d1i and d2i of C1i and C2i within Ki to the overlap-region pixel mean variance, i.e., the similarity differences, for i = 1, 2, 3, ..., n;
a similarity difference comparison module, used for comparing the magnitudes of the similarity differences d1i and d2i within Ki;
(1) if d1i < d2i, it is determined that within the overlapping coordinate point Ki, C1i is superior to C2i, i.e., the pixel point of img1 is automatically selected during image fusion;
(2) if d1i > d2i, it is determined that within Ki, C2i is superior to C1i, i.e., the pixel point of the transformed img2 is automatically selected during image fusion;
(3) if d1i = d2i, one of the two pixel points at Ki is selected at random for image fusion; and
a composite image overlap region determining module, used for integrating all selected pixel points into the set Km* and mapping them one by one into the corresponding overlap-region coordinate system, which is determined as the overlapping area of the composite image.
5. The Mahalanobis distance discrimination-based adaptive image fusion system of claim 4, wherein: the image registration in the left and right part acquisition module of the composite image is realized by the SIFT algorithm.
6. The Mahalanobis distance discrimination-based adaptive image fusion system of claim 4, wherein: the transformation matrix H in the left and right part acquisition module of the composite image is calculated as follows:
let T be any point in the overlap region, imaged as T1 in img1 and as T2 in img2; from the mapping relationship of imaging, H1 × T = T1 and H2 × T = T2, so T1 = (H1 × H2^-1) × T2; the matrix H represents the mapping relation of the same feature points between img1 and img2 and is derived as H = H1 × H2^-1.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810114537.9A CN108460724B (en) | 2018-02-05 | 2018-02-05 | Adaptive image fusion method and system based on Mahalanobis distance discrimination |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810114537.9A CN108460724B (en) | 2018-02-05 | 2018-02-05 | Adaptive image fusion method and system based on Mahalanobis distance discrimination |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108460724A CN108460724A (en) | 2018-08-28 |
CN108460724B (en) | 2021-10-22
Family
ID=63239714
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810114537.9A Active CN108460724B (en) | 2018-02-05 | 2018-02-05 | Adaptive image fusion method and system based on Mahalanobis distance discrimination |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108460724B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109544498A (en) * | 2018-11-29 | 2019-03-29 | 燕山大学 | A kind of image adaptive fusion method |
CN112801871B (en) * | 2021-01-29 | 2024-04-05 | 中汽创智科技有限公司 | Image self-adaptive fusion method based on Chebyshev distance discrimination |
CN115953332B (en) * | 2023-03-15 | 2023-08-18 | 四川新视创伟超高清科技有限公司 | Dynamic image fusion brightness adjustment method, system and electronic equipment |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004295691A (en) * | 2003-03-27 | 2004-10-21 | Pentax Corp | Method, program, and device for image quality determination |
US20090010507A1 (en) * | 2007-07-02 | 2009-01-08 | Zheng Jason Geng | System and method for generating a 3d model of anatomical structure using a plurality of 2d images |
CN101951487B (en) * | 2010-08-19 | 2012-06-27 | 深圳大学 | Panoramic image fusion method, system and image processing equipment |
CN101984463A (en) * | 2010-11-02 | 2011-03-09 | 中兴通讯股份有限公司 | Method and device for synthesizing panoramic image |
US20140267684A1 (en) * | 2013-03-15 | 2014-09-18 | Chemlmage Corporation | System and method for detecting contamination in food using hyperspectral imaging |
CN103226708B (en) * | 2013-04-07 | 2016-06-29 | 华南理工大学 | A kind of multi-model fusion video hand division method based on Kinect |
CN103294024B (en) * | 2013-04-09 | 2015-07-08 | 宁波杜亚机电技术有限公司 | Intelligent home system control method |
CN103279923B (en) * | 2013-06-14 | 2015-12-23 | 西安电子科技大学 | Based on topography's method for amalgamation processing of overlapping region |
CN107330922A (en) * | 2017-07-04 | 2017-11-07 | 西北工业大学 | Video moving object detection method of taking photo by plane based on movable information and provincial characteristics |
CN107369148B (en) * | 2017-09-20 | 2019-09-10 | 湖北工业大学 | Based on the multi-focus image fusing method for improving SML and Steerable filter |
Also Published As
Publication number | Publication date |
---|---|
CN108460724A (en) | 2018-08-28 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |