[go: up one dir, main page]
More Web Proxy on the site http://driver.im/

CN104766319B - Lifting night takes pictures the method for image registration accuracy - Google Patents

Lifting night takes pictures the method for image registration accuracy Download PDF

Info

Publication number
CN104766319B
CN104766319B CN201510155826.XA CN201510155826A CN104766319B CN 104766319 B CN104766319 B CN 104766319B CN 201510155826 A CN201510155826 A CN 201510155826A CN 104766319 B CN104766319 B CN 104766319B
Authority
CN
China
Prior art keywords
flash
under
image
pixel
condition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201510155826.XA
Other languages
Chinese (zh)
Other versions
CN104766319A (en
Inventor
宋彬
陈鹏
秦浩
蒋国良
王博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201510155826.XA priority Critical patent/CN104766319B/en
Publication of CN104766319A publication Critical patent/CN104766319A/en
Application granted granted Critical
Publication of CN104766319B publication Critical patent/CN104766319B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Landscapes

  • Image Analysis (AREA)

Abstract

It is a kind of to lift night and take pictures the method for image registration accuracy, specifically include:It is input into image subject to registration;Image gray processing;Histogram equalization;Detection characteristic point;Matching characteristic point;Using the consistent RANSAC algorithms of random sampling, Feature Points Matching pair by mistake is rejected;Judge whether the characteristic point of the image shot under the conditions of flash lamp is excessively concentrated, if so, Feature Points Matching is carried out to equalization, otherwise, directly using the Feature Points Matching for having detected to solving affine transformation matrix;Feature Points Matching is to equalization;Using Feature Points Matching to listing equation, equation group is solved, obtain affine transformation matrix;Registering image.The present invention can be according to the positional information of the Feature Points Matching pair for having detected, the addition matching of self adaptation is right so that the distribution of characteristic point is more uniform, it is to avoid over-fitting problem caused by excessively being concentrated due to characteristic point, improves night and takes pictures the registration accuracy of image.

Description

Method for improving registration precision of night photographed image
Technical Field
The invention belongs to the technical field of image processing, and further relates to a method for improving the registration precision of a night photographed image in the technical field of image registration. The invention is used for the registration preprocessing of improving the night photographing quality by fusing the single-frame flash lamp image and the multi-frame non-flash lamp image, can effectively avoid ghost images and blurs, and greatly improves the quality of image fusion.
Background
At present, methods for registering images photographed at night mainly include a frequency domain-based method and a time domain-based method. The most representative method based on time domain is a feature-based method, which extracts feature points from an image to be registered according to a feature point extraction algorithm and performs image registration by using the extracted feature points. The characteristic-based method has the advantages of scale invariance, strong robustness and insensitivity to weak and uniform illumination change. However, the method still has the disadvantage that during night photographing, the situation that the foreground is very bright and the background is very dark occurs in the photograph when the flash lamp is turned on, and the whole image is dark when the photograph is taken without the flash lamp, so that during registration, the number of detected feature points during feature point detection is very small, and is not enough to solve an affine transformation matrix, or the feature points are particularly concentrated in a small area, and an overfitting situation is caused during the affine transformation matrix calculation.
A feature point-based Image registration method is proposed in the article "Image registration using BP-SIFT" (Journal of Visual Communication and Image registration, Volume 24, Issue 4, May 2013, Pages 448 and 457) published by Yingxuan Zhu, Samuel Cheng, Vladimir Stankovic and Lina Stankovic. The method comprises the steps of firstly extracting feature points of an image to be registered according to a BP-SIFT (belief Propagation Scale Invariantfeature transform) algorithm, then matching the obtained feature points to obtain matching pairs, obtaining a transformation matrix according to the matching pairs, and finally resampling according to the transformation matrix to obtain the final registered image. The method has a good registration effect on the image to be registered with small illumination change, but has the defects that few extracted characteristic points appear on the image to be registered with large illumination intensity change, or the extracted characteristic points are too concentrated, so that overfitting is caused, and the registration accuracy is reduced.
The patent of Shanghai university of transportation "Harris corner-based image registration method" (application date: 2009, 9, 4, application No.: 200910195131.9, publication No.: 101655982) discloses a method for image registration. The method comprises the steps of calculating a scale space of an image to be registered, solving a Harris corner in the scale space, carrying out iterative processing on the Harris corner in the scale space by using an affine form modification technology, matching feature points by using a descriptor and a matching method, and realizing registration through matching. The method has the disadvantages that when the method is applied to the registration of images of a flash lamp which is shot at night and images of a flash lamp which is not shot, the distribution of illumination is not considered, so that the detected characteristic points are not distributed uniformly, overfitting is caused, and the images are registered only in a certain area.
Disclosure of Invention
The invention aims to provide a method for improving the registration accuracy of flash and non-flash images, aiming at the defects of the prior art.
According to the method, aiming at the problems that the foreground is too bright and the background is too dark when a flash lamp is turned on during shooting at night and the whole shooting is too dark under the condition without the flash lamp, histogram equalization is firstly carried out on two images to be registered, the gray distribution concentration is improved, the contrast of the images is enhanced, scale invariant feature transformation SIFT feature point detection and matching are then carried out, mismatching points are removed by using a random sampling consistent Randac algorithm, the over-fitting problem caused by the over-concentration of the feature points is avoided, the obtained matching pairs are used, the matching pairs are added in a self-adaptive mode, the distribution of the feature points is more uniform, and finally, affine transformation matrixes are obtained by using the matching pairs and the images are registered.
In order to achieve the purpose, the invention comprises the following main steps:
(1) inputting an image to be registered:
respectively inputting an image to be registered, which is shot under the condition of a flash lamp and an image shot under the condition of no flash lamp;
(2) graying of an image:
graying the images shot under the flash condition and the non-flash condition to be registered respectively according to the following formula:
wherein, YiRepresenting the gray value of the ith pixel in the images taken with and without flash to be registered, i representing the serial numbers of the pixel points of the images taken with and without flash to be registered, B, G, R representing the blue, green and red channels of the images taken with and without flash to be registered, BiBlue channel, G, representing the ith pixel of images taken with and without flash to be registerediGreen channel, R, representing the ith pixel of images taken with and without flash to be registerediA red channel representing the ith pixel of an image taken under flash and no flash conditions to be registered;
(3) histogram equalization:
histogram equalization is performed on images to be registered taken under a flash condition and under a no-flash condition respectively according to the following formula:
sx=int[(L-1)*px+0.5];
wherein p isxRepresenting the cumulative sum of probability values of the final gray level appearance of the luminance channel matrix, x representing the gray level of the luminance channel matrix, x having a value ranging from 0 to 255, ∑ representing the summing operation, f representing the gray level of the luminance channel matrix, f being 0,1,2xRepresenting the mapped value of the gray value x in the luminance channel matrix after histogram equalization, int representsRounding operation, wherein L represents the maximum value of the gray level of the brightness channel matrix;
(4) detecting the characteristic points:
(4a) filtering images to be registered under the condition of a flash lamp and the condition of no flash lamp by Gaussian filters with different scales to obtain images, and forming a sub-octave; by analogy, downsampling images to be registered under the condition of a flash lamp and the condition of no flash lamp for one time, two times and three times respectively, performing similar filtering operation to obtain Gaussian pyramid image layers, and subtracting adjacent image layers to obtain a differential Gaussian pyramid;
(4b) in a difference Gaussian pyramid, comparing the size of a pixel point on a middle layer with 8 adjacent pixel points of the same scale layer, and the size of the pixel point with 18 adjacent pixel points of upper and lower adjacent scale layers, and if the value of the pixel point on the middle layer is the maximum value or the minimum value, taking the pixel point as a candidate feature point;
(4c) removing candidate feature points with low contrast and unstable edge response which are sensitive to noise, and the rest are final feature points;
(4d) calculating the gradient direction of a neighborhood pixel taking the final feature point as the center, representing by using a histogram, wherein the peak value of the histogram represents the main direction of the gradient of the neighborhood pixel of the final feature point, and the main direction of the gradient of the neighborhood pixel is taken as the direction of the final feature point;
(4e) taking the final feature point as a center, selecting a 16 × 16 neighborhood, dividing the neighborhood into 16 4 × 4 subregions, and calculating gradient accumulated values of 8 directions of 0 °, 45 °, 135 °, 180 °, 225 °, 270 °, 315 °, 360 ° on each subregion to generate a 128-dimensional feature vector;
(5) matching the feature points:
for each final feature point in the image shot under the flash lamp condition, finding two feature points which are closest to the final feature point of the image shot under the flash lamp condition in the image shot under the flash lamp condition by utilizing the Euclidean distance, wherein in the two feature points, if the ratio of the closest distance to the next closest distance is less than 0.4, the final feature point of the image shot under the flash lamp condition is matched with the point closest to the final feature point in the image shot under the flash lamp condition, and otherwise, the final feature points are not matched;
(6) rejecting mischaracteristic point matching pairs by using a random sample consensus (RANSAC) algorithm;
(7) judging whether the characteristic points of the image shot under the condition of the flash lamp meet the judgment condition, if so, executing the step (8), otherwise, executing the step (9);
(8) matching and balancing the feature points:
(8a) calculating the average offset of the matching pairs of the feature points of the images to be registered in the column direction and the row direction under the flash condition and the non-flash condition according to the following formula:
wherein, DeltaxAn average offset amount in a column direction of feature point matching pairs of images photographed under flash light conditions and under no flash light conditions to be registered, x represents a column direction of feature points of the images photographed under flash light conditions and under no flash light conditions to be registered, n represents a total number of feature point matching pairs of the images photographed under flash light conditions and under no flash light conditions to be registered, i represents a serial number of the feature point matching pairs of the images photographed under flash light conditions and under no flash light conditions to be registered,feature point column coordinates of images shot under the strobe condition in the pair are matched by feature points representing images shot under the strobe condition and the no-strobe condition to be registered,feature point column coordinates, Δ, of images taken under no flash in a feature point matching pair representing images taken under flash and no flash conditions of the ith to be registeredyAn average shift amount in the row direction of a matching pair of feature points representing images taken under flash conditions and under no flash conditions to be registered, y represents the row direction of the feature points of the images taken under flash conditions and under no flash conditions to be registered,the feature points representing the images taken under flash and no flash conditions to be registered match the line coordinates of the feature points of the image taken under flash in the pair,the row coordinates of the characteristic points of the images shot under the condition of no flash lamp in the matching pair of the characteristic points of the images shot under the condition of the ith flash lamp to be registered and the condition of no flash lamp are represented;
(8b) the image taken under flash conditions to be registered is divided into M × M equally sized sub-blocks according to the following equation:
where HW represents the width of the sub-block,denotes a rounding-down operation, W denotes a width of an image photographed under a flash, M denotes the number of sub-blocks per one line of the image photographed under a flash, HH denotes a height of the sub-blocks, and H denotes a height of the sub-blocksThe height of the image;
(8c) the row coordinates and column coordinates of the feature points to be added to the image taken under flash are calculated as follows:
wherein,column coordinates indicating feature points to be added, x indicates the column direction of the feature points to be added, k indicates the serial number of the feature points to be added in an image captured under flash conditions, k ═ (i × M + j) × N × N + i1 × N + j1, i indicates the number of the row corresponding to the sub-block of the same size, i ═ 0,1, 2., (M-1, i 1) indicates the number of the row corresponding to the feature point to be added in the sub-block, i1 ═ 0,1, 2., (N-1, j) indicates the number of the corresponding column of the sub-block of the same size, j ═ 0,1, 2., (M-1, j 1) indicates the number of the corresponding column of the feature points to be added in the sub-block, j1 ═ 0,1, 2., (N-1), HW indicates the width of the sub-block, and D indicates the width of the sub-blockxA distance in a column direction between feature points to be added,w represents the width of an image photographed under a flash, M represents the number of sub-blocks per line of the image photographed under a flash, N represents the number of feature points added per line of each sub-block of the image photographed under a flash,line coordinates representing the feature points to be added, y the line direction of the feature points to be added, HH the height of the sub-block, DyIndicates the distance in the line direction between the feature points to be added, and H indicates the height of an image taken under flash conditionsThe degree of the magnetic field is measured,h represents the height of the image shot under the condition of the flash lamp, M represents the number of sub-blocks of each line of the image shot under the flash lamp, and N represents the number of feature points added to each line of each sub-block of the image shot under the flash lamp;
(8d) the column coordinates and the row coordinates of the feature points to be added to the image taken under the no-flash condition are calculated according to the following formula:
wherein,the column coordinates of the feature points to be added to the image photographed under the no-flash condition are indicated, x indicates the column direction of the feature points to be added, k indicates the serial number of the feature points to be added,column coordinates, Delta, representing the characteristic points to be added to an image taken under flash conditionsxRepresenting the average amount of shift of the feature point matching pairs in the column direction of images taken under flash conditions and under no flash conditions to be registered,line coordinates representing feature points to be added to an image taken under a no-flash condition,line coordinates, Delta, representing the characteristic points to be added to an image taken under flash conditionsyIndicating flash to be registeredMatching the characteristic points of the images shot under the lamp condition and the non-flash lamp condition with the average offset of the characteristic points in the row direction;
(9) listing equations by using the feature point matching pairs obtained in the step (6) and the step (8), and solving an equation set to obtain an affine transformation matrix H;
(10) registering the images:
(10a) and calculating the pixel of the position (i, j) of the image shot under the flash lamp condition after registration according to the following formula, and calculating the position of the image shot under the corresponding flash lamp condition after mapping:
where i' denotes the column coordinates of the pixels of the image taken under flash conditions, H-1 1,1First row first column element of inverse matrix representing affine transformation matrix, H-1 1,2First row and second column elements of an inverse matrix representing an affine transformation matrix, H-1 1,3The first row and the third column of elements, H, of the inverse of the affine transformation matrix-1 2,1Second row first column element, H, of an inverse of an affine transformation matrix-1 2,2Second row and second column elements of an inverse matrix representing an affine transformation matrix, H-1 2,3Second row and third column elements of the inverse of the affine transformation matrix, H-1 3,1Third row, first column element, H, of the inverse of the affine transformation matrix-1 3,2Third row, second column element, H, of the inverse of the affine transformation matrix-1 3,3The third row and third column elements of the inverse of the affine transformation matrix, i the column coordinates of the pixels of the image taken under the registered flash, j the flash after registrationThe row coordinates of the image pixels photographed under the light condition, j' represents the row coordinates of the image pixels photographed under the flash light condition;
(10b) the pixel value of the position (i, j) pixel of the image captured under the flash condition after registration is calculated according to the following formula:
Ri,j=α1×FIi,Ij2×FIi,Ij+13×FIi+1,Ij4×FIi+1,Ij+1
wherein R isi,jRepresenting pixel values of the registered image taken under flash conditions, i representing column coordinates of the registered image pixels taken under flash conditions, j representing row coordinates of the registered image pixels taken under flash conditions, α1Representing the weight of the pixel in the upper left corner closest to the pixel of the image taken under flash conditions, FIi,IjDenotes the pixel value of the pixel at the top left corner closest to the pixel of the image taken under flash, Ii denotes the integer part of the column coordinates of the pixel of the image taken under flash, Ij denotes the integer part of the row coordinates of the pixel of the image taken under flash, α2Representing the weight of the pixel in the lower left corner closest to the pixel of the image taken under flash conditions, FIi,Ij+1A pixel value representing a pixel at the lower left corner closest to a pixel of an image photographed under a flash condition, α3Weight of the pixel in the upper right corner nearest to the pixel of the image photographed under flash light, FIi+1,IjIndicating the pixel value of the pixel in the upper right corner closest to the pixel of the image taken under flash conditions, α4Weight, F, of the pixel in the lower right corner closest to the pixel of the image taken under flash conditionsIi+1,Ij+1And represents the pixel value of the pixel at the lower right corner closest to the pixel of the image photographed under the flash condition.
Compared with the prior art, the invention has the following advantages:
firstly, the invention adopts the histogram equalization preprocessing method before the registration of the image to be registered shot at night, thereby overcoming the defect that the prior art extracts too few characteristic points of the image to be registered with great change of illumination intensity, and leading the invention to have the advantages of detecting more characteristic points and improving the registration precision.
Secondly, the invention sets the judgment condition of whether the matching pairs of the characteristic points are concentrated or not by utilizing the standard deviation information of the column coordinates and the row coordinates of the matching pairs of the characteristic points, overcomes the defect that the condition of too concentrated matching pairs of the characteristic points cannot be effectively processed in the prior art, and has the advantages of judging whether the matching pairs are too concentrated or not, then carrying out equalization processing on the characteristic points and improving the registration precision.
Thirdly, the invention can perform equalization processing on the matching pairs of the feature points according to the position information of the detected matching pairs of the feature points, overcomes the defect of over concentration of the feature points extracted from the images to be registered with large illumination intensity variation in the prior art, and has the advantages of avoiding the over-fitting problem when solving the affine transformation matrix and improving the registration precision.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a simulation of the present invention.
Detailed Description
The present invention is described in further detail below with reference to the attached drawing figures.
The steps implemented by the present invention will be described in further detail with reference to the accompanying figure 1:
step 1, inputting an image to be registered.
One image taken with flash and one image taken without flash to be registered are input separately.
And 2, graying the image.
Graying the images shot under the flash condition and the non-flash condition to be registered respectively according to the following formula:
wherein, YiRepresenting the gray value of the ith pixel in the images taken with and without flash to be registered, i representing the serial numbers of the pixel points of the images taken with and without flash to be registered, B, G, R representing the blue, green and red channels of the images taken with and without flash to be registered, BiBlue channel, G, representing the ith pixel of images taken with and without flash to be registerediGreen channel, R, representing the ith pixel of images taken with and without flash to be registerediThe red channel representing the ith pixel of the images taken with and without flash to be registered.
And step 3, histogram equalization.
Histogram equalization is performed on images to be registered taken under a flash condition and under a no-flash condition respectively according to the following formula:
sx=int[(L-1)*px+0.5];
wherein p isxRepresenting the cumulative sum of probability values of the final gray level of the brightness channel matrix, wherein x represents the gray value of the brightness channel matrix, the value range of x is 0-255, ∑ represents the summation operation, and f represents the gray value of the brightness channel matrixDegree, f 0,1,2, x, g (f), which represents the probability value of the occurrence of the final gray level of the luminance channel matrix, sxAnd expressing the mapping value of the gray value x in the luminance channel matrix after histogram equalization, int expresses rounding operation, and L expresses the maximum value of the gray level of the luminance channel matrix.
And 4, detecting the characteristic points.
Filtering images to be registered under the condition of a flash lamp and the condition of no flash lamp by Gaussian filters with different scales to obtain images, and forming a sub-octave; by analogy, down-sampling is respectively carried out once, twice and three times on images to be registered, which are shot under the condition of a flash lamp and under the condition of no flash lamp, similar filtering operation is carried out to obtain Gaussian pyramid image layers, and adjacent image layers are subtracted to obtain a difference Gaussian pyramid.
In the difference Gaussian pyramid, the sizes of the pixel point on the middle layer and 8 adjacent pixel points of the same scale layer are compared, and the sizes of the pixel point and 18 adjacent pixel points of the upper and lower adjacent scale layers are compared, and if the value of the pixel point on the middle layer is the maximum value or the minimum value, the pixel point is used as a candidate feature point.
The candidate feature points with low contrast, which are sensitive to noise, and the candidate feature points with unstable edge response are removed in the following method, and the final feature points are remained.
The method of removing the candidate feature points of low contrast that are sensitive to noise is as follows.
The first step is as follows: calculating the position of the feature point with sub-pixel level precision according to the following formula:
wherein, X' represents the position of the feature point reaching the sub-pixel level precision, D represents the difference Gaussian DOG space, and X represents the position of the feature point.
The second step is that: the value of the difference gaussian space at the feature point position of sub-pixel level accuracy is calculated as follows:
wherein D (X') represents a value of the difference gaussian space at a position of the feature point of the sub-pixel level accuracy, D (X) represents a value of the difference gaussian space at a position of the feature point, D represents a difference gaussian DOG space, and X represents a position of the feature point.
The third step: and (3) retaining the characteristic points which meet the condition that | D (X') | is more than or equal to 0.03, and eliminating the characteristic points which do not meet the condition.
The method of removing candidate feature points having unstable edge responses is as follows.
The first step is as follows: the Hessian matrix was calculated as follows:
wherein H represents a local curvature matrix of a differential Gaussian space, DxxRepresenting the second-order partial derivatives of the differential Gaussian space in the column direction of the candidate feature points, DxyRepresenting the second-order partial derivatives of the differential Gaussian space in the column and row directions of the candidate feature points, DyyAnd the second-order partial derivatives of the difference Gaussian space in the row direction of the candidate characteristic points are represented, x represents the column number of the candidate characteristic points, and y represents the row number of the candidate characteristic points.
The second step is that: calculating the ratio of the large eigenvalue to the small eigenvalue of the Hessian matrix H of Hessian according to the following formula:
wherein r represents the ratio of the large eigenvalue to the small eigenvalue of the Hessian matrix H, α represents the large eigenvalue of the Hessian matrix H, and β represents the small eigenvalue of the Hessian matrix H.
The third step: judging whether the Hessian matrix meets the following conditions:
wherein, tr (H) represents a trace of the Hessian matrix H, det (H) represents a determinant of the Hessian matrix H, and r represents a ratio of a large eigenvalue to a small eigenvalue of the Hessian matrix H.
The fourth step: and reserving the candidate characteristic points meeting the conditions, and eliminating the candidate characteristic points not meeting the conditions.
Calculating the gradient direction of the neighborhood pixels taking the final feature point as the center according to the following formula, representing by using a histogram, wherein the peak value of the histogram represents the main direction of the gradient of the neighborhood pixels of the final feature point, and the main direction of the gradient of the neighborhood pixels is taken as the direction of the final feature point:
where m (x, y) represents a module value of the gradient of the neighborhood pixel, L (x +1, y) represents a value of a pixel on the right of the neighborhood pixel in the gaussian space, L (x-1, y) represents a value of a pixel on the left of the neighborhood pixel in the gaussian space, L (x, y +1) represents a value of a pixel on the lower of the neighborhood pixel in the gaussian space, L (x, y-1) represents a value of a pixel on the upper of the neighborhood pixel in the gaussian space, x represents a column number of the neighborhood pixel, y represents a row number of the neighborhood pixel, θ (x, y) represents a direction of the gradient of the neighborhood pixel, and arctan represents an arctan operation.
With the final feature point as the center, a 16 × 16 neighborhood is selected and divided into 16 4 × 4 sub-regions, and gradient accumulation values in 8 directions of 0 °, 45 °, 135 °, 180 °, 225 °, 270 °, 315 °, 360 ° are calculated on each sub-region, so that a 128-dimensional feature vector can be generated.
And 5, matching the feature points.
And for each final characteristic point in the image shot under the flash lamp condition, finding two characteristic points which are closest to the final characteristic point of the image shot under the flash lamp condition in the image shot under the no-flash lamp condition by utilizing the Euclidean distance, wherein in the two characteristic points, if the ratio of the closest distance to the next closest distance is less than 0.4, the final characteristic point of the image shot under the flash lamp condition is matched with the point which is closest to the distance in the image shot under the no-flash lamp condition, and otherwise, the final characteristic points are not matched.
And 6, eliminating the matching pairs of the error characteristic points by using the random sampling consensus RANSAC algorithm.
The first step is as follows: and randomly selecting 4 feature point matching pairs from the feature point matching pair set.
The second step is that: and (4) listing an equation set according to the selected 4 feature point matching pairs, and solving the equation set to obtain an affine transformation matrix.
The third step: and according to the affine transformation matrix and the Euclidean distance error measurement function, searching a consistent set Consensus meeting the current affine transformation matrix from the feature point matching pair set.
The fourth step: and judging whether the number of the elements in the current consistent set is larger than that of the elements in the optimal consistent set or not, if so, updating the current consistent set into the optimal consistent set, and otherwise, keeping the original optimal consistent set.
The fifth step: the current error probability P is updated.
And a sixth step: and judging whether the updated error probability P is greater than the allowed minimum error probability, if so, executing the first step, and otherwise, taking the optimal consistent set as a final feature point matching pair.
And 7, judging whether the characteristic points of the image shot under the flash lamp condition meet the following judgment conditions, if so, executing the step (8), and otherwise, executing the step (9).
Wherein, VxDenotes the degree of deviation of the characteristic point column coordinates of the image photographed under the flash light condition from the average value, W denotes the width of the image photographed under the flash light condition, VyIndicates the degree to which the feature point row coordinates of the image photographed under the flash condition deviate from the average value, and H indicates the height of the image photographed under the flash condition.
And 8, equalizing the matching pairs of the feature points.
Calculating the average offset of the matching pairs of the feature points of the images to be registered in the column direction and the row direction under the flash condition and the non-flash condition according to the following formula:
wherein, DeltaxAn average offset amount in a column direction of feature point matching pairs of images photographed under flash light conditions and under no flash light conditions to be registered, x represents a column direction of feature points of the images photographed under flash light conditions and under no flash light conditions to be registered, n represents a total number of feature point matching pairs of the images photographed under flash light conditions and under no flash light conditions to be registered, i represents a serial number of the feature point matching pairs of the images photographed under flash light conditions and under no flash light conditions to be registered,feature point column coordinates of images shot under the strobe condition in the pair are matched by feature points representing images shot under the strobe condition and the no-strobe condition to be registered,feature point column coordinates, Δ, of images taken under no flash in a feature point matching pair representing images taken under flash and no flash conditions of the ith to be registeredyAn average shift amount in the row direction of a matching pair of feature points representing images taken under flash conditions and under no flash conditions to be registered, y represents the row direction of the feature points of the images taken under flash conditions and under no flash conditions to be registered,the feature points representing the images taken under flash and no flash conditions to be registered match the line coordinates of the feature points of the image taken under flash in the pair,the line coordinates of the feature points representing the images photographed under the flash condition and under the no-flash condition of the ith to-be-registered are matched with the feature points of the image photographed under the no-flash condition in the pair.
The image taken under flash conditions to be registered is divided into M × M equally sized sub-blocks according to the following equation:
where HW represents the width of the sub-block,denotes a rounding-down operation, W denotes a width of an image photographed under a flash, M denotes the number of sub-blocks per one line of the image photographed under a flash, HH denotes a height of the sub-blocks, and H denotes a height of the image photographed under a flash.
The row coordinates and column coordinates of the feature points to be added to the image taken under flash are calculated as follows:
wherein,column coordinates indicating feature points to be added, x indicates the column direction of the feature points to be added, k indicates the serial number of the feature points to be added in an image captured under flash conditions, k ═ (i × M + j) × N × N + i1 × N + j1, i indicates the number of the row corresponding to the sub-block of the same size, i ═ 0,1, 2., (M-1, i 1) indicates the number of the row corresponding to the feature point to be added in the sub-block, i1 ═ 0,1, 2., (N-1, j) indicates the number of the corresponding column of the sub-block of the same size, j ═ 0,1, 2., (M-1, j 1) indicates the number of the corresponding column of the feature points to be added in the sub-block, j1 ═ 0,1, 2., (N-1), HW indicates the width of the sub-block, and D indicates the width of the sub-blockxA distance in a column direction between feature points to be added,w represents the width of an image photographed under a flash, M represents the number of sub-blocks per line of the image photographed under a flash, N represents the number of feature points added per line of each sub-block of the image photographed under a flash,line coordinates representing the feature points to be added, y the line direction of the feature points to be added, HH the height of the sub-block, DyA distance in a row direction between feature points to be added, H represents a height of an image photographed under a flash condition,h denotes the height of an image captured under flash, M denotes the number of sub-blocks per line of an image captured under flash, and N denotes the number of feature points added per line of sub-blocks of an image captured under flash.
The column coordinates and the row coordinates of the feature points to be added to the image taken under the no-flash condition are calculated according to the following formula:
wherein,the column coordinates of the feature points to be added to the image photographed under the no-flash condition are indicated, x indicates the column direction of the feature points to be added, k indicates the serial number of the feature points to be added,column coordinates, Delta, representing the characteristic points to be added to an image taken under flash conditionsxRepresenting the average amount of shift of the feature point matching pairs in the column direction of images taken under flash conditions and under no flash conditions to be registered,feature points representing the intended addition of an image taken without flashThe line coordinates of (a) are set,line coordinates, Delta, representing the characteristic points to be added to an image taken under flash conditionsyThe characteristic point matching pairs representing images taken under flash conditions and under no flash conditions to be registered are offset in the average in the row direction.
And 9, listing equations by using the feature point matching pairs obtained in the steps 6 and 8, and solving an equation set to obtain an affine transformation matrix H.
Step 10, registering the images.
And calculating the pixel of the position (i, j) of the image shot under the flash lamp condition after registration according to the following formula, and calculating the position of the image shot under the corresponding flash lamp condition after mapping:
where i' denotes the column coordinates of the pixels of the image taken under flash conditions, H-1 1,1First row first column element of inverse matrix representing affine transformation matrix, H-1 1,2First row and second column elements of an inverse matrix representing an affine transformation matrix, H-1 1,3The first row and the third column of elements, H, of the inverse of the affine transformation matrix-1 2,1Second row first column element, H, of an inverse of an affine transformation matrix-1 2,2Second row and second column elements of an inverse matrix representing an affine transformation matrix, H-1 2,3Second row and third column elements of the inverse of the affine transformation matrix, H-1 3,1Third row, first column element, H, of the inverse of the affine transformation matrix-1 3,2Third row, second column element, H, of the inverse of the affine transformation matrix-1 3,3The third row, the third column element of the inverse of the affine transformation matrix is represented, i represents the column coordinates of the image pixels captured under the registered flash condition, j represents the row coordinates of the image pixels captured under the registered flash condition, and j' represents the row coordinates of the image pixels captured under the flash condition.
The pixel value of the position (i, j) pixel of the image captured under the flash condition after registration is calculated according to the following formula:
Ri,j=α1×FIi,Ij2×FIi,Ij+13×FIi+1,Ij4×FIi+1,Ij+1
wherein R isi,jRepresenting pixel values of the registered image taken under flash conditions, i representing column coordinates of the registered image pixels taken under flash conditions, j representing row coordinates of the registered image pixels taken under flash conditions, α1Representing the weight of the pixel in the upper left corner closest to the pixel of the image taken under flash conditions, FIi,IjDenotes the pixel value of the pixel at the top left corner closest to the pixel of the image taken under flash, Ii denotes the integer part of the column coordinates of the pixel of the image taken under flash, Ij denotes the integer part of the row coordinates of the pixel of the image taken under flash, α2Representing the weight of the pixel in the lower left corner closest to the pixel of the image taken under flash conditions, FIi,Ij+1A pixel value representing a pixel at the lower left corner closest to a pixel of an image photographed under a flash condition, α3Weight of the pixel in the upper right corner nearest to the pixel of the image photographed under flash light, FIi+1,IjIndicating the pixel value of the pixel in the upper right corner closest to the pixel of the image taken under flash conditions, α4Weight, F, of the pixel in the lower right corner closest to the pixel of the image taken under flash conditionsIi+1,Ij+1Representing the lower right corner closest to a pixel of an image taken under flash conditionsThe pixel value of the pixel.
The simulation effect of the present invention will be further explained with reference to fig. 2.
1. Simulation data:
the test images to be processed used for the simulation were one frame of an on-flash image and one frame of an off-flash image, which were continuously photographed, the image size was 5312 × 2988, the image had R, G, B three color channels, and each channel rank was 256.
2. Simulation result and analysis:
FIG. 2 is a graph of simulation results of the present invention, wherein FIG. 2(a) is a photograph taken with the flash turned on to be registered; FIG. 2(b) is a photograph taken without flash as a reference frame; FIG. 2(c) is a diagram of the effect of two frame fusion after the registration of the traditional SIFT algorithm; fig. 2(d) is a diagram showing the effect of two-frame fusion after registration according to the present invention.
Comparing the four sub-graphs in the attached figure 2, it can be seen that the double image phenomenon occurs to the light in the floor, the disconnection phenomenon occurs to the vertical bar interval between the windows of the second floor on the left, and the virtual image occurs to the trolley under the floor in the effect graph which is fused after the registration by the traditional SIFT algorithm. The invention can greatly increase the detected matching pairs by utilizing the preprocessing step of histogram equalization, and can solve the problem of local overfitting by adopting an algorithm of adaptive matching pair addition, thereby effectively solving the ghost problem.
In summary, it can be seen that the present invention can improve the registration accuracy of a picture taken with a flash and a picture taken without a flash, and overcome the problem of ghosting when a general SIFT algorithm is applied to the above situations.

Claims (6)

1. A method for improving the registration accuracy of a night photographed image comprises the following steps:
(1) inputting an image to be registered:
respectively inputting an image to be registered, which is shot under the condition of a flash lamp and an image shot under the condition of no flash lamp;
(2) graying of an image:
graying the images shot under the flash condition and the non-flash condition to be registered respectively according to the following formula:
Y i = ( 2365 × B i + 23434 × G i + 6969 × R i ) 32768 ;
wherein, YiRepresenting the gray value of the ith pixel in the images taken with and without flash to be registered, i representing the serial numbers of the pixel points of the images taken with and without flash to be registered, B, G, R representing the blue, green and red channels of the images taken with and without flash to be registered, BiBlue channel, G, representing the ith pixel of images taken with and without flash to be registerediGreen channel, R, representing the ith pixel of images taken with and without flash to be registerediA red channel representing the ith pixel of an image taken under flash and no flash conditions to be registered;
(3) histogram equalization:
histogram equalization is performed on images to be registered taken under a flash condition and under a no-flash condition respectively according to the following formula:
p x = Σ f = 0 x g ( f ) ;
sx=int[(L-1)*px+0.5];
wherein p isxRepresenting the cumulative sum of probability values of the final gray level appearance of the luminance channel matrix, x representing the gray level of the luminance channel matrix, x having a value ranging from 0 to 255, ∑ representing the summing operation, f representing the gray level of the luminance channel matrix, f being 0,1,2xRepresenting a mapping value of a gray value x in the luminance channel matrix after histogram equalization, wherein int represents rounding operation, and L represents the maximum value of the gray level of the luminance channel matrix;
(4) detecting the characteristic points:
(4a) filtering images to be registered under the condition of a flash lamp and the condition of no flash lamp by Gaussian filters with different scales to obtain images, and forming a sub-octave; by analogy, downsampling images to be registered under the condition of a flash lamp and the condition of no flash lamp for one time, two times and three times respectively, performing similar filtering operation to obtain Gaussian pyramid image layers, and subtracting adjacent image layers to obtain a differential Gaussian pyramid;
(4b) in a difference Gaussian pyramid, comparing the size of a pixel point on a middle layer with 8 adjacent pixel points of the same scale layer, and the size of the pixel point with 18 adjacent pixel points of upper and lower adjacent scale layers, and if the value of the pixel point on the middle layer is the maximum value or the minimum value, taking the pixel point as a candidate feature point;
(4c) removing candidate feature points with low contrast and unstable edge response which are sensitive to noise, and the rest are final feature points;
(4d) calculating the gradient direction of a neighborhood pixel taking the final feature point as the center, representing by using a histogram, wherein the peak value of the histogram represents the main direction of the gradient of the neighborhood pixel of the final feature point, and the main direction of the gradient of the neighborhood pixel is taken as the direction of the final feature point;
(4e) taking the final feature point as a center, selecting a 16 × 16 neighborhood, dividing the neighborhood into 16 4 × 4 subregions, and calculating gradient accumulated values of 8 directions of 0 °, 45 °, 135 °, 180 °, 225 °, 270 °, 315 °, 360 ° on each subregion to generate a 128-dimensional feature vector;
(5) matching the feature points:
for each final feature point in the image shot under the flash lamp condition, finding two feature points which are closest to the final feature point of the image shot under the flash lamp condition in the image shot under the flash lamp condition by utilizing the Euclidean distance, wherein in the two feature points, if the ratio of the closest distance to the next closest distance is less than 0.4, the final feature point of the image shot under the flash lamp condition is matched with the point closest to the final feature point in the image shot under the flash lamp condition, and otherwise, the final feature points are not matched;
(6) rejecting mischaracteristic point matching pairs by using a random sample consensus (RANSAC) algorithm;
(7) judging whether the characteristic points of the image shot under the condition of the flash lamp meet the judgment condition, if so, executing the step (8), otherwise, executing the step (9);
(8) matching and balancing the feature points:
(8a) calculating the average offset of the matching pairs of the feature points of the images to be registered in the column direction and the row direction under the flash condition and the non-flash condition according to the following formula:
Δ x = 1 n Σ i = 1 n ( PF i x - PN i x ) , Δ y = 1 n Σ i = 1 n ( PF i y - PN i y ) ;
wherein, DeltaxAn average offset amount in a column direction of feature point matching pairs of images photographed under flash light conditions and under no flash light conditions to be registered, x represents a column direction of feature points of the images photographed under flash light conditions and under no flash light conditions to be registered, n represents a total number of feature point matching pairs of the images photographed under flash light conditions and under no flash light conditions to be registered, i represents a serial number of the feature point matching pairs of the images photographed under flash light conditions and under no flash light conditions to be registered,feature point column coordinates of images shot under the strobe condition in the pair are matched by feature points representing images shot under the strobe condition and the no-strobe condition to be registered,feature point column coordinates, Δ, of images taken under no flash in a feature point matching pair representing images taken under flash and no flash conditions of the ith to be registeredyAn average shift amount in the row direction of a matching pair of feature points representing images taken under flash conditions and under no flash conditions to be registered, y represents the row direction of the feature points of the images taken under flash conditions and under no flash conditions to be registered,feature point matching of images taken under mid-flash condition representing the ith to-be-registered flash condition and no-flash conditionThe coordinates of the line of points are,the row coordinates of the characteristic points of the images shot under the condition of no flash lamp in the matching pair of the characteristic points of the images shot under the condition of the ith flash lamp to be registered and the condition of no flash lamp are represented;
(8b) the image taken under flash conditions to be registered is divided into M × M equally sized sub-blocks according to the following equation:
where HW represents the width of the sub-block,denotes a rounding-down operation, W denotes a width of an image photographed under a flash, M denotes the number of sub-blocks per one line of the image photographed under a flash, HH denotes a height of the sub-blocks, and H denotes a height of the image photographed under a flash;
(8c) the row coordinates and column coordinates of the feature points to be added to the image taken under flash are calculated as follows:
PF k x = j × H W + j 1 × D x ;
PF k y = i × H H + i 1 × D y ;
wherein,column coordinates indicating feature points to be added, x indicates the column direction of the feature points to be added, k indicates the serial number of the feature points to be added in an image captured under flash conditions, k ═ (i × M + j) × N × N + i1 × N + j1, i indicates the number of the row corresponding to the sub-block of the same size, i ═ 0,1, 2., (M-1, i 1) indicates the number of the row corresponding to the feature point to be added in the sub-block, i1 ═ 0,1, 2., (N-1, j) indicates the number of the corresponding column of the sub-block of the same size, j ═ 0,1, 2., (M-1, j 1) indicates the number of the corresponding column of the feature points to be added in the sub-block, j1 ═ 0,1, 2., (N-1), HW indicates the width of the sub-block, and D indicates the width of the sub-blockxA distance in a column direction between feature points to be added,w represents the width of an image photographed under a flash, M represents the number of sub-blocks per line of the image photographed under a flash, N represents the number of feature points added per line of each sub-block of the image photographed under a flash,line coordinates representing the feature points to be added, y the line direction of the feature points to be added, HH the height of the sub-block, DyA distance in a row direction between feature points to be added, H represents a height of an image photographed under a flash condition,h represents the height of the image shot under the condition of the flash lamp, M represents the number of sub-blocks of each line of the image shot under the flash lamp, and N represents the number of feature points added to each line of each sub-block of the image shot under the flash lamp;
(8d) the column coordinates and the row coordinates of the feature points to be added to the image taken under the no-flash condition are calculated according to the following formula:
PN k x = PF k x - Δ x ;
PN k y = PF k y - Δ y ;
wherein,the column coordinates of the feature points to be added to the image photographed under the no-flash condition are indicated, x indicates the column direction of the feature points to be added, k indicates the serial number of the feature points to be added,column coordinates, Delta, representing the characteristic points to be added to an image taken under flash conditionsxRepresenting the average amount of shift of the feature point matching pairs in the column direction of images taken under flash conditions and under no flash conditions to be registered,line coordinates representing feature points to be added to an image taken under a no-flash condition,line coordinates, Delta, representing the characteristic points to be added to an image taken under flash conditionsyAn average offset in the row direction of a matching pair of feature points representing images taken under a flash condition and a no-flash condition to be registered;
(9) listing equations by using the feature point matching pairs obtained in the step (6) and the step (8), and solving an equation set to obtain an affine transformation matrix H;
(10) registering the images:
(10a) and calculating the pixel of the position (i, j) of the image shot under the flash lamp condition after registration according to the following formula, and calculating the position of the image shot under the corresponding flash lamp condition after mapping:
i , = H - 1 1 , 1 * i + H - 1 1 , 2 * j + H - 1 1 , 3 H - 1 3 , 1 * i + H - 1 3 , 2 * j + H - 1 3 , 3 ;
j , = H - 1 2 , 1 * i + H - 1 2 , 2 * j + H - 1 2 , 3 H - 1 3 , 1 * i + H - 1 3 , 2 * j + H - 1 3 , 3 ;
where i' denotes the column coordinates of the pixels of the image taken under flash conditions, H-1 1,1First row first column element of inverse matrix representing affine transformation matrix, H-1 1,2First row and second column elements of an inverse matrix representing an affine transformation matrix, H-1 1,3The first row and the third column of elements, H, of the inverse of the affine transformation matrix-1 2,1Second row first column element, H, of an inverse of an affine transformation matrix-1 2,2Second row and second column elements of an inverse matrix representing an affine transformation matrix, H-1 2,3Second row and third column elements of the inverse of the affine transformation matrix, H-1 3,1Third row, first column element, H, of the inverse of the affine transformation matrix-1 3,2Third row, second column element, H, of the inverse of the affine transformation matrix-1 3,3A third row, third column element of the inverse of the affine transformation matrix, i represents the column coordinates of the image pixels captured under the registered flash condition, j represents the row coordinates of the image pixels captured under the registered flash condition, and j' represents the row coordinates of the image pixels captured under the flash condition;
(10b) the pixel value at position (i, j) of the registered image is calculated according to the following formula:

R_{i,j} = α1 × F_{Ii,Ij} + α2 × F_{Ii,Ij+1} + α3 × F_{Ii+1,Ij} + α4 × F_{Ii+1,Ij+1};

wherein R_{i,j} denotes the pixel value at position (i, j) of the registered image, i denotes the column coordinate and j the row coordinate of the registered pixel, α1 denotes the weight of the upper-left pixel closest to the mapped pixel of the image taken under the flash condition and F_{Ii,Ij} the value of that upper-left pixel, Ii denotes the integer part of the column coordinate of the mapped pixel, Ij denotes the integer part of the row coordinate of the mapped pixel, α2 denotes the weight of the closest lower-left pixel and F_{Ii,Ij+1} its value, α3 denotes the weight of the closest upper-right pixel and F_{Ii+1,Ij} its value, and α4 denotes the weight of the closest lower-right pixel and F_{Ii+1,Ij+1} its value.
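As a rough illustration of step (10), the sketch below inverts a 3×3 transformation matrix and resamples the flash image with the bilinear weights of formula (10b). All names are hypothetical, boundary handling is simplified, and in practice a library routine such as OpenCV's cv2.warpPerspective would replace the explicit loops.

```python
import numpy as np

def register_flash_image(F, Hmat):
    """Sketch of step (10): warp the flash image F with the 3x3 matrix Hmat.

    Each output pixel (i, j) is mapped through the inverse matrix to a
    sub-pixel position (i', j') in F (formulas of step (10a)), and the
    four surrounding pixels are blended with bilinear weights a1..a4
    (formula of step (10b)).
    """
    Hinv = np.linalg.inv(Hmat)
    h, w = F.shape[:2]
    R = np.zeros_like(F)
    for j in range(h):                      # row coordinate of the output
        for i in range(w):                  # column coordinate of the output
            denom = Hinv[2, 0] * i + Hinv[2, 1] * j + Hinv[2, 2]
            ip = (Hinv[0, 0] * i + Hinv[0, 1] * j + Hinv[0, 2]) / denom
            jp = (Hinv[1, 0] * i + Hinv[1, 1] * j + Hinv[1, 2]) / denom
            Ii, Ij = int(np.floor(ip)), int(np.floor(jp))  # integer parts
            if not (0 <= Ii < w - 1 and 0 <= Ij < h - 1):
                continue                    # mapped outside the source image
            u, v = ip - Ii, jp - Ij         # fractional parts
            a1 = (1 - u) * (1 - v)          # upper-left weight
            a2 = (1 - u) * v                # lower-left weight
            a3 = u * (1 - v)                # upper-right weight
            a4 = u * v                      # lower-right weight
            R[j, i] = (a1 * F[Ij, Ii] + a2 * F[Ij + 1, Ii]
                       + a3 * F[Ij, Ii + 1] + a4 * F[Ij + 1, Ii + 1])
    return R
```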
2. The method for improving the registration accuracy of the night-time photographed image according to claim 1, wherein: the method for removing the candidate feature points with low contrast which are sensitive to noise in the step (4c) specifically comprises the following steps:
the first step is as follows: calculating the position of the feature point with sub-pixel level precision according to the following formula:
X' = −(∂²D/∂X²)⁻¹ · (∂D/∂X);
wherein X' represents the position of the feature point reaching the sub-pixel level precision, D represents a difference Gaussian DOG space, and X represents the position of the feature point;
the second step is that: the value of the difference gaussian space at the feature point position of sub-pixel level accuracy is calculated as follows:
D(X') = D(X) + (1/2) (∂D/∂X)ᵀ · X';
wherein D (X') represents a value of the difference gaussian space at a position of the feature point of the sub-pixel level accuracy, D (X) represents a value of the difference gaussian space at a position of the feature point, D represents a difference gaussian DOG space, and X represents a position of the feature point;
the third step: and (3) retaining the characteristic points which meet the condition that | D (X') | is more than or equal to 0.03, and eliminating the characteristic points which do not meet the condition.
3. The method for improving the registration accuracy of the night-time photographed image according to claim 1, wherein: the method for removing the candidate feature points with unstable edge responses in the step (4c) specifically comprises the following steps:
the first step is as follows: the Hessian matrix is calculated as follows:
H = | Dxx  Dxy |
    | Dxy  Dyy |;
wherein H denotes the local curvature matrix of the difference Gaussian space, Dxx denotes the second-order partial derivative of the difference Gaussian space in the column direction at the candidate feature point, Dxy denotes the mixed second-order partial derivative in the column and row directions, Dyy denotes the second-order partial derivative in the row direction, x denotes the column number of the candidate feature point, and y denotes the row number of the candidate feature point;
the second step is that: calculating the ratio of the large eigenvalue to the small eigenvalue of the Hessian matrix H according to the following formula:
r = α / β;
wherein r represents the ratio of the large eigenvalue to the small eigenvalue of the Hessian matrix H, α represents the large eigenvalue of the Hessian matrix H, and β represents the small eigenvalue of the Hessian matrix H;
the third step: judging whether the Hessian matrix meets the following conditions:
Tr(H)² / Det(H) < (r + 1)² / r;
wherein Tr (H) represents the trace of the Hessian matrix H, Det (H) represents the determinant of the Hessian matrix H, and r represents the ratio of the large eigenvalue to the small eigenvalue of the Hessian matrix H;
the fourth step: and reserving the candidate characteristic points meeting the conditions, and eliminating the candidate characteristic points not meeting the conditions.
4. The method for improving the registration accuracy of the night-time photographed image according to claim 1, wherein: the formula for calculating the modulus and direction of the gradient of the neighborhood pixels with the final feature point as the center in the step (4d) is as follows:
m(x, y) = sqrt( (L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))² );

θ(x, y) = arctan( (L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y)) );
wherein m(x, y) denotes the modulus of the gradient of a neighborhood pixel centered on the final feature point, L(x+1, y) denotes the value in the Gaussian space of the pixel to the right of the neighborhood pixel, L(x−1, y) denotes the value of the pixel to its left, L(x, y+1) denotes the value of the pixel below it, L(x, y−1) denotes the value of the pixel above it, x denotes the column number of the neighborhood pixel, y denotes the row number of the neighborhood pixel, θ(x, y) denotes the direction of the gradient of the neighborhood pixel, and arctan denotes the arctangent operation.
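A vectorized sketch of claim 4 over a full Gaussian-pyramid level is given below; np.arctan2 is used in place of the plain arctangent so the quadrant of the direction is preserved, which the claim's formula leaves implicit.

```python
import numpy as np

def gradient_mod_and_dir(L):
    """Sketch of claim 4: per-pixel gradient modulus and direction.

    L is one Gaussian-smoothed level; differences are taken between the
    right/left and lower/upper neighbours of every interior pixel.
    """
    dx = L[1:-1, 2:] - L[1:-1, :-2]    # L(x+1, y) - L(x-1, y)
    dy = L[2:, 1:-1] - L[:-2, 1:-1]    # L(x, y+1) - L(x, y-1)
    m = np.sqrt(dx ** 2 + dy ** 2)     # modulus m(x, y)
    theta = np.arctan2(dy, dx)         # direction theta(x, y)
    return m, theta
```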
5. The method for improving the registration accuracy of the night-time photographed image according to claim 1, wherein: the random sample consensus RANSAC algorithm in the step (6) comprises the following specific steps:
the first step is as follows: randomly selecting 4 feature point matching pairs from the feature point matching pair set;
the second step is that: listing an equation set according to the selected 4 feature point matching pairs, and solving the equation set to obtain an affine transformation matrix;
the third step: according to the affine transformation matrix and the Euclidean distance error measurement function, searching a consistent set Consensus meeting the current affine transformation matrix from the feature point matching pair set;
the fourth step: judging whether the number of the elements in the current consistent set is larger than that of the elements in the optimal consistent set or not, if so, updating the current consistent set into the optimal consistent set, and otherwise, keeping the original optimal consistent set;
the fifth step: updating the current error probability P;
the sixth step: judge whether the updated error probability P is greater than the allowed minimum error probability; if so, return to the first step, otherwise take the optimal consensus set as the final feature point matching pairs.
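The loop of claim 5 can be sketched as below. The inlier tolerance and fixed iteration count are assumptions standing in for the claim's error-probability update, and the affine model is fit to each 4-pair sample by least squares.

```python
import numpy as np

def ransac_affine(src, dst, n_iter=2000, tol=3.0):
    """Sketch of claim 5: RANSAC over feature point matching pairs.

    src, dst : (K, 2) arrays of matched coordinates.  Four pairs are
    drawn per round (as in the claim), the affine model is fit by least
    squares, and the consensus set is measured with Euclidean distance.
    """
    best_inliers = np.zeros(len(src), dtype=bool)
    src_h = np.hstack([src, np.ones((len(src), 1))])   # homogeneous points
    for _ in range(n_iter):
        idx = np.random.choice(len(src), 4, replace=False)
        A, _, _, _ = np.linalg.lstsq(src_h[idx], dst[idx], rcond=None)
        err = np.linalg.norm(src_h @ A - dst, axis=1)  # Euclidean error measure
        inliers = err < tol
        if inliers.sum() > best_inliers.sum():         # larger consensus set wins
            best_inliers = inliers
    return best_inliers
```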
6. The method for improving the registration accuracy of the night-time photographed image according to claim 1, wherein: the feature points of the image taken under the flash condition in step (7) are judged to be excessively concentrated when the following conditions are satisfied simultaneously:
Vx < W / 3, Vy < H / 3;
wherein Vx denotes the degree to which the feature point column coordinates of the image taken under the flash condition deviate from their average value, W denotes the width of the image taken under the flash condition, Vy denotes the degree to which the feature point row coordinates deviate from their average value, and H denotes the height of the image taken under the flash condition.
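Finally, a sketch of claim 6's concentration test. The claim does not say how Vx and Vy are computed from the coordinates, so the mean absolute deviation used here is an assumption.

```python
import numpy as np

def is_concentrated(pts, W, H):
    """Sketch of claim 6: decide whether the flash-image feature points
    are excessively concentrated.

    pts is a (K, 2) array of (column, row) coordinates; Vx and Vy are
    taken here as mean absolute deviations from the mean position.
    """
    Vx = np.abs(pts[:, 0] - pts[:, 0].mean()).mean()
    Vy = np.abs(pts[:, 1] - pts[:, 1].mean()).mean()
    return (Vx < W / 3.0) and (Vy < H / 3.0)
```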