CN113570523B - Method for automatic generation and extrapolation of relative radiation correction coefficients for optical images - Google Patents
- Publication number: CN113570523B (application CN202110877407.2A)
- Authority: CN (China)
- Legal status: Active (the legal status is an assumption and not a legal conclusion)
Classifications
- G06T5/90—Dynamic range modification of images or parts thereof
- G06T5/40—Image enhancement or restoration using histogram techniques
- G06T7/90—Determination of colour characteristics
- G06T7/97—Determining parameters from multiple pictures
- G06T2207/10004—Still image; Photographic image
Abstract
The invention discloses a method for automatically generating and extrapolating relative radiometric correction coefficients for optical images, comprising the following steps: acquire an optical image with the on-board camera, parse the auxiliary data in the yaw calibration data downlinked by the satellite to obtain the camera's current gear gain G_cur and current integration stage count I_cur, and set the camera's target gear gain G_tgt and target integration stage count I_tgt; perform oblique-line (skew) correction on the yaw image data; compute a gray-level histogram for each detector element and a composite histogram over all detector elements, and match the histograms to obtain the relative radiometric correction coefficients of the camera's current gear; fit the relative radiometric correction coefficients with a straight line to obtain linear radiometric correction coefficients; and compute the relative radiometric correction coefficients of the camera's target gear with a linear extrapolation formula. The method does not depend on manual image selection, effectively reduces the calibration workload, produces more accurate relative radiometric calibration coefficients, and improves the quality of the calibration coefficients.
Description
Technical Field
The invention belongs to the field of image processing and relates to a method for automatically generating and extrapolating relative radiometric correction coefficients for optical images.
Background
Relative radiometric correction of images is a fundamental image data processing task and an extremely important way to improve image quality. Compared with calibration-based methods, statistical methods require no special training data; they extract a correction coefficient directly from the images through analysis and statistics and then use it to correct the images. Conventional statistical methods, however, have many drawbacks: image selection is manual, so it is highly arbitrary and the resulting relative radiometric calibration coefficients are not very accurate; the statistics over a large number of images consume a great deal of time; and because many images are needed, they must first be accumulated, during which the correction effect cannot be guaranteed. The timeliness is therefore very poor, and such methods cannot meet the high-frequency, fine-grained relative radiometric calibration requirements of today's high-resolution, large-field-of-view optical images.
With the development of agile camera platforms, the yaw imaging mode provides an important basis for high-precision relative radiometric correction. In the yaw imaging mode, every detector element images the same ground object within a short time, so terrain, illumination conditions, the underlying surface, atmospheric scattering, and similar factors exert no differential influence, and the gray-distribution differences between detector elements can simply be attributed to inconsistent device response.
In practical applications, the solar incidence angle at the time of imaging varies with latitude, time, and so on, and the terrain and land cover of the imaged area differ globally. To keep the image within a suitable gray range, the integration stage count and gain stage count are set from a preset lookup table according to the imaging time and position. Different combinations of integration stages and gain stages form different gears. The relative radiometric correction coefficients of different gears are correlated but not identical. Too many gears greatly increase the relative radiometric calibration workload, while gears with too little data yield poor-quality calibration coefficients. Studying the conversion of relative radiometric coefficients between gears and providing a method for automatically generating and extrapolating relative radiometric correction coefficients is therefore of great significance for reducing the calibration workload and improving coefficient quality.
Disclosure of Invention
Aiming at the problem of timely updating the relative radiometric correction coefficients of an optical payload's many gears, the invention discloses an effective and timely method that extrapolates the relative radiometric correction coefficients of all gears from those of only a few gears.
The invention discloses a method for automatically generating and extrapolating relative radiometric correction coefficients for an optical image, which comprises the following steps:
S1, acquire optical images with the on-board camera, parse the auxiliary data in the yaw calibration data downlinked by the satellite to obtain the camera's current gear gain G_cur and current integration stage count I_cur, and set the camera's target gear gain G_tgt and target integration stage count I_tgt;
S2, perform oblique-line (skew) correction on the yaw image data, cutting off the upper-left and lower-right triangles of the yaw image. After the satellite's yaw angle is rotated by 90 degrees, the on-board camera images the ground and the drift angle caused by Earth's rotation is corrected, so that the camera's CCD linear array is parallel to the push-broom direction of the satellite's orbital motion; the camera then images along that push-broom direction to obtain the yaw image data. The yaw image is N pixels wide in the column direction, corresponding to the N detector elements of the camera's CCD sensor, and M pixels long in the row direction, corresponding to M push-broom lines of the CCD sensor, with M > N. Pixels are numbered from 1, top to bottom and left to right. If the row/column number (i, j) of a pixel satisfies i + j < N, the pixel is excluded from the actual statistical area, i.e., the upper-left triangle of the yaw image is cut off; if (i, j) satisfies i + j > M, the pixel is likewise excluded, i.e., the lower-right triangle is cut off. This yields the actual statistical area of the yaw image data. The oblique-line correction is expressed by the formula:

Stat(Pixel(i, j)) = False, if i + j < N or i + j > M; True, otherwise,

where Stat(Pixel(i, j)) is the statistical flag of the pixel in row i and column j of the yaw image, False means the pixel is excluded from the actual statistical area of the yaw image data, True means it is included, and Pixel(i, j) denotes the pixel in row i and column j of the yaw image.
S3, calculate a gray-level histogram for each detector element and a composite histogram over all detector elements, then match the histograms to obtain the relative radiometric correction coefficients of the camera's current gear;
Step S3 specifically comprises: for the detector element of column j of the yaw image, compute the gray-level histogram P_j of the image acquired by that single detector element and its cumulative probability density function S_j, 1 ≤ j ≤ N.

For the histogram P_j, the entry at gray value k is computed as:

P_j(k) = m_j(k) / M_j,

where m_j(k) is the number of pixels with gray value equal to k in the image acquired by detector element j, and M_j is the total number of pixels acquired by detector element j. For the histogram P_j, the cumulative probability density function at pixel gray value k is:

S_j(k) = sum over l = 0..k of P_j(l),

where l runs over the gray values. Merging the histograms of all detector elements gives the composite histogram P of all detector elements. For the composite histogram P at gray value k:

P(k) = (sum over j = 1..N of m_j(k)) / (sum over j = 1..N of M_j).

For the composite histogram P of all detector elements, the cumulative probability density function at pixel gray value k is:

V(k) = sum over l = 0..k of P(l).
In the histogram matching step, the composite histogram over all detector elements serves as the desired histogram, and a lookup table is built from it so that, after matching, the probability density function of each detector element's histogram equals that of the desired histogram.

From the cumulative probability density function S_j of the histogram P_j of detector element j and the cumulative probability density function V of the desired histogram, the relative radiometric correction coefficients of detector element j are computed and recorded in the gray-level lookup table T_j, 1 ≤ j ≤ N, where T_j denotes the gray-level lookup table of detector element j.
For the cumulative probability S_j(k) at pixel gray value k of histogram P_j, search the cumulative probability density function V of the desired histogram gray value by gray value, starting from gray value 0, for the gray value l satisfying:

V(l) ≤ S_j(k) ≤ V(l+1).

Once l is found, if |V(l) - S_j(k)| - |V(l+1) - S_j(k)| ≤ 0, the lookup table T_j records, at pixel gray value k, the correction coefficient of detector element j as:

T_j(k) = l;

if |V(l) - S_j(k)| - |V(l+1) - S_j(k)| > 0, it records:

T_j(k) = l + 1.

The above procedure is carried out for every detector element.
S4, fit the relative radiometric correction coefficients obtained in step S3 with a straight line to obtain linear radiometric correction coefficients.

For the correction coefficient T_j(k) of detector element j at pixel gray value k, the gray value satisfies k ∈ [0, DNMax], where DNMax is the upper limit of the gray range. The linear fit uses a window within [0, DNMax] centred on the middle gray value, with start as the lower limit and end as the upper limit of the fitted range (int() denoting the floor operation in their defining formulas).

Taking the gray values k ∈ [start, end] as the independent variable and the corresponding correction coefficients T_j(k) as the dependent variable, the linear fitting parameters A_j and B_j of detector element j follow from the least-squares formulas:

A_j = (n·Σ k·T_j(k) - Σk · ΣT_j(k)) / (n·Σk² - (Σk)²),
B_j = (ΣT_j(k) - A_j·Σk) / n,

with the sums taken over k ∈ [start, end] and n = end - start + 1. The linear relative radiometric correction coefficient obtained by the fit is:

y_j(k) = A_j·k + B_j,

where y_j(k) is the linear relative radiometric correction coefficient of detector element j at gray value k.
S5, compute the relative radiometric correction coefficients of the camera's target gear with a linear extrapolation formula.

Step S4 yields the linear relative radiometric correction coefficient y_cur of the camera's current gear, with gain stage G_cur and integration stage count I_cur. Linearly extrapolating y_cur gives the relative radiometric correction coefficient y_tgt of the target gear, with gain stage G_tgt and integration stage count I_tgt.

The gain stage amplifies the image gray values exponentially:

DN_out1 = u^G · DN_in,

where DN_in is the input gray value, DN_out1 is the output gray value under the gain stage, u is a basic gain parameter determined by the camera, and G is the gain stage count.
The camera's CCD sensor is a TDI CCD, an array formed from multiple linear-array optical sensors. A yaw image shot by the camera is the superposition of several line-scan images, and the integration stage count is the number of superposed line scans; it amplifies the image gray values linearly:

DN_out2 = I · DN_in,

where I is the integration stage count and DN_out2 is the output gray value under its effect.

Under the combined influence of the integration stage count and the gain stage, the output gray value DN_out is:

DN_out = I·u^G·DN_in = I·u^G·(k0·DN_r + b) = I·u^G·k0·DN_r + I·u^G·b,

where DN_r is the noise-free input gray value, k0 is the multiplicative noise, and b is the additive noise.
Conversion function f (DN)out) For outputting grey values DNoutConversion to IuGDNrThe expression is as follows,
the relative radiation correction coefficient is used for characterizing the conversion function, and for a certain gray value x, the linear extrapolation formula of the corresponding relative radiation correction coefficient y is as follows:
y=Kx+B,
where K is the gain of the linear versus radiometric correction factor and B is the offset of the linear versus radiometric correction factor according to the transfer function f (DN)out) Obtaining: k is 1/K0,B=-(IuGb)/k0. The integral level corresponding to the reference gear coefficient of the camera is assumed to be IsThe number of gain stages corresponding to the reference gear coefficient of the camera is GsThe integral series corresponding to the extrapolation gear coefficient of the camera is IpThe number of gain stages corresponding to the extrapolated gear coefficients of the camera is GpThen the extrapolated gear coefficient y of the camera0Expressed as:
gain stage number G for current gear coefficientcurIntegral series I of current gear coefficientcurAnd the gain stage number G of the target gear coefficienttgtIntegral number of stages I of target gear coefficienttgtIts corresponding extrapolated relative radiation correction coefficient ytgtParameter A oftgtAnd BtgtRespectively as follows:
when a gray value k is further obtained, the relative radiation correction coefficient y of the gear is extrapolatedtgt(k) Comprises the following steps:
thereby obtaining the relative radiometric correction factor of the target gear of the camera on the satellite.
The beneficial effects of the invention are as follows:

Compared with the traditional on-orbit statistical relative radiometric correction method, this method does not depend on manual image selection, is universally applicable, and produces more accurate relative radiometric calibration coefficients. It needs no statistics over a large number of images; a small amount of yaw image data suffices. Because no image accumulation is required, one yaw imaging pass yields the calibration data, so timeliness is very high and the high-frequency, fine-grained relative radiometric calibration requirements of today's high-resolution, large-field-of-view optical images can be met. From the single-gear relative radiometric correction coefficients obtained in one yaw calibration, the coefficients of the other gears are obtained by extrapolation, effectively reducing the calibration workload and improving the quality of the calibration coefficients.
Drawings
FIG. 1 is a schematic view of a yaw imaging mode;
FIG. 2 is a schematic flow chart of an embodiment of the present invention;
FIG. 3 is a schematic diagram of skew correction of yaw image data.
Detailed Description
For a better understanding of the present disclosure, an example is given here.
The invention discloses a method for automatically generating and extrapolating relative radiometric correction coefficients for an optical image. FIG. 1 is a schematic view of the yaw imaging mode; FIG. 2 is a schematic flow chart of an embodiment of the invention; FIG. 3 is a schematic diagram of the skew correction of yaw image data, in which (a) is the original yaw image and (b) is the yaw image after skew correction.
The method comprises the following specific steps:
S1, acquire optical images with the on-board camera, parse the auxiliary data in the yaw calibration data downlinked by the satellite to obtain the camera's current gear gain G_cur and current integration stage count I_cur, and set the camera's target gear gain G_tgt and target integration stage count I_tgt;
S2, perform skew correction on the yaw image data (also called yaw strip data).
Because the linear-array CCD detector elements do not image the same ground feature simultaneously but pass over it in sequence, the same feature appears in the yaw image data as an oblique line of about 45 degrees, as shown in FIG. 3(a). In the initial and final stages of imaging, the detector elements image different ground features, so the yaw image data are skew-corrected and the upper-left and lower-right triangles of the yaw image are cut off. After the satellite's yaw angle is rotated by 90 degrees, the on-board camera images the ground and the drift angle caused by Earth's rotation is corrected, so that the camera's CCD linear array is parallel to the push-broom direction of the satellite's orbital motion; the camera then images along that push-broom direction to obtain the yaw image data. The yaw image is N pixels wide in the column direction, corresponding to the N detector elements of the camera's CCD sensor, and M pixels long in the row direction, corresponding to M push-broom lines of the CCD sensor, with M > N. Pixels are numbered from 1, top to bottom and left to right. If the row/column number (i, j) of a pixel satisfies i + j < N, the pixel is excluded from the actual statistical area, i.e., the upper-left triangle of the yaw image is cut off; if (i, j) satisfies i + j > M, the pixel is likewise excluded, i.e., the lower-right triangle is cut off. This yields the actual statistical area of the yaw image data. The skew correction is expressed by the formula:

Stat(Pixel(i, j)) = False, if i + j < N or i + j > M; True, otherwise,

where Stat(Pixel(i, j)) is the statistical flag of the pixel in row i and column j of the yaw image, False means the pixel is excluded from the actual statistical area of the yaw image data, True means it is included, and Pixel(i, j) denotes the pixel in row i and column j of the yaw image.
S3, calculate a gray-level histogram for each detector element and a composite histogram over all detector elements, then match the histograms to obtain the relative radiometric correction coefficients of the camera's current gear;
Step S3 specifically comprises: for the detector element of column j of the yaw image (i.e., the sensor's j-th detector element), compute the gray-level histogram P_j of the image acquired by that single detector element and its cumulative probability density function S_j, 1 ≤ j ≤ N.

For the histogram P_j, the entry at gray value k is computed as:

P_j(k) = m_j(k) / M_j,

where m_j(k) is the number of pixels with gray value equal to k in the image acquired by detector element j, and M_j is the total number of pixels acquired by detector element j. For the histogram P_j, the cumulative probability density function at pixel gray value k is:

S_j(k) = sum over l = 0..k of P_j(l),

where l runs over the gray values. Merging the histograms of all detector elements gives the composite histogram P of all detector elements. For the composite histogram P at gray value k:

P(k) = (sum over j = 1..N of m_j(k)) / (sum over j = 1..N of M_j).

For the composite histogram P of all detector elements, the cumulative probability density function at pixel gray value k is:

V(k) = sum over l = 0..k of P(l).
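A minimal NumPy sketch of these histogram statistics, under our own naming and data layout (columns as detector elements), with a synthetic image standing in for real yaw data:

```python
import numpy as np

def detector_histograms(image: np.ndarray, mask: np.ndarray, dn_max: int = 255):
    """Per-detector histograms P_j, their cumulative functions S_j, the
    composite histogram P, and its cumulative function V. Only pixels
    inside the valid statistical area `mask` are counted."""
    M, N = image.shape
    P = np.zeros((N, dn_max + 1))
    for j in range(N):
        col = image[:, j][mask[:, j]]                  # valid pixels of detector j
        counts = np.bincount(col, minlength=dn_max + 1)
        P[j] = counts / counts.sum()                   # P_j(k) = m_j(k) / M_j
    S = np.cumsum(P, axis=1)                           # S_j(k) = sum_{l<=k} P_j(l)
    P_all = P.mean(axis=0)                             # composite histogram; equals the
                                                       # pixel-weighted merge when all M_j are equal
    V = np.cumsum(P_all)                               # V(k) = sum_{l<=k} P(l)
    return P, S, P_all, V

# Synthetic 100-line, 8-detector strip with all pixels valid
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(100, 8))
mask = np.ones((100, 8), dtype=bool)
P, S, P_all, V = detector_histograms(img, mask)
```

Each S_j and V ends at 1.0, as a cumulative probability density function must.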
In the histogram matching step, the composite histogram over all detector elements serves as the desired histogram, and a lookup table is built from it so that, after matching, the probability density function of each detector element's histogram equals that of the desired histogram.

From the cumulative probability density function S_j of the histogram P_j of detector element j and the cumulative probability density function V of the desired histogram, the relative radiometric correction coefficients of detector element j are computed and recorded in the gray-level lookup table T_j, 1 ≤ j ≤ N, where T_j denotes the gray-level lookup table of detector element j.
For the cumulative probability S_j(k) at pixel gray value k of histogram P_j, search the cumulative probability density function V of the desired histogram gray value by gray value, starting from gray value 0, for the gray value l satisfying:

V(l) ≤ S_j(k) ≤ V(l+1).

Once l is found, if |V(l) - S_j(k)| - |V(l+1) - S_j(k)| ≤ 0, the lookup table T_j records, at pixel gray value k, the correction coefficient of detector element j as:

T_j(k) = l;

if |V(l) - S_j(k)| - |V(l+1) - S_j(k)| > 0, it records:

T_j(k) = l + 1.

The above procedure is carried out for every detector element.
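The nearest-neighbour lookup-table search described above can be sketched as follows; this is an illustrative Python implementation under our own naming, not code from the patent:

```python
import numpy as np

def build_lut(S_j: np.ndarray, V: np.ndarray) -> np.ndarray:
    """Gray-level lookup table T_j: each input gray value k maps to the
    output gray value l or l+1 whose cumulative probability V is closest
    to S_j(k), following the |V(l) - S_j(k)| comparison rule."""
    n = len(V)
    T = np.zeros(n, dtype=int)
    for k in range(n):
        # largest l with V(l) <= S_j(k), clamped so l+1 stays in range
        l = int(np.searchsorted(V, S_j[k], side="right")) - 1
        l = max(0, min(l, n - 2))
        if abs(V[l] - S_j[k]) <= abs(V[l + 1] - S_j[k]):
            T[k] = l
        else:
            T[k] = l + 1
    return T

# When a detector's CDF already equals the desired CDF,
# the lookup table is the identity map
V = np.linspace(1 / 8, 1, 8)
T = build_lut(V, V)
```

In practice one table is built per detector element, and correcting an image amounts to replacing each pixel value k of detector j with T_j(k).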
S4, fit the relative radiometric correction coefficients (the gray-level lookup tables) obtained in step S3 with a straight line to obtain linear radiometric correction coefficients.

The relative radiometric correction coefficients generated in step S3 are generally not linear over the whole gray range; for the convenience of the subsequent gear extrapolation, a linear fit is performed here, yielding the fitting parameters A_j, B_j.

For the correction coefficient T_j(k) of detector element j at pixel gray value k, the gray value satisfies k ∈ [0, DNMax], where DNMax is the upper limit of the gray range. The linear fit uses a window within [0, DNMax] centred on the middle gray value, with start as the lower limit and end as the upper limit of the fitted range (int() denoting the floor operation in their defining formulas).

Taking the gray values k ∈ [start, end] as the independent variable and the corresponding correction coefficients T_j(k) as the dependent variable, the linear fitting parameters A_j and B_j of detector element j follow from the least-squares formulas:

A_j = (n·Σ k·T_j(k) - Σk · ΣT_j(k)) / (n·Σk² - (Σk)²),
B_j = (ΣT_j(k) - A_j·Σk) / n,

with the sums taken over k ∈ [start, end] and n = end - start + 1. The linear relative radiometric correction coefficient obtained by the fit is:

y_j(k) = A_j·k + B_j,

where y_j(k) is the linear relative radiometric correction coefficient of detector element j at gray value k.
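The least-squares fit of step S4 can be sketched with NumPy's `polyfit`; the function name, window bounds, and the perfectly linear example table below are our own illustrative assumptions:

```python
import numpy as np

def fit_lut_linear(T_j: np.ndarray, start: int, end: int):
    """Least-squares straight line y_j(k) = A_j*k + B_j fitted to the
    lookup table T_j over the central gray range [start, end]."""
    k = np.arange(start, end + 1, dtype=float)
    t = T_j[start:end + 1].astype(float)
    A_j, B_j = np.polyfit(k, t, deg=1)   # returns slope, then intercept
    return A_j, B_j

# Hypothetical lookup table that is exactly linear: T_j(k) = 2k + 3
T = 2 * np.arange(100) + 3
A, B = fit_lut_linear(T, 25, 75)
```

For an exactly linear table the fit recovers the slope and intercept; for a real lookup table the middle-gray window keeps the fit close to the near-linear part of the detector response.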
S5, compute the relative radiometric correction coefficients of the camera's target gear with a linear extrapolation formula.

Step S4 yields the linear relative radiometric correction coefficient y_cur of the camera's current gear (gain stage G_cur, integration stage count I_cur); linearly extrapolating y_cur gives the relative radiometric correction coefficient y_tgt of the target gear (gain stage G_tgt, integration stage count I_tgt).

A gear is a combination of a gain stage count and an integration stage count, set so that the image covers as large a gray range as possible and sits as close to the middle of that range as possible. The gain stage count and integration stage count affect the image gray values differently. The gain stage amplifies the gray values exponentially:

DN_out1 = u^G · DN_in,

where DN_in is the input gray value, DN_out1 is the output gray value under the gain stage, u is a basic gain parameter determined by the camera (typically 1.2 or 1.25; different sensors have different design values), and G is the gain stage count.
The camera's CCD sensor is a TDI CCD, an array formed from multiple linear-array optical sensors. A yaw image shot by the camera is the superposition of several line-scan images, and the integration stage count is the number of superposed line scans; it amplifies the image gray values linearly:

DN_out2 = I · DN_in,

where I is the integration stage count and DN_out2 is the output gray value under its effect.

Under the combined influence of the integration stage count and the gain stage, the output gray value DN_out is:

DN_out = I·u^G·DN_in = I·u^G·(k0·DN_r + b) = I·u^G·k0·DN_r + I·u^G·b,

where DN_r is the noise-free input gray value, k0 is the multiplicative noise, and b is the additive noise.
Conversion function f (DN)out) For outputting grey values DNoutConversion to IuGDNrThe expression is as follows,
the true response of the probe element is approximately linear, and the better the quality of the sensor, the better the linearity of the true response. The yaw image data excludes the influence of the terrain and the phase, and the coefficients (especially the intermediate brightness range) generated by the yaw image data are very close to the real response and show an almost straight line shape. Therefore, the current shift coefficient y generated using the yaw image datacur(or reference gear coefficients), and linearly extrapolating coefficients of other gears. The procedure is the same as that in step S4, and in S4, a straight line is obtained by least square fitting using a coefficient corresponding to a gray value within a certain range around the center of the reference shift intermediate gray value. The relative radiation correction coefficient is used for characterizing the conversion function, and for a certain gray value x, the linear extrapolation formula of the corresponding relative radiation correction coefficient y is as follows:
y = K·x + B,
where K is the gain and B is the bias of the linear relative radiation correction coefficient. From the conversion function f(DN_out) one obtains K = 1/k_0 and B = −(I·u^G·b)/k_0, i.e. the integration level and the gain level do not affect the gain K of the relative radiation coefficient, only its bias B. Let I_s and G_s be the integration level and gain level corresponding to the reference gear coefficient of the camera, and I_p and G_p the integration level and gain level corresponding to the extrapolated gear coefficient of the camera; the extrapolated gear coefficient y_0 of the camera is then expressed as:
y_0 = K·x − (I_p·u^(G_p)·b)/k_0 = K·x + (I_p·u^(G_p)) / (I_s·u^(G_s)) · B_s,
where B_s is the bias of the reference gear coefficient.
gain stage number G for current gear coefficientcurIntegral series I of current gear coefficientcurAnd the gain stage number G of the target gear coefficienttgtIntegral number of stages I of target gear coefficienttgtIts corresponding extrapolated relative radiation correction coefficient ytgtParameter A oftgtAnd BtgtRespectively as follows:
For a given gray value k, the extrapolated relative radiation correction coefficient y_tgt(k) of the target gear is then:
y_tgt(k) = A_tgt·k + B_tgt,
thereby obtaining the relative radiation correction coefficient of the target gear of the camera on the satellite.
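A hedged sketch of the gear-extrapolation step described above: the function and parameter names are illustrative, and the bias scaling B_tgt = (I_tgt·u^G_tgt)/(I_cur·u^G_cur)·B_cur is assumed from the signal model (the slope is gear-independent since K = 1/k_0):

```python
def extrapolate_coeff(a_cur: float, b_cur: float,
                      i_cur: int, g_cur: int,
                      i_tgt: int, g_tgt: int, u: float = 1.25):
    """Extrapolate the fitted linear correction parameters to a target gear.

    The slope A does not depend on the gear; the bias scales with the
    total amplification I * u**G of each gear.
    """
    ratio = (i_tgt * u ** g_tgt) / (i_cur * u ** g_cur)
    return a_cur, ratio * b_cur

def y_tgt(k: float, a_tgt: float, b_tgt: float) -> float:
    """Relative radiation correction coefficient of the target gear at gray value k."""
    return a_tgt * k + b_tgt
```

For instance, doubling the integration level at the same gain level doubles the bias while leaving the slope unchanged.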
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.
Claims (5)
1. A method for automatic generation and extrapolation of relative radiation correction coefficients for optical images, comprising the steps of:
S1, acquiring optical images with the camera on the satellite, parsing the auxiliary data in the yaw calibration data downloaded from the satellite to obtain the current gear gain level G_cur and current integration level I_cur of the camera on the satellite, and setting the target gear gain level G_tgt and target integration level I_tgt of the camera on the satellite;
S2, performing oblique-line correction on the yaw image data and cutting off the upper-left and lower-right triangles of the yaw image;
S3, calculating the gray histogram of each probe element and the comprehensive histogram of all probe elements, performing histogram matching, and obtaining the relative radiation correction coefficient of the current gear of the camera;
S4, performing linear fitting on the relative radiation correction coefficient obtained in step S3 to obtain a linear relative radiation correction coefficient;
S5, calculating the relative radiation correction coefficient of the target gear of the camera on the satellite according to a linear extrapolation formula;
step S4 obtains the linear relative radiation correction coefficient y_cur corresponding to the current gear of the camera on the satellite, with gain level G_cur and integration level I_cur; linear extrapolation of y_cur then yields the relative radiation correction coefficient y_tgt of the target gear, with gain level G_tgt and integration level I_tgt;
in step S4, for the relative radiation correction coefficient T_j(k) of the jth probe element at pixel gray value k, the gray value k satisfies k ∈ [0, DNMax], where DNMax is the upper limit of the gray value; for the linear fitting, a range within [0, DNMax] centered on the middle gray value is selected, with start as the lower limit and end as the upper limit of the fitted values, and the calculation formula is as follows:
wherein int () is a rounding down operation;
selecting k ∈ [start, end], taking the gray value k as the independent variable and the corresponding relative radiation correction coefficient T_j(k) as the dependent variable, the linear fitting parameters A_j and B_j of the jth probe element are calculated by the least-squares fitting formulas (with n = end − start + 1 and all sums taken over k ∈ [start, end]):
A_j = (n·Σ k·T_j(k) − Σk · ΣT_j(k)) / (n·Σk² − (Σk)²),
B_j = (ΣT_j(k) − A_j·Σk) / n;
the linear relative radiation correction coefficient y_j obtained by the linear fitting is:
y_j(k) = A_j·k + B_j,
where y_j(k) is the linear relative radiation correction coefficient of the jth probe element at gray value k.
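The per-probe-element least-squares fit in claim 1 can be sketched with a plain closed-form ordinary-least-squares computation over the window [start, end]; this is an illustrative implementation, not the patent's verbatim formula:

```python
def fit_linear(T, start: int, end: int):
    """Least-squares fit y = A*k + B over gray values k in [start, end].

    T -- lookup table: T[k] is the relative radiation correction
         coefficient of one probe element at gray value k.
    """
    ks = range(start, end + 1)
    n = end - start + 1
    sx = sum(ks)
    sy = sum(T[k] for k in ks)
    sxx = sum(k * k for k in ks)
    sxy = sum(k * T[k] for k in ks)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope A_j
    b = (sy - a * sx) / n                          # intercept B_j
    return a, b
```

Fitting a table that is already linear, e.g. T[k] = 2k + 3, recovers A ≈ 2 and B ≈ 3 regardless of the window chosen.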
2. The method for automatic generation and extrapolation of relative radiation correction coefficients for optical images of claim 1, wherein
the camera CCD sensor is a TDI CCD sensor, an array formed by a plurality of linear-array optical sensors.
3. The method for automatic generation and extrapolation of relative radiation correction coefficients for optical images of claim 1, wherein
in step S2, after the yaw angle of the satellite is rotated by 90°, the camera on the satellite captures yaw image data while the drift angle caused by the Earth's rotation is corrected, so that the linear array of the CCD sensor of the camera on the satellite is parallel to the sweep direction of the satellite's motion orbit; the camera images along this sweep direction, thereby obtaining the yaw image data; the yaw image data is N pixels in the column direction, corresponding to the N probe elements of the camera CCD sensor, and M pixels in the row direction, corresponding to M rows of the camera CCD sensor, where M is greater than N; the pixels of the yaw image are numbered from 1, from top to bottom and from left to right; if the row-column index (i, j) of a pixel of the yaw image satisfies i + j < N, the pixel is not counted in the actual statistical region of the yaw image data, i.e. the upper-left triangle of the yaw image is cut off; if the row-column index (i, j) of a pixel satisfies i + j > M, the pixel is likewise not counted in the actual statistical region, i.e. the lower-right triangle of the yaw image is cut off; this yields the actual statistical region of the yaw image data, and the oblique-line correction process is expressed by the formula:
Stat(Pixel(i, j)) = False, if i + j < N or i + j > M; True, otherwise,
where Stat (Pixel (i, j)) represents a statistical identification of the Pixel at the ith row and the jth column in the yaw image, False represents that the Pixel is not counted in the actual statistical area of the yaw image data, True represents that the Pixel is counted in the actual statistical area of the yaw image data, and Pixel (i, j) represents the Pixel at the ith row and the jth column in the yaw image.
4. The method for automatic generation and extrapolation of relative radiation correction coefficients for optical images of claim 1, wherein
step S3 specifically includes: for the jth probe element of the yaw image, calculating the gray histogram P_j of the image obtained by that single probe element and its cumulative probability density function S_j, 1 ≤ j ≤ N;
for the gray histogram P_j, when the gray value is k, the corresponding histogram value P_j(k) is calculated as:
P_j(k) = m_j(k) / M_j,
where m_j(k) is the number of pixels with gray value equal to k in the image obtained by the jth probe element, and M_j is the total number of pixels in the image obtained by the jth probe element; for the gray histogram P_j, when the pixel gray value is k, the corresponding cumulative probability density function S_j(k) is:
S_j(k) = Σ_{l=0}^{k} P_j(l),
where l is the pixel gray value; the gray histograms of all probe elements are merged to obtain the comprehensive histogram P of all probe elements; for the comprehensive histogram P, when the gray value is k, the corresponding value P(k) is calculated as:
P(k) = Σ_{j=1}^{N} m_j(k) / Σ_{j=1}^{N} M_j;
for the comprehensive histogram P of all probe elements, when the pixel gray value is k, the corresponding cumulative probability density function V(k) is:
V(k) = Σ_{l=0}^{k} P(l);
in the histogram matching process, the comprehensive histogram of all probe elements is taken as the desired histogram, and a lookup table is established from the desired histogram so that, after matching, the probability density function of each probe element's histogram is the same as that of the desired histogram;
according to the cumulative probability density function S_j corresponding to the gray histogram P_j of the jth probe element and the cumulative probability density function V corresponding to the desired histogram, the relative radiation correction coefficient of the jth probe element is calculated and recorded in the gray-level lookup table T_j, 1 ≤ j ≤ N, where T_j denotes the gray-level lookup table corresponding to the jth probe element;
for the cumulative probability density value S_j(k) of the gray histogram P_j at pixel gray value k, starting from gray value 0, the cumulative probability density function V corresponding to the desired histogram is searched gray value by gray value for the gray value l satisfying the condition:
V(l) ≤ S_j(k) ≤ V(l+1),
after the gray value l is found, if |V(l) − S_j(k)| − |V(l+1) − S_j(k)| ≤ 0, the relative radiation correction coefficient T_j(k) of the jth probe element recorded in the gray-level lookup table T_j at pixel gray value k is:
T_j(k) = l,
if |V(l) − S_j(k)| − |V(l+1) − S_j(k)| > 0, the relative radiation correction coefficient T_j(k) recorded in the gray-level lookup table T_j at pixel gray value k is:
T_j(k) = l + 1,
the above-described operation procedure is performed for each probe element.
5. The method for automatic generation and extrapolation of relative radiation correction coefficients for optical images of claim 1, wherein
in step S5, the gain level amplifies the gray value of the image exponentially, expressed as:
DN_out1 = u^G · DN_in,
where DN_in is the input gray value, DN_out1 is the output gray value after gain amplification, u is a gain base parameter determined by the camera, and G is the gain level;
the yaw image captured by the camera is the superposition of multiple linear-array scan images; the integration level indicates the number of superposed scan images and amplifies the image gray value linearly, expressed as:
DN_out2 = I · DN_in,
where I is the integration level and DN_out2 is the output gray value after integration;
under the combined influence of the integration level and the gain level, the corresponding output gray value DN_out is:
DN_out = I·u^G·DN_in = I·u^G·k_0·DN_r + I·u^G·b,
where DN_r is the noise-free input gray value, k_0 is the multiplicative noise, and b is the additive noise;
conversion function f (DN)out) For outputting grey values DNoutConversion to IuGDNrThe expression is as follows,
the relative radiation correction coefficient characterizes the conversion function; for a gray value x, the linear extrapolation formula for the corresponding relative radiation correction coefficient y is:
y = K·x + B,
where K is the gain and B is the bias of the linear relative radiation correction coefficient; from the conversion function f(DN_out): K = 1/k_0 and B = −(I·u^G·b)/k_0; let I_s and G_s be the integration level and gain level corresponding to the reference gear coefficient of the camera, and I_p and G_p the integration level and gain level corresponding to the extrapolated gear coefficient of the camera; the extrapolated gear coefficient y_0 of the camera is then expressed as:
y_0 = K·x + (I_p·u^(G_p)) / (I_s·u^(G_s)) · B_s,
where B_s is the bias of the reference gear coefficient;
for the current gear coefficient, with gain level G_cur and integration level I_cur, and the target gear coefficient, with gain level G_tgt and integration level I_tgt, the parameters A_tgt and B_tgt of the corresponding extrapolated relative radiation correction coefficient y_tgt are respectively:
A_tgt = A_cur,
B_tgt = (I_tgt·u^(G_tgt)) / (I_cur·u^(G_cur)) · B_cur,
where A_cur and B_cur are the gain and bias of the fitted linear current-gear coefficient;
for a given gray value k, the extrapolated relative radiation correction coefficient y_tgt(k) of the target gear is then:
y_tgt(k) = A_tgt·k + B_tgt,
thereby obtaining the relative radiation correction coefficient of the target gear of the camera on the satellite.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110877407.2A CN113570523B (en) | 2021-07-31 | 2021-07-31 | Method for automatic generation and extrapolation of relative radiation correction coefficients for optical images |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113570523A CN113570523A (en) | 2021-10-29 |
CN113570523B true CN113570523B (en) | 2022-01-28 |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118134820B (en) * | 2024-05-08 | 2024-07-30 | 中国科学院空天信息创新研究院 | Self-adaptive optical remote sensing image relative radiation correction method |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018055546A1 (en) * | 2016-09-23 | 2018-03-29 | Stellenbosch University | Generating radiometrically corrected surface images |
CN109712089A (en) * | 2018-12-14 | 2019-05-03 | 航天恒星科技有限公司 | Method suitable for the infrared shortwave load relative detector calibration of export-oriented remote sensing satellite |
CN112184570A (en) * | 2020-09-10 | 2021-01-05 | 中国人民解放军61646部队 | Method for detecting response state change of probe element based on relative radiation correction coefficient |
CN112529807A (en) * | 2020-12-15 | 2021-03-19 | 北京道达天际科技有限公司 | Relative radiation correction method and device for satellite image |
Non-Patent Citations (2)
Title |
---|
An optical test bench for the precision characterization of absolute quantum efficiency for the TESS CCD detectors; A. Krishnamurthy, et al.; Journal of Instrumentation; 2017-05-31; full text *
Research on on-orbit radiometric calibration methods for large-area staring array space cameras; Wang Yudu; CNKI Doctoral Electronic Journals Database; 2020-03-15; full text *
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||