
CN113570523B - Method for automatic generation and extrapolation of relative radiation correction coefficients for optical images - Google Patents


Info

Publication number
CN113570523B
CN113570523B (application CN202110877407.2A)
Authority
CN
China
Prior art keywords
camera
gray value
yaw
radiation correction
correction coefficient
Prior art date
Legal status (assumed; not a legal conclusion)
Active
Application number
CN202110877407.2A
Other languages
Chinese (zh)
Other versions
CN113570523A (en)
Inventor
吕守业
王艳
王密
赵薇薇
陈儒
王红钢
Current Assignee
No61646 Unit Of Pla
Original Assignee
No61646 Unit Of Pla
Priority date
Filing date
Publication date
Application filed by No61646 Unit Of Pla
Priority to CN202110877407.2A
Publication of CN113570523A
Application granted
Publication of CN113570523B
Legal status: Active
Anticipated expiration

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 — Image enhancement or restoration
    • G06T 5/90 — Dynamic range modification of images or parts thereof
    • G06T 5/40 — Image enhancement or restoration using histogram techniques
    • G06T 7/00 — Image analysis
    • G06T 7/90 — Determination of colour characteristics
    • G06T 7/97 — Determining parameters from multiple pictures
    • G06T 2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 — Image acquisition modality
    • G06T 2207/10004 — Still image; Photographic image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method for the automatic generation and extrapolation of relative radiation correction coefficients for optical images, comprising the following steps: acquire an optical image with the camera on the satellite, parse the auxiliary data in the yaw calibration data downlinked by the satellite to obtain the current gear gain G_cur and current integration stage number I_cur of the on-satellite camera, and set the target gear gain G_tgt and target integration stage number I_tgt of the on-satellite camera; perform oblique-line correction on the yaw image data; calculate the gray-level histogram of each probe element and the combined histogram of all probe elements, and match the histograms to obtain the relative radiation correction coefficient of the camera's current gear; perform linear fitting on the relative radiation correction coefficient to obtain a linear radiation correction coefficient; and calculate the relative radiation correction coefficient of the target gear of the on-satellite camera by a linear extrapolation formula. The method does not depend on manual image selection, effectively reduces the calibration workload, generates relative radiometric calibration coefficients of higher accuracy, and improves the quality of the calibration coefficients.

Description

Method for automatic generation and extrapolation of relative radiation correction coefficients for optical images
Technical Field
The invention belongs to the field of image processing and relates to a method for the automatic generation and extrapolation of relative radiation correction coefficients for optical images.
Background
Relative radiometric correction of images is a fundamental data-processing task and an extremely important means of improving image quality. Compared with calibration-based methods, statistical methods require no dedicated calibration data: a correction coefficient is extracted directly from the images by analysis and statistics, and the images are then corrected with it. Conventional statistical methods nevertheless have many drawbacks. Images are selected manually, which introduces strong randomness, so the generated relative radiometric calibration coefficients are not very accurate; gathering statistics over a large number of images consumes a great deal of time; and because many images are needed, they must first be accumulated, during which the correction effect cannot be guaranteed, so timeliness is very poor. Such methods therefore cannot meet the high-frequency, fine-grained relative radiometric calibration requirements of today's high-resolution, wide-field optical imagery.
With the increasing agility of camera platforms, the yaw imaging mode provides an important guarantee for high-precision relative radiometric correction. In the yaw imaging mode, each probe element images the same ground object within a short time, so terrain, illumination conditions, the underlying surface, atmospheric scattering, and similar factors exert no influence, and the gray-level distribution differences between probe elements can simply be attributed to inconsistent device response.
In practical application, the solar incidence angle at imaging time varies with latitude, time, and other factors; meanwhile, the terrain and land cover of the imaged area differ across the globe. To keep the image within a suitable gray-level range, the integration stage number and gain stage number are set from a preset lookup table according to imaging time and position. Different combinations of integration stage and gain stage form different gears. The relative radiation correction coefficients of different gears are correlated but also differ somewhat. Too many gears greatly increase the relative radiometric calibration workload, while too little data for some gears degrades the quality of their calibration coefficients. Studying the conversion of relative radiation coefficients between gears and providing a method for automatically generating and extrapolating relative radiation correction coefficients is therefore of great significance for reducing the calibration workload and improving the quality of the calibration coefficients.
Disclosure of Invention
Aiming at the problem of updating the relative radiation correction coefficients of the many gears of an optical payload in a timely manner, the invention uses the relative radiation correction coefficients of only a few gears and discloses an effective and timely method for generating the remaining correction coefficients by extrapolation.
The invention discloses a method for the automatic generation and extrapolation of relative radiation correction coefficients for optical images, comprising the following steps:
S1. Acquire optical images with the camera on the satellite, parse the auxiliary data in the yaw calibration data downlinked by the satellite to obtain the current gear gain G_cur and current integration stage number I_cur of the on-satellite camera, and set the target gear gain G_tgt and target integration stage number I_tgt of the on-satellite camera.
S2. Perform oblique-line correction on the yaw image data, cutting off the upper-left and lower-right triangles of the yaw image. After the satellite's yaw angle is rotated by 90 degrees, the on-satellite camera captures yaw image data, and the drift angle caused by the Earth's rotation is corrected so that the linear array of the camera's CCD sensor is parallel to the push-broom direction of the satellite's orbital motion; the camera then images along that push-broom direction to obtain the yaw image data. The yaw image is N pixels in the column direction, corresponding to the N probe elements of the camera's CCD sensor, and M pixels in the row direction, corresponding to M lines of push-broom imaging, with M > N. Pixels in the yaw image are numbered from 1, top to bottom and left to right. If the row and column numbers (i, j) of a pixel satisfy i + j < N, the pixel is excluded from the actual statistical area of the yaw image data, i.e., the upper-left triangle of the yaw image is cut off; if (i, j) satisfies i + j > M, the pixel is likewise excluded, i.e., the lower-right triangle is cut off. This yields the actual statistical area of the yaw image data. The oblique-line correction is expressed by the formula:

Stat(Pixel(i, j)) = False, if i + j < N or i + j > M; True, otherwise,

where Stat(Pixel(i, j)) is the statistical flag of the pixel at row i, column j of the yaw image: False means the pixel is not counted in the actual statistical area of the yaw image data, True means it is counted, and Pixel(i, j) denotes the pixel at row i, column j of the yaw image.
S3. Calculate the gray-level histogram of each probe element and the combined histogram of all probe elements, and match the histograms to obtain the relative radiation correction coefficient of the camera's current gear.

Step S3 specifically includes: for the j-th probe element of the yaw image, calculate the gray-level histogram P_j of the image obtained by that single probe element and its cumulative probability density function S_j, 1 ≤ j ≤ N.

For the gray-level histogram P_j, when the gray value is k, the corresponding histogram value P_j(k) is computed as:

P_j(k) = m_j(k) / M_j,

where m_j(k) is the number of pixels with gray value equal to k in the image obtained by the j-th probe element, and M_j is the total number of pixels in that image. For the gray-level histogram P_j, when the pixel gray value is k, the corresponding cumulative probability density function S_j(k) is:

S_j(k) = Σ_{l=0}^{k} P_j(l),

where l is a pixel gray value. Merging the gray-level histograms of all probe elements gives the combined histogram P of all probe elements. For the combined histogram P, when the gray value is k, the corresponding value P(k) is computed as:

P(k) = (1/N) Σ_{j=1}^{N} P_j(k),

and for the combined histogram P of all probe elements, when the pixel gray value is k, the corresponding cumulative probability density function V(k) is:

V(k) = Σ_{l=0}^{k} P(l).

In the histogram-matching process, the combined histogram of all probe elements serves as the desired histogram, and a lookup table is built from the desired histogram so that, after matching, the probability density function of each probe element's histogram is the same as that of the desired histogram.

From the cumulative probability density function S_j corresponding to the gray-level histogram P_j of the j-th probe element and the cumulative probability density function V corresponding to the desired histogram, compute the relative radiation correction coefficient of the j-th probe element and record it in the gray-level lookup table T_j, 1 ≤ j ≤ N, where T_j denotes the gray-level lookup table corresponding to the j-th probe element.
For the cumulative probability density function value S_j(k) of pixel gray value k in the gray-level histogram P_j, search gray value by gray value, starting from gray value 0, in the cumulative probability density function V corresponding to the desired histogram for the gray value l satisfying:

V(l) ≤ S_j(k) ≤ V(l + 1).

After the gray value l is found, when |V(l) − S_j(k)| − |V(l+1) − S_j(k)| ≤ 0, the relative radiation correction coefficient T_j(k) of the j-th probe element recorded in the gray-level lookup table T_j at pixel gray value k is:

T_j(k) = l;

when |V(l) − S_j(k)| − |V(l+1) − S_j(k)| > 0, the coefficient recorded at pixel gray value k is:

T_j(k) = l + 1.
the above-described operation procedure is performed for each probe element.
S4. Perform linear fitting on the relative radiation correction coefficients obtained in step S3 to obtain linear radiation correction coefficients.

For the relative radiation correction coefficient T_j(k) of the j-th probe element at pixel gray value k, the gray value k satisfies k ∈ [0, DNMax], where DNMax is the upper limit of the gray value. For the linear fit, a sub-range of [0, DNMax] centered on the middle gray value is selected; let start be the lower limit and end the upper limit of the fitting range, computed as:

[formula image not reproduced in the source; start and end are chosen symmetrically about the middle gray value, with int() denoting the floor operation]

Select k ∈ [start, end], take the gray value k as the independent variable and the corresponding relative radiation correction coefficient T_j(k) as the dependent variable, and compute the linear fitting parameters A_j and B_j of the j-th probe element by the least-squares formula:

A_j = (n Σ k·T_j(k) − Σk · Σ T_j(k)) / (n Σ k² − (Σk)²),  B_j = (Σ T_j(k) − A_j · Σk) / n,

with the sums taken over k ∈ [start, end] and n = end − start + 1. The linear relative radiation correction coefficient y_j obtained by the fit is:

y_j(k) = A_j · k + B_j,

where y_j(k) is the linear relative radiation correction coefficient of the j-th probe element at gray value k.
S5. Calculate the relative radiation correction coefficient of the target gear of the on-satellite camera by a linear extrapolation formula.

Step S4 yields the linear relative radiation correction coefficient y_cur corresponding to the current gear of the on-satellite camera, with gain stage G_cur and integration stage number I_cur. Linear extrapolation of y_cur gives the relative radiation correction coefficient y_tgt of the target gear, with gain stage G_tgt and integration stage number I_tgt.
The gain stage amplifies the gray values of the image exponentially, expressed as:

DN_out1 = u^G · DN_in,

where DN_in is the input gray value, DN_out1 is the output gray value under the action of the gain stage, u is a basic gain parameter determined by the camera, and G is the gain stage number.
The camera's CCD sensor is a TDI CCD, an array composed of multiple linear-array optical sensors. A yaw image captured by the camera is the superposition of multiple line-scan images, and the integration stage number indicates how many line-scan images are superposed. The integration stage number amplifies the image gray values linearly, expressed as:

DN_out2 = I · DN_in,

where I is the integration stage number and DN_out2 is the output gray value under the action of the integration stages.
Under the combined influence of the integration stage number and the gain stage, the corresponding output gray value DN_out is:

DN_out = I · u^G · DN_in = I · u^G · (k_0 · DN_r + b) = I · u^G · k_0 · DN_r + I · u^G · b,

where DN_r is the noise-free input gray value, k_0 is the multiplicative noise, and b is the additive noise.

The conversion function f(DN_out) converts the output gray value DN_out to I · u^G · DN_r; its expression is:

f(DN_out) = (DN_out − I · u^G · b) / k_0.
the relative radiation correction coefficient is used for characterizing the conversion function, and for a certain gray value x, the linear extrapolation formula of the corresponding relative radiation correction coefficient y is as follows:
y=Kx+B,
where K is the gain of the linear versus radiometric correction factor and B is the offset of the linear versus radiometric correction factor according to the transfer function f (DN)out) Obtaining: k is 1/K0,B=-(IuGb)/k0. The integral level corresponding to the reference gear coefficient of the camera is assumed to be IsThe number of gain stages corresponding to the reference gear coefficient of the camera is GsThe integral series corresponding to the extrapolation gear coefficient of the camera is IpThe number of gain stages corresponding to the extrapolated gear coefficients of the camera is GpThen the extrapolated gear coefficient y of the camera0Expressed as:
Figure BDA0003190824330000071
gain stage number G for current gear coefficientcurIntegral series I of current gear coefficientcurAnd the gain stage number G of the target gear coefficienttgtIntegral number of stages I of target gear coefficienttgtIts corresponding extrapolated relative radiation correction coefficient ytgtParameter A oftgtAnd BtgtRespectively as follows:
Figure BDA0003190824330000072
when a gray value k is further obtained, the relative radiation correction coefficient y of the gear is extrapolatedtgt(k) Comprises the following steps:
Figure BDA0003190824330000073
thereby obtaining the relative radiometric correction factor of the target gear of the camera on the satellite.
The invention has the beneficial effects that:
compared with the traditional on-orbit statistical relative radiation correction method, the method does not depend on manual image selection, has universality, and has higher accuracy of the generated relative radiation calibration coefficient; the method does not need to count a large number of images, and can be used only by counting a small number of yaw image data; because a large amount of images are not needed to be accumulated, calibration data can be obtained by one-time yaw photographing, the timeliness is very high, and the high-frequency and fine relative radiation calibration requirements of the existing high-resolution large-view-field optical image can be met; by using the relative radiation correction coefficient of a single gear obtained by one-time yaw calibration, the relative radiation correction coefficients of other gears can be obtained by extrapolation, the calibration workload is effectively reduced, and the quality of the calibration coefficient is improved.
Drawings
FIG. 1 is a schematic view of a yaw imaging mode;
FIG. 2 is a schematic flow chart of an embodiment of the present invention;
FIG. 3 is a schematic diagram of skew correction of yaw image data.
Detailed Description
For a better understanding of the present disclosure, an example is given here.
The invention discloses a method for the automatic generation and extrapolation of relative radiation correction coefficients for optical images. FIG. 1 is a schematic view of the yaw imaging mode; FIG. 2 is a schematic flow chart of an embodiment of the invention; FIG. 3 is a schematic diagram of the oblique-line correction of yaw image data, in which (a) is the original yaw image and (b) is the yaw image after oblique-line correction.
The method comprises the following specific steps:
s1, acquiring optical images by using the camera on the satellite, analyzing auxiliary data in yaw calibration data downloaded by the satellite, and obtaining the current gear gain G of the camera on the satellitecurAnd the current integration order IcurAnd setting a target gear gain G of the camera on the satellitetgtAnd target integration order Itgt
S2. Perform oblique-line correction on the yaw image data (also called yaw strip data).

Because the linear-array CCD probe elements do not image the same ground feature simultaneously but pass over it in sequence, the same ground feature appears in the yaw image data as an oblique line of about 45 degrees, as shown in FIG. 3(a). In the initial and final stages of imaging, the probe elements image different ground features, so the yaw image data are given an oblique-line correction, cutting off the upper-left and lower-right triangles of the yaw image. After the satellite's yaw angle is rotated by 90 degrees, the on-satellite camera captures yaw image data, and the drift angle caused by the Earth's rotation is corrected so that the linear array of the camera's CCD sensor is parallel to the push-broom direction of the satellite's orbital motion; the camera then images along that push-broom direction to obtain the yaw image data. The yaw image is N pixels in the column direction, corresponding to the N probe elements of the camera's CCD sensor, and M pixels in the row direction, corresponding to M lines of push-broom imaging, with M > N. Pixels in the yaw image are numbered from 1, top to bottom and left to right. If the row and column numbers (i, j) of a pixel satisfy i + j < N, the pixel is excluded from the actual statistical area of the yaw image data, i.e., the upper-left triangle of the yaw image is cut off; if (i, j) satisfies i + j > M, the pixel is likewise excluded, i.e., the lower-right triangle is cut off. This yields the actual statistical area of the yaw image data. The oblique-line correction is expressed by the formula:

Stat(Pixel(i, j)) = False, if i + j < N or i + j > M; True, otherwise,

where Stat(Pixel(i, j)) is the statistical flag of the pixel at row i, column j of the yaw image: False means the pixel is not counted in the actual statistical area of the yaw image data, True means it is counted, and Pixel(i, j) denotes the pixel at row i, column j of the yaw image.
S3. Calculate the gray-level histogram of each probe element and the combined histogram of all probe elements, and match the histograms to obtain the relative radiation correction coefficient of the camera's current gear.

Step S3 specifically includes: for the j-th probe element of the yaw image (i.e., the j-th probe element of the sensor), calculate the gray-level histogram P_j of the image obtained by that single probe element and its cumulative probability density function S_j, 1 ≤ j ≤ N.

For the gray-level histogram P_j, when the gray value is k, the corresponding histogram value P_j(k) is computed as:

P_j(k) = m_j(k) / M_j,

where m_j(k) is the number of pixels with gray value equal to k in the image obtained by the j-th probe element, and M_j is the total number of pixels in that image. For the gray-level histogram P_j, when the pixel gray value is k, the corresponding cumulative probability density function S_j(k) is:

S_j(k) = Σ_{l=0}^{k} P_j(l),

where l is a pixel gray value. Merging the gray-level histograms of all probe elements gives the combined histogram P of all probe elements. For the combined histogram P, when the gray value is k, the corresponding value P(k) is computed as:

P(k) = (1/N) Σ_{j=1}^{N} P_j(k),

and for the combined histogram P of all probe elements, when the pixel gray value is k, the corresponding cumulative probability density function V(k) is:

V(k) = Σ_{l=0}^{k} P(l).

In the histogram-matching process, the combined histogram of all probe elements serves as the desired histogram, and a lookup table is built from the desired histogram so that, after matching, the probability density function of each probe element's histogram is the same as that of the desired histogram.

From the cumulative probability density function S_j corresponding to the gray-level histogram P_j of the j-th probe element and the cumulative probability density function V corresponding to the desired histogram, compute the relative radiation correction coefficient of the j-th probe element and record it in the gray-level lookup table T_j, 1 ≤ j ≤ N, where T_j denotes the gray-level lookup table corresponding to the j-th probe element.
For the cumulative probability density function value S_j(k) of pixel gray value k in the gray-level histogram P_j, search gray value by gray value, starting from gray value 0, in the cumulative probability density function V corresponding to the desired histogram for the gray value l satisfying:

V(l) ≤ S_j(k) ≤ V(l + 1).

After the gray value l is found, when |V(l) − S_j(k)| − |V(l+1) − S_j(k)| ≤ 0, the relative radiation correction coefficient T_j(k) of the j-th probe element recorded in the gray-level lookup table T_j at pixel gray value k is:

T_j(k) = l;

when |V(l) − S_j(k)| − |V(l+1) − S_j(k)| > 0, the coefficient recorded at pixel gray value k is:

T_j(k) = l + 1.
the above-described operation procedure is performed for each probe element.
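The per-probe-element histogram matching of step S3 can be sketched as follows. This is a hedged NumPy illustration rather than the patented implementation: the column-per-probe-element layout, the `dn_max` parameter, and the use of the mean of the per-element histograms as the combined histogram are assumptions:

```python
import numpy as np

def matching_luts(image: np.ndarray, mask: np.ndarray, dn_max: int = 1023) -> np.ndarray:
    """Return an (N, dn_max + 1) array of gray-level lookup tables T_j,
    one per probe element (image column), by matching each column's
    cumulative histogram S_j to the combined CDF V of all columns."""
    M, N = image.shape
    levels = dn_max + 1
    luts = np.zeros((N, levels), dtype=np.int64)
    # Per-column histograms P_j, counted over the statistical area only
    hists = np.stack([
        np.bincount(image[mask[:, j], j], minlength=levels)[:levels]
        for j in range(N)
    ]).astype(float)
    probs = hists / hists.sum(axis=1, keepdims=True)   # P_j(k)
    combined = probs.mean(axis=0)                      # P(k), assumed mean
    V = np.cumsum(combined)                            # desired CDF V(k)
    for j in range(N):
        S = np.cumsum(probs[j])                        # S_j(k)
        # Bracket S_j(k) between V(l) and V(l+1), then keep the nearer level
        hi = np.clip(np.searchsorted(V, S), 0, dn_max)
        lo = np.clip(hi - 1, 0, dn_max)
        pick_hi = np.abs(V[hi] - S) < np.abs(V[lo] - S)
        luts[j] = np.where(pick_hi, hi, lo)
    return luts

# Example: two identical probe-element columns yield identity lookup tables
img = np.array([[0, 0], [1, 1], [2, 2], [3, 3]])
luts = matching_luts(img, np.ones((4, 2), dtype=bool), dn_max=3)
```

Correcting an image then amounts to `luts[j][image[:, j]]` per column.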
S4. Perform linear fitting on the relative radiation correction coefficients (gray-level lookup tables) obtained in step S3 to obtain linear radiation correction coefficients.

The relative radiation correction coefficient generated in step S3 is generally not linear over the entire gray-level range; to simplify the subsequent gear extrapolation, a linear fit is performed here and the linear fitting parameters A_j and B_j are calculated.

For the relative radiation correction coefficient T_j(k) of the j-th probe element at pixel gray value k, the gray value k satisfies k ∈ [0, DNMax], where DNMax is the upper limit of the gray value. For the linear fit, a sub-range of [0, DNMax] centered on the middle gray value is selected; let start be the lower limit and end the upper limit of the fitting range, computed as:

[formula image not reproduced in the source; start and end are chosen symmetrically about the middle gray value, with int() denoting the floor operation]

Select k ∈ [start, end], take the gray value k as the independent variable and the corresponding relative radiation correction coefficient T_j(k) as the dependent variable, and compute the linear fitting parameters A_j and B_j of the j-th probe element by the least-squares formula:

A_j = (n Σ k·T_j(k) − Σk · Σ T_j(k)) / (n Σ k² − (Σk)²),  B_j = (Σ T_j(k) − A_j · Σk) / n,

with the sums taken over k ∈ [start, end] and n = end − start + 1. The linear relative radiation correction coefficient y_j obtained by the fit is:

y_j(k) = A_j · k + B_j,

where y_j(k) is the linear relative radiation correction coefficient of the j-th probe element at gray value k.
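Since the source's formula image for start and end is not reproduced here, the sketch below assumes a symmetric window of hypothetical half-width about the middle gray value int(DNMax/2); the least-squares fit itself follows step S4:

```python
import numpy as np

def fit_linear_coeff(lut: np.ndarray, dn_max: int, half_width: int):
    """Least-squares line A*k + B through the lookup-table values T_j(k)
    for k in a window [start, end] centered on the middle gray value.
    The window half-width is an assumption; the patent defines start and
    end by its own (unreproduced) formula."""
    center = dn_max // 2                 # middle gray value, int() floor
    start, end = center - half_width, center + half_width
    k = np.arange(start, end + 1, dtype=float)
    t = lut[start:end + 1].astype(float)
    A, B = np.polyfit(k, t, deg=1)       # degree-1 least-squares fit
    return A, B

# Example: a nearly linear LUT recovers its own slope and bias
lut = (0.9 * np.arange(1024) + 12.0).round().astype(int)
A, B = fit_linear_coeff(lut, dn_max=1023, half_width=200)
```

Restricting the fit to the central window reflects the text's observation that the generated coefficients are closest to linear in the intermediate brightness range.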
S5. Calculate the relative radiation correction coefficient of the target gear of the on-satellite camera by a linear extrapolation formula.

Step S4 yields the linear relative radiation correction coefficient y_cur corresponding to the current gear of the on-satellite camera, with gain stage G_cur and integration stage number I_cur. Linear extrapolation of y_cur gives the relative radiation correction coefficient y_tgt of the target gear, with gain stage G_tgt and integration stage number I_tgt.
A gear is a combination of a gain stage number and an integration stage number; it is set so that the image spans as large a gray-level range as possible, positioned as much as possible in the middle of that range. The gain stage number and the integration stage number affect the image gray levels differently. The gain stage amplifies the gray values of the image exponentially, expressed as:

DN_out1 = u^G · DN_in,

where DN_in is the input gray value, DN_out1 is the output gray value under the action of the gain stage, u is a basic gain parameter determined by the camera (its value is, for example, 1.2 or 1.25; different sensors have different design values), and G is the gain stage number.
The camera's CCD sensor is a TDI CCD, an array composed of multiple linear-array optical sensors. A yaw image captured by the camera is the superposition of multiple line-scan images, and the integration stage number indicates how many line-scan images are superposed. The integration stage number amplifies the image gray values linearly, expressed as:

DN_out2 = I · DN_in,

where I is the integration stage number and DN_out2 is the output gray value under the action of the integration stages.
Under the combined influence of the integration stage number and the gain stage, the corresponding output gray value DN_out is:

DN_out = I · u^G · DN_in = I · u^G · (k_0 · DN_r + b) = I · u^G · k_0 · DN_r + I · u^G · b,

where DN_r is the noise-free input gray value, k_0 is the multiplicative noise, and b is the additive noise.

The conversion function f(DN_out) converts the output gray value DN_out to I · u^G · DN_r; its expression is:

f(DN_out) = (DN_out − I · u^G · b) / k_0.
the true response of the probe element is approximately linear, and the better the quality of the sensor, the better the linearity of the true response. The yaw image data excludes the influence of the terrain and the phase, and the coefficients (especially the intermediate brightness range) generated by the yaw image data are very close to the real response and show an almost straight line shape. Therefore, the current shift coefficient y generated using the yaw image datacur(or reference gear coefficients), and linearly extrapolating coefficients of other gears. The procedure is the same as that in step S4, and in S4, a straight line is obtained by least square fitting using a coefficient corresponding to a gray value within a certain range around the center of the reference shift intermediate gray value. The relative radiation correction coefficient is used for characterizing the conversion function, and for a certain gray value x, the linear extrapolation formula of the corresponding relative radiation correction coefficient y is as follows:
y=Kx+B,
where K is the gain and B is the bias of the linear relative radiation correction coefficient. From the conversion function f(DN_out): K = 1/k_0 and B = −(I·u^G·b)/k_0, i.e. the numbers of integration and gain stages do not affect the gain K of the relative radiation coefficient, only its bias B. Let the number of integration stages corresponding to the reference gear coefficient of the camera be I_s and the number of gain stages be G_s, and let the number of integration stages corresponding to the extrapolated gear coefficient of the camera be I_p and the number of gain stages be G_p. The extrapolated gear coefficient y_0 of the camera is then expressed as:
y_0 = K·x + (I_p·u^{G_p}) / (I_s·u^{G_s}) · B,
where B is the bias fitted for the reference gear.
For a current gear coefficient with gain stage number G_cur and integration stage number I_cur, and a target gear coefficient with gain stage number G_tgt and integration stage number I_tgt, the parameters A_tgt and B_tgt of the corresponding extrapolated relative radiation correction coefficient y_tgt are respectively:
A_tgt = A_cur,  B_tgt = (I_tgt·u^{G_tgt}) / (I_cur·u^{G_cur}) · B_cur,
where A_cur and B_cur are the gain and bias fitted for the current gear coefficient.
For a given gray value k, the extrapolated relative radiation correction coefficient y_tgt(k) of the target gear is:
y_tgt(k) = A_tgt·k + B_tgt = A_cur·k + (I_tgt·u^{G_tgt}) / (I_cur·u^{G_cur}) · B_cur,
thereby obtaining the relative radiation correction coefficient of the target gear of the camera on the satellite.
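The extrapolation described above can be sketched as follows, assuming (as the text states) that the fitted gain is gear-independent while the bias scales with I·u^G. Function names and all parameter values are illustrative, not from the patent.

```python
# Sketch of the gear-coefficient extrapolation: the fitted gain A is shared
# across gears, while the bias B scales with I * u**G.

def extrapolate_coefficient(A_cur, B_cur, I_cur, G_cur, I_tgt, G_tgt, u):
    """Extrapolate the fitted line (A_cur, B_cur) of the current gear to the
    target gear: A is unchanged, B is rescaled by I_tgt*u**G_tgt / (I_cur*u**G_cur)."""
    scale = (I_tgt * u**G_tgt) / (I_cur * u**G_cur)
    return A_cur, B_cur * scale

# Consistency check against the noise model: with illustrative values
# k0 = 0.8, b = 10, u = 2, the bias of gear (I, G) is -(I * u**G * b) / k0.
k0, b, u = 0.8, 10.0, 2.0
B_cur = -(2 * u**1 * b) / k0                       # gear (I=2, G=1): -50.0
A_tgt, B_tgt = extrapolate_coefficient(1 / k0, B_cur, 2, 1, 4, 2, u)
# B_tgt matches the directly computed bias of gear (I=4, G=2): -(4*4*10)/0.8 = -200.0
```

The rescaled bias agrees with the bias computed directly from the noise model, which is exactly the property the extrapolation relies on.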
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (5)

1. A method for automatic generation and extrapolation of relative radiation correction coefficients for optical images, comprising the steps of:
S1, acquiring optical images by using the camera on the satellite, analyzing the auxiliary data in the yaw calibration data downloaded by the satellite to obtain the current gear gain G_cur and current integration order I_cur of the camera on the satellite, and setting the target gear gain G_tgt and target integration order I_tgt of the camera on the satellite;
S2, performing oblique-line correction on the yaw image data, cutting off the upper-left and lower-right triangles of the yaw image;
S3, calculating the gray histogram of each probe element, calculating the comprehensive histogram of all probe elements, performing histogram matching, and obtaining the relative radiation correction coefficient of the current gear of the camera;
S4, performing linear fitting on the relative radiation correction coefficient obtained in step S3 to obtain a linear radiation correction coefficient;
S5, calculating the relative radiation correction coefficient of the target gear of the camera on the satellite according to a linear extrapolation formula;
step S4 obtains the gain level G of the current gear of the camera on the satellitecurIntegral order IcurLinear relative radiation correction coefficient y of time correspondencecurFor the linear relative radiation correction coefficient ycurLinear extrapolation is carried out to obtain the gain stage G of the target geartgtIntegral order ItgtRelative radiation correction coefficient y of timetgt
in step S4, for the relative radiation correction coefficient T_j(k) of the jth probe element at pixel gray value k, the gray value k satisfies k ∈ [0, DNMax], where DNMax is the upper limit of the gray value; for the linear fitting, a range within [0, DNMax] centered on the middle gray value is selected, with start as the lower limit of the fitted values and end as the upper limit, calculated as:
[equation image FDA0003412953360000011: formulas computing start and end from DNMax]
wherein int() is a rounding-down (floor) operation;
selecting k ∈ [start, end], taking the gray value k as the independent variable and the corresponding relative radiation correction coefficient T_j(k) as the dependent variable, the linear fitting parameters A_j and B_j of the jth probe element are calculated according to the least-squares fitting formula:
A_j = [n·Σ k·T_j(k) − (Σ k)·(Σ T_j(k))] / [n·Σ k² − (Σ k)²],  B_j = [Σ T_j(k) − A_j·Σ k] / n,
where the sums run over k ∈ [start, end] and n = end − start + 1;
the linear relative radiation correction coefficient y_j obtained by the linear fitting is:
y_j(k) = A_j·k + B_j,
where y_j(k) is the linear relative radiation correction coefficient of the jth probe element at gray value k.
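The per-probe-element least-squares fit of this claim can be sketched as follows. The patent's formulas computing start and end are in an equation image, so they are taken here as given inputs; the function name and test values are illustrative.

```python
def fit_linear_coefficient(T_j, start, end):
    """Ordinary least-squares line A_j*k + B_j through the points
    (k, T_j[k]) for start <= k <= end (inclusive)."""
    ks = range(start, end + 1)
    n = end - start + 1
    sx = sum(ks)                       # sum of k
    sy = sum(T_j[k] for k in ks)       # sum of T_j(k)
    sxx = sum(k * k for k in ks)       # sum of k^2
    sxy = sum(k * T_j[k] for k in ks)  # sum of k * T_j(k)
    A = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    B = (sy - A * sx) / n
    return A, B

# Coefficients lying exactly on a line y = 2k + 3 are recovered exactly.
T = [2 * k + 3 for k in range(256)]
A, B = fit_linear_coefficient(T, 64, 192)
```

Because the fit only uses the middle-brightness range [start, end], nonlinearity near the dark and saturated ends of the coefficient curve does not bias the fitted line.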
2. The method for automatic generation and extrapolation of relative radiation correction coefficients for optical images of claim 1, wherein
the camera CCD sensor adopts a TDI CCD sensor, which is an array formed by a plurality of linear-array optical sensors.
3. The method for automatic generation and extrapolation of relative radiation correction coefficients for optical images of claim 1, wherein
in step S2, after the yaw angle of the satellite is rotated by 90°, the camera on the satellite captures yaw image data while the drift angle caused by the rotation of the earth is corrected, so that the linear-array direction of the CCD sensor of the camera on the satellite is parallel to the sweep direction of the satellite motion orbit, and the camera captures along that sweep direction, thereby obtaining yaw image data; the yaw image data is N pixels in the column direction, corresponding to the N probe elements of the camera CCD sensor, and M pixels in the row direction, corresponding to the M rows of the camera CCD sensor, where M is greater than N; the pixels in the yaw image are numbered from 1, top to bottom and left to right; if the row and column numbers (i, j) of a pixel of the yaw image satisfy i + j < N, the pixel is not counted in the actual statistical region of the yaw image data, i.e. the upper-left triangle of the yaw image is cut off; if the row and column numbers (i, j) of a pixel of the yaw image satisfy i + j > M, the pixel is not counted in the actual statistical region of the yaw image data, i.e. the lower-right triangle of the yaw image is cut off, thereby obtaining the actual statistical region of the yaw image data; the oblique-line correction process is expressed by the formula:
Stat(Pixel(i, j)) = False, if i + j < N or i + j > M; True, if N ≤ i + j ≤ M,
where Stat (Pixel (i, j)) represents a statistical identification of the Pixel at the ith row and the jth column in the yaw image, False represents that the Pixel is not counted in the actual statistical area of the yaw image data, True represents that the Pixel is counted in the actual statistical area of the yaw image data, and Pixel (i, j) represents the Pixel at the ith row and the jth column in the yaw image.
4. The method for automatic generation and extrapolation of relative radiation correction coefficients for optical images of claim 1, wherein
step S3 specifically includes: for the probe element corresponding to the jth column of the yaw image, calculating the gray histogram P_j of the image obtained by that single probe element and its cumulative probability density function S_j, 1 ≤ j ≤ N;
for the gray histogram P_j, when the gray value is k, the corresponding value P_j(k) is calculated as:
P_j(k) = m_j(k) / M_j,
where m_j(k) is the number of pixels with gray value equal to k in the image obtained by the jth probe element, and M_j is the total number of pixels in the image obtained by the jth probe element; for the gray histogram P_j, when the pixel gray value is k, the corresponding cumulative probability density function S_j(k) is:
S_j(k) = Σ_{l=0}^{k} P_j(l),
where l is the pixel gray value; the gray histograms of all the probe elements are merged to obtain the comprehensive histogram P of all probe elements; for the comprehensive histogram P, when the gray value is k, the corresponding value P(k) is calculated as:
P(k) = (1/N) · Σ_{j=1}^{N} P_j(k),
for the comprehensive histogram P of all the probe elements, when the pixel gray value is k, the corresponding cumulative probability density function V(k) is:
V(k) = Σ_{l=0}^{k} P(l),
in the histogram matching process, the comprehensive histogram of all the probe elements serves as the desired histogram, and a lookup table is established from it so that, after matching, the probability density function of each probe element's histogram is the same as that of the desired histogram;
according to the cumulative probability density function S_j corresponding to the gray histogram P_j of the jth probe element and the cumulative probability density function V corresponding to the desired histogram, the relative radiation correction coefficient of the jth probe element is calculated and recorded in the gray-level lookup table T_j, 1 ≤ j ≤ N, where T_j denotes the gray-level lookup table corresponding to the jth probe element;
for the cumulative probability density function S_j(k) of the gray histogram P_j at pixel gray value k, starting from gray value 0, the gray value l satisfying the following condition is searched gray value by gray value in the cumulative probability density function V corresponding to the desired histogram:
V(l) ≤ S_j(k) ≤ V(l+1),
after finding the gray value l, when |V(l) − S_j(k)| − |V(l+1) − S_j(k)| ≤ 0, the relative radiation correction coefficient T_j(k) of the jth probe element recorded in the gray-level lookup table T_j at pixel gray value k is:
T_j(k) = l,
when |V(l) − S_j(k)| − |V(l+1) − S_j(k)| > 0, the relative radiation correction coefficient T_j(k) recorded in the gray-level lookup table T_j at pixel gray value k is:
T_j(k) = l + 1,
the above-described operation procedure is performed for each probe element.
5. The method for automatic generation and extrapolation of relative radiation correction coefficients for optical images of claim 1, wherein
in step S5, the gain stage has an exponential amplification effect on the image gray value, expressed as:
DN_out1 = u^G · DN_in,
where DN_in is the input gray value, DN_out1 is the output gray value under the gain-stage effect, u is a gain base parameter determined by the camera, and G is the number of gain stages;
the yaw image captured by the camera is the superposition of a plurality of linear-array scan images; the number of integration stages indicates the number of linear-array scan images and has a linear amplification effect on the image gray value, expressed as:
DN_out2 = I · DN_in,
where I is the number of integration stages and DN_out2 is the output gray value under the effect of the number of integration stages;
under the combined influence of the number of integration stages and the number of gain stages, the corresponding output gray value DN_out is:
DN_out = I·u^G·DN_in = I·u^G·(k_0·DN_r + b) = I·u^G·k_0·DN_r + I·u^G·b,
where DN_r represents the noise-free input gray value, k_0 is multiplicative noise, and b is additive noise;
the conversion function f(DN_out) converts the output gray value DN_out into I·u^G·DN_r, expressed as:
f(DN_out) = (DN_out − I·u^G·b) / k_0 = I·u^G·DN_r;
the relative radiation correction coefficient characterizes the conversion function; for a gray value x, the linear extrapolation formula of the corresponding relative radiation correction coefficient y is:
y=Kx+B,
where K is the gain and B is the bias of the linear relative radiation correction coefficient; from the conversion function f(DN_out): K = 1/k_0 and B = −(I·u^G·b)/k_0; let the number of integration stages corresponding to the reference gear coefficient of the camera be I_s and the number of gain stages be G_s, and let the number of integration stages corresponding to the extrapolated gear coefficient of the camera be I_p and the number of gain stages be G_p; the extrapolated gear coefficient y_0 of the camera is then expressed as:
y_0 = K·x + (I_p·u^{G_p}) / (I_s·u^{G_s}) · B,
where B is the bias fitted for the reference gear;
for a current gear coefficient with gain stage number G_cur and integration stage number I_cur, and a target gear coefficient with gain stage number G_tgt and integration stage number I_tgt, the parameters A_tgt and B_tgt of the corresponding extrapolated relative radiation correction coefficient y_tgt are respectively:
A_tgt = A_cur,  B_tgt = (I_tgt·u^{G_tgt}) / (I_cur·u^{G_cur}) · B_cur,
where A_cur and B_cur are the gain and bias fitted for the current gear coefficient;
for a given gray value k, the extrapolated relative radiation correction coefficient y_tgt(k) of the target gear is:
y_tgt(k) = A_tgt·k + B_tgt = A_cur·k + (I_tgt·u^{G_tgt}) / (I_cur·u^{G_cur}) · B_cur,
thereby obtaining the relative radiation correction coefficient of the target gear of the camera on the satellite.
CN202110877407.2A 2021-07-31 2021-07-31 Method for automatic generation and extrapolation of relative radiation correction coefficients for optical images Active CN113570523B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110877407.2A CN113570523B (en) 2021-07-31 2021-07-31 Method for automatic generation and extrapolation of relative radiation correction coefficients for optical images


Publications (2)

Publication Number Publication Date
CN113570523A CN113570523A (en) 2021-10-29
CN113570523B true CN113570523B (en) 2022-01-28

Family

ID=78169726

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110877407.2A Active CN113570523B (en) 2021-07-31 2021-07-31 Method for automatic generation and extrapolation of relative radiation correction coefficients for optical images

Country Status (1)

Country Link
CN (1) CN113570523B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118134820B (en) * 2024-05-08 2024-07-30 Aerospace Information Research Institute, Chinese Academy of Sciences Self-adaptive optical remote sensing image relative radiation correction method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018055546A1 (en) * 2016-09-23 2018-03-29 Stellenbosch University Generating radiometrically corrected surface images
CN109712089A (en) * 2018-12-14 2019-05-03 航天恒星科技有限公司 Method suitable for the infrared shortwave load relative detector calibration of export-oriented remote sensing satellite
CN112184570A (en) * 2020-09-10 2021-01-05 中国人民解放军61646部队 Method for detecting response state change of probe element based on relative radiation correction coefficient
CN112529807A (en) * 2020-12-15 2021-03-19 北京道达天际科技有限公司 Relative radiation correction method and device for satellite image


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
An optical test bench for the precision characterization of absolute quantum efficiency for the TESS CCD detectors; A. Krishnamurthy, et al.; Journal of Instrumentation; 20170531; full text *
Research on in-orbit radiometric calibration methods for large-area-array staring space cameras; Wang Yudu; CNKI Doctoral Electronic Journals; 20200315; full text *


Similar Documents

Publication Publication Date Title
WO2023029373A1 (en) High-precision farmland vegetation information extraction method
CN110120077B (en) Area array camera in-orbit relative radiation calibration method based on satellite attitude adjustment
CN102324098B (en) Relative radiation calibration method in combination with laboratory calibration and even landscape statistics
CN110009688A (en) A kind of infrared remote sensing image relative radiometric calibration method, system and remote sensing platform
CN103438900A (en) Three-line-array camera image collaborative absolute radiometric calibration and compensation method
CN114708204B (en) Push-broom imaging optimization processing method and system
CN116612080B (en) Variation detection method based on hyperspectral image spectrum curve
CN113570523B (en) Method for automatic generation and extrapolation of relative radiation correction coefficients for optical images
CN101635782A (en) Method and device for obtaining image based on dynamic time delay integral
CN105139393A (en) Method for calibrating intrinsic parameters of linear array camera
CN105092043B (en) A kind of asymmetric correction method of the change time of integration based on scene
CN104820970A (en) Infrared image relative radiation correction method based on on-orbit classified statistic
CN108917722B (en) Vegetation coverage degree calculation method and device
CN110738693A (en) ground-based imaging radar multi-angle image registration method
CN116735008B (en) Calibration method and device for infrared cross radiation, electronic equipment and medium
CN115988334B (en) Self-correcting digital camera mobile remote sensing system and method
CN114966733B (en) Beef cattle three-dimensional depth image acquisition system based on laser array and monocular camera
CN117475122A (en) Linear array camera image recognition method adaptive to different material speeds
CN114240801B (en) Non-uniform correction method for remote sensing image
CN111089607B (en) Automatic calibration method for detection capability of telescope system
CN113608246B (en) Optical satellite TDICCD integration series and gain setting method and imaging device
CN115393180A (en) Infrared image splicing method
CN111611931B (en) Response attenuation analysis method and device for sensing instrument and terminal
Feng et al. Calculating the leaf-area based on non-loss correction algorithm
CN114858186B (en) On-satellite geometric calibration method for linear array camera under fixed star observation mode

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant