CN112307901B - SAR and optical image fusion method and system for landslide detection - Google Patents
- Publication number
- CN112307901B (granted publication; application CN202011045558.3A)
- Authority
- CN
- China
- Prior art keywords
- image
- sar
- frequency
- landslide
- fusion
- Prior art date
- Legal status: Active (the status is an assumption, not a legal conclusion)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/13—Satellite images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/251—Fusion techniques of input or preprocessed data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/30—Noise filtering
Abstract
The invention discloses a SAR and optical image fusion method for landslide detection. The method comprises the following steps: step 1, preprocessing the SAR image and the optical image; step 2, performing IHS (intensity-hue-saturation) transformation on the optical image to obtain the three components I, H and S; step 3, performing stationary wavelet transform and high-frequency-component energy take-large fusion on the SAR image and the I component of the optical image; step 4, performing saliency detection on the low-frequency and high-frequency components of the SAR image and on the gray information of the image, establishing a SAR salient-target detection region guide function, and partitioning the SAR image; step 5, establishing salient-region fusion rules and realizing image fusion according to a region-wise fusion strategy; and step 6, identifying and extracting landslide disaster information based on the fused image. The invention adapts well to SAR and optical image fusion for landslide detection, and takes dedicated measures for structure preservation, noise removal and spectrum preservation, with excellent results.
Description
Technical Field
The invention relates to the fields of landslide hazard identification and monitoring and of multi-sensor remote sensing image data fusion, and in particular to a landslide-detection-oriented SAR and optical image fusion method and system.
Background
With the continued development of remote sensing science and technology, the application of remote sensing to the analysis of landslide disasters has steadily deepened, and the massive acquisition of remote sensing data at various resolutions and from various sensors provides ample data support for landslide disaster identification. Applying remote sensing technology to landslide identification serves disaster prevention and mitigation, and supplies sufficient disaster information for relief and post-disaster reconstruction, so that the loss of life and property caused by disasters is minimized.
At present, most research at home and abroad on identifying and detecting landslide disasters by remote sensing uses a single data source, either optical remote sensing images or SAR images; where fused imagery is analyzed, the work is limited to fusion between heterogeneous optical images or between multi-band, multi-polarization SAR images, so studies that detect landslides with fused SAR and optical data are rare. Optical remote sensing images at various spatial resolutions provide spectral information together with texture, geometric shape and other ground-feature information, which underpins landslide identification accuracy. Because SAR works day and night in all weather, it is more effective than an optical sensor in complex environments and under poor-visibility conditions, and is particularly suited to acquiring ground-feature information in cloudy areas, rapidly assessing sudden disasters (such as floods, earthquakes and landslides), and monitoring crustal deformation. However, SAR images arise from coherent, slant-range imaging, and differ greatly from visible-light images in imaging mechanism, radiometric characteristics and geometric characteristics. SAR-optical fusion is therefore difficult to apply to ground-object interpretation, landslide detection and mapping, and a large research space remains. The main problems are:
First, spectral distortion can occur during fusion of the optical image and the SAR image.
Second, high-quality landslide feature information must be extracted from the SAR image.
Third, the fusion of optical and SAR images is computationally expensive, and raising the speed of the fusion algorithm remains a major difficulty.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a SAR and optical image fusion method and system for landslide detection aiming at the defects existing in the prior art.
Therefore, the invention adopts the following technical scheme: a SAR and optical image fusion method facing landslide detection comprises the following steps:
step 1, preprocessing SAR images and optical images;
Step 2, performing IHS (intensity-hue-saturation) transformation on the optical image to obtain the three components I, H and S;
step 3, performing stationary wavelet transform and high-frequency-component energy take-large fusion on the SAR image and the I component of the optical image;
Step 4, performing landslide feature detection on the low-frequency and high-frequency components of the SAR image and on the gray information of the image, establishing the SAR landslide target detection region function, and partitioning the SAR image;
Step 5, establishing landslide feature region fusion rules, and realizing image fusion according to a regional fusion strategy;
and 6, identifying and extracting landslide disaster information based on the fusion image.
Further, the specific steps of the high-frequency-component energy take-large fusion in step 3 are as follows:
step 31, high-frequency-component energy calculation:
The input SAR image and the optical-image I component are subjected to stationary wavelet decomposition to obtain four groups of low-frequency and high-frequency image information: A_s,j(x, y), A_v,j(x, y), D^ε_s,j(x, y) and D^ε_v,j(x, y), wherein A_s,j(x, y) and A_v,j(x, y) respectively denote the low-frequency information of the j-th decomposition of the SAR image and of the optical image, and D^ε_s,j(x, y) and D^ε_v,j(x, y) respectively denote the high-frequency information of the j-th decomposition in direction ε. The direction ε is expressed as a number: ε = 1 denotes decomposition in the horizontal direction, ε = 2 in the vertical direction and ε = 3 in the diagonal direction; j is the decomposition level.
step 32, take-large fusion method:
Step 321, from the high-frequency information D^ε_s,j(x, y) and D^ε_v,j(x, y) of the two groups of images, respectively compute the high-frequency neighborhood energies E^ε_s,j(x, y) and E^ε_v,j(x, y), i.e., the sums of the squared high-frequency coefficients over a local window centred at (x, y),
wherein E^ε_s,j(x, y) denotes the ε-direction high-frequency neighborhood energy of the SAR image at the j-th decomposition, and E^ε_v,j(x, y) that of the optical image;
Step 322, by analysing the neighborhood energies, the wavelet coefficients with salient energy are selected to form the new high-frequency wavelet coefficients that take part in the back-end reconstruction: at each position the fused coefficient D^ε_F,j(x, y) is whichever of D^ε_s,j(x, y) and D^ε_v,j(x, y) has the larger neighborhood energy.
D^ε_F,j(x, y) is the high-frequency information of the fusion result.
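The energy calculation of step 321 and the take-large selection of step 322 can be sketched in plain Python (a minimal illustration assuming a 3×3 neighborhood window; the function names are illustrative, not from the patent):

```python
def neighborhood_energy(coeffs, x, y, radius=1):
    """Sum of squared wavelet coefficients in a window centred at (x, y)."""
    rows, cols = len(coeffs), len(coeffs[0])
    total = 0.0
    for i in range(max(0, x - radius), min(rows, x + radius + 1)):
        for j in range(max(0, y - radius), min(cols, y + radius + 1)):
            total += coeffs[i][j] ** 2
    return total

def fuse_take_large(d_sar, d_opt):
    """Energy take-large rule: at each position keep the high-frequency
    coefficient whose local neighborhood energy is larger."""
    rows, cols = len(d_sar), len(d_sar[0])
    fused = [[0.0] * cols for _ in range(rows)]
    for x in range(rows):
        for y in range(cols):
            e_s = neighborhood_energy(d_sar, x, y)
            e_v = neighborhood_energy(d_opt, x, y)
            fused[x][y] = d_sar[x][y] if e_s >= e_v else d_opt[x][y]
    return fused
```

Applied per direction ε and per level j, this produces the fused high-frequency coefficients that enter the inverse transform.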
Further, the SAR landslide target detection region function established in step 4 is:
S′(x, y) = [S_E(x, y) + S_A(x, y) + S_N(x, y)]^β
wherein β is an adjustment parameter used to highlight the landslide-feature target region;
S′(x, y) is the intermediate quantity of the SAR-image landslide feature function;
S(x, y) is the final SAR-image landslide feature function;
S_A(x, y) is the low-frequency salient feature;
S_E(x, y) is the high-frequency salient feature;
S_N(x, y) is the landslide feature function of the dark features in the SAR image.
Further, the process of establishing the SAR landslide target detection region function in step 4 is:
step 41, target analysis of the SAR image:
Large wavelet high-frequency coefficients in the SAR image mark the parts with larger fluctuation, and these parts are the landslide-feature salient regions of the SAR image;
step 411, after a 3-level stationary wavelet decomposition (SWT) of the input SAR image f_s(x, y), four groups of low-frequency information A_j(x, y) and high-frequency information D^ε_j(x, y) are produced, wherein A_j(x, y) denotes the low-frequency information of the j-th decomposition of the SAR image and D^ε_j(x, y) the high-frequency information of the j-th decomposition in direction ε; here the direction ε is expressed as a letter: ε = h denotes decomposition in the horizontal direction, ε = v in the vertical direction and ε = d in the diagonal direction; j is the decomposition level;
step 412, from the high-frequency information D^ε_j(x, y) of the image, the high-frequency detail intensity information E_s(x, y) of the SAR image is calculated,
wherein |·| denotes the absolute value;
by normalizing the high-frequency energy and the low-frequency background part, the standard high-frequency combined intensity information and the low-frequency information are obtained, and landslide feature extraction on them yields the high-frequency and low-frequency salient features S_E(x, y) and S_A(x, y);
Step 413, let f_s(x, y) be the low-frequency data obtained from the 3-level SWT of the SAR image after 0-1 normalization, of size M×N, and let f_t(x, y) be the all-ones filling function of the same size M×N; the landslide feature function of the dark features in the SAR image is then:
S_N(x, y) = [f_t(x, y) − f_s(x, y)]^α
wherein α is a filtering parameter that weakens the influence of non-target regions on the feature function;
Step 42, establishing the landslide target region of the SAR image:
The obtained landslide feature functions are combined with weights to give the intermediate quantity
S′(x, y) = [S_E(x, y) + S_A(x, y) + S_N(x, y)]^β
where β is an adjustment parameter used to highlight the landslide feature region; the final S(x, y) is S′(x, y) normalized to [0, 1].
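Step 42 can be sketched as the per-pixel combination of the three feature maps followed by min-max normalization to [0, 1] (a minimal illustration; equal weights and the sample β value are assumptions, since the patent does not fix them):

```python
def landslide_feature_map(s_e, s_a, s_n, beta=2.0):
    """S'(x, y) = (S_E + S_A + S_N) ** beta, then min-max normalize to [0, 1]
    to obtain the final landslide feature function S(x, y)."""
    rows, cols = len(s_e), len(s_e[0])
    s_prime = [[(s_e[i][j] + s_a[i][j] + s_n[i][j]) ** beta
                for j in range(cols)] for i in range(rows)]
    flat = [v for row in s_prime for v in row]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1.0  # guard against a constant map
    return [[(v - lo) / span for v in row] for row in s_prime]
```

Raising the combined map to the power β > 1 suppresses weak responses and highlights the landslide target region, which is why β is described as an adjustment parameter.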
Further, the specific content of establishing the landslide feature region fusion rules in step 5 is:
the feature function S(x, y) is substituted into the fusion rule, which is finally expressed per channel,
wherein f_R/C, f_G/C and f_B/C respectively denote the R, G, B channels of the optical image, f_R/F, f_G/F and f_B/F respectively denote the R, G, B channels of the fused image, f_Ir is the gray fusion result of the SAR image and the I component of the color visible-light image under the stationary-wavelet-transform framework, f_I is the luminance component of the visible-light image, and λ is the ratio of the means of the gray fusion result f_Ir and the visible-light luminance f_I, used to eliminate the influence of redundant base colors on the brightness of the fusion result:
λ = mean(f_Ir) / mean(f_I)
where mean(·) denotes the arithmetic mean of an image.
The invention adopts another technical scheme: a landslide-detection-oriented SAR and optical image fusion system, comprising:
an image preprocessing unit for preprocessing the SAR image and the optical image;
an IHS conversion unit for performing IHS conversion on the optical image to obtain the three components I, H and S;
an energy take-large fusion unit for performing stationary wavelet transform and high-frequency-component energy take-large fusion on the SAR image and the I component of the optical image;
a SAR landslide target detection region function building unit for performing landslide feature detection on the low-frequency and high-frequency components of the SAR image and on the gray information of the image, building the SAR landslide target detection region function, and partitioning the SAR image;
an image fusion unit for establishing the landslide feature region fusion rules and realizing image fusion according to the region-wise fusion strategy;
and a landslide hazard information extraction unit for identifying and extracting landslide hazard information based on the fused image.
The beneficial effects of the invention are:
1. Constructing the landslide feature function allows the landslide targets of the image to be judged and analysed accurately.
This work analyses the fusion of SAR images and visible-light images and proposes a wavelet energy take-large fusion algorithm based on landslide targets, improving the fusion strategy at both the pixel level and the feature level. Because the landslide feature function is insensitive to target noise, the same strategy is chosen for the pixel image and the feature image: a weighted average for the decomposed low-frequency wavelet coefficients, and a neighborhood energy measurement rule for the high-frequency coefficients, so that the fused image and its salient features are more definite. On this basis, the background and details of the fused image are separated, the details are screened by the salient information obtained in fusion, and the noise in the fused image is reduced without affecting the background contour information.
2. Constructing the landslide feature detection region function correctly delimits the landslide feature regions of the SAR image.
From the perspective of wavelet multi-resolution, a comprehensive landslide-target analysis of the SAR image yields the landslide feature detection region function, so that the spatial domain of the image can be divided into regions with landslide targets and regions without; within the target regions there are feature-salient regions and feature-dark regions, and the pixel association between the dark target regions and the non-target noise regions is analysed.
3. The landslide feature region fusion algorithm effectively distinguishes spectrum, detail and noise according to the differences between feature regions.
The rules account for the necessity that salient features enter the fused image and for the dark features that are easily lost during fusion, analysing the pixel-level relation between dark features and the noisy background; different weighting coefficients are assigned to different feature regions according to their attributes, so that while the contradiction between detail and spectrum is balanced, the influence of noise on the spectrum is further resolved and the validity and originality of the spectrum are preserved.
In conclusion, the method is reliable and practical, adapts well to SAR and optical image fusion for landslide detection, and takes dedicated measures for structure preservation, noise removal and spectrum preservation with excellent results, showing good practicability and feasibility.
Drawings
The invention will be further described with reference to the accompanying drawings and examples, in which:
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a basic flow chart of image registration of the present invention;
FIG. 3 is a flow chart of the landslide feature detection noise reduction fusion of the present invention;
FIG. 4 is a flow chart of the landslide target detection area function generation process of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Example 1
As shown in fig. 1, the method for fusing SAR and optical image for landslide detection according to the embodiment of the present invention includes the following steps:
step 1, preprocessing SAR images and optical images;
Step 2, performing IHS (intensity-hue-saturation) transformation on the optical image to obtain the three components I, H and S;
step 3, performing stationary wavelet transform and high-frequency-component energy take-large fusion on the SAR image and the I component of the optical image;
Step 4, performing landslide feature detection on the low-frequency and high-frequency components of the SAR image and on the gray information of the image, establishing the SAR landslide target detection region function, and partitioning the SAR image;
Step 5, establishing landslide feature region fusion rules, and realizing image fusion according to a regional fusion strategy;
and 6, identifying and extracting landslide disaster information based on the fusion image.
Further, the specific steps of preprocessing the SAR image in step 1 are:
Step 11, selecting a diffusion function:
according to the selection criteria for the diffusion function in the anisotropic diffusion model, and by analysing the properties, curves and diffusion strength of the candidate functions, the function 1/[1 + (x/k)²] and the classical diffusion function exp[−(x/k)²] are selected as the diffusion functions of the anisotropic diffusion model.
Step 12, adopting the diffusion model,
in which x is the gradient magnitude in the four directions I_N, I_S, I_E, I_W, and GM is the mean of the local image gradient magnitudes.
Step 13, using the relative signal-to-noise ratio (RSNR) as the iteration termination condition, wherein:
I_k is the image after the k-th filtering iteration;
I_{k+1} is the image after the (k+1)-th filtering iteration;
Ω is the entire image domain.
The final iteration termination condition is
|RSNR_{k+1} − RSNR_k| / RSNR_k ≤ ε
where ε is a pre-selected threshold, generally ε ≥ 0.01.
When the iteration terminates, an image with relatively low noise content is obtained, so that the noise-removal capacity is maximized.
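Steps 11-13 can be sketched as one classical Perona-Malik diffusion iteration with the exp diffusion function, plus the relative-SNR termination test (the RSNR formula itself was lost in extraction, so only the termination comparison is shown; the parameter values are illustrative):

```python
import math

def diffuse_step(img, k=10.0, dt=0.2):
    """One anisotropic-diffusion step with g(x) = exp(-(x/k)^2), using the
    four directional gradients I_N, I_S, I_E, I_W at each interior pixel."""
    rows, cols = len(img), len(img[0])
    out = [row[:] for row in img]
    g = lambda grad: math.exp(-(grad / k) ** 2)
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            dn = img[i - 1][j] - img[i][j]  # north gradient
            ds = img[i + 1][j] - img[i][j]  # south gradient
            de = img[i][j + 1] - img[i][j]  # east gradient
            dw = img[i][j - 1] - img[i][j]  # west gradient
            out[i][j] = img[i][j] + dt * (g(dn) * dn + g(ds) * ds
                                          + g(de) * de + g(dw) * dw)
    return out

def should_stop(rsnr_prev, rsnr_next, eps=0.01):
    """Terminate when the relative RSNR change falls below the threshold."""
    return abs(rsnr_next - rsnr_prev) / rsnr_prev <= eps
```

In use, `diffuse_step` is applied repeatedly and `should_stop` is checked after each pass with the RSNR values of consecutive iterations.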
Further, the specific steps of preprocessing the optical image in step 1 are:
Step 14, correction processing:
optical images obtained from satellite ground stations have generally undergone only coarse processing, so the user usually needs to perform the fine processing, i.e., geometric correction of the image. There are three schemes for geometric correction, namely system correction, control-point correction and hybrid correction, briefly described as follows:
Step 141, system correction substitutes the calibration data of the remote sensor and measured values such as the sensor position and satellite attitude into a theoretical correction formula to correct the geometric distortion.
Step 142, control-point correction uses corresponding points between the deformed remote sensing image and a standard map, i.e., control-point data pairs, together with a mathematical model that approximately describes the geometric deformation of the remote sensing image; the geometric distortion model is solved from the ground control points and the image is then geometrically corrected.
Step 143, hybrid correction is a staged scheme: the theoretical geometric-distortion formula is first used for system correction, and ground control points are then used to correct the remaining geometric distortion.
The invention adopts the hybrid correction scheme in common use at present.
Step 15, image enhancement:
common image enhancement methods include linear transformation, histogram equalization, nonlinear transformation and piecewise linear transformation:
Step 151, linear transformation stretches the brightness range of the image to the whole display range, so that the light-tone areas of the resulting image appear lighter and the dark-tone areas darker; similar image data values are thereby displayed as distinguishable tones.
Step 152, histogram equalization stretches the contrast in the densely populated brightness ranges of the image, while the contrast at the two extremes and in the sparsely populated brightness ranges of the original image is compressed.
Step 153, nonlinear (piecewise linear) transformation divides the brightness range of a band into several segments and applies linear transformations of different degrees per segment, with the weights determined from the histogram; this enhances the contrast of the target image in a specific brightness range, such as the contrast between rivers, roads, farmland and vegetation, for interpretation.
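The linear transformation of step 151 is a simple min-max stretch; a minimal sketch (the 0-255 display range is an assumption for illustration):

```python
def linear_stretch(img, out_min=0, out_max=255):
    """Stretch the image's brightness range to the full display range, so
    light tones appear lighter and dark tones darker."""
    flat = [v for row in img for v in row]
    lo, hi = min(flat), max(flat)
    scale = (out_max - out_min) / ((hi - lo) or 1)  # guard flat images
    return [[round(out_min + (v - lo) * scale) for v in row] for row in img]
```
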
Further, the specific steps of the geometric registration of the SAR and optical images in step 1 are:
step 16, image registration is defined as the process of obtaining the coordinate-transformation parameters between images according to a similarity measurement criterion, so that two or more images of the same region acquired by different sensors, from different viewpoints and at different times are transformed into the same coordinate system. Mathematically, image registration can be expressed as
I1(x1, y1) = g(I2(T(x2, y2)))
wherein:
(x1, y1) are the pixel coordinates in the reference image;
(x2, y2) are the pixel coordinates in the unregistered image;
I1(x1, y1) is the pixel gray (luminance) value in the reference image;
I2(x2, y2) is the pixel gray (luminance) value in the unregistered image;
g is the one-dimensional gray-scale transformation between the images;
T is the two-dimensional spatial coordinate transformation between the images.
The process of determining T is the registration process.
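The spatial transform T in the definition above is commonly modelled as a 2-D affine transform; a minimal nearest-neighbour resampling sketch under that assumption (the patent does not fix the form of T, and the function names are illustrative):

```python
def affine_T(params, x, y):
    """Apply an affine coordinate transform T:
    (x', y') = (a*x + b*y + tx, c*x + d*y + ty)."""
    a, b, c, d, tx, ty = params
    return a * x + b * y + tx, c * x + d * y + ty

def resample_nearest(unregistered, T_params, out_rows, out_cols):
    """Build the registered image: map each reference-grid pixel through T
    into the unregistered image and take the nearest pixel (0 outside)."""
    rows, cols = len(unregistered), len(unregistered[0])
    out = []
    for x in range(out_rows):
        row = []
        for y in range(out_cols):
            u, v = affine_T(T_params, x, y)
            ui, vi = round(u), round(v)
            row.append(unregistered[ui][vi]
                       if 0 <= ui < rows and 0 <= vi < cols else 0)
        out.append(row)
    return out
```

Registration then amounts to estimating the six parameters of T from matched control points, after which every pixel of the unregistered image can be resampled into the reference coordinate system.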
Step 17, the general process of image registration: after the multi-sensor data have undergone strict geometric correction and the systematic errors have been removed, the images are projected onto the same ground coordinate system; a small number of control points are then selected on each sensor image, and accurate registration is achieved through several stages, such as automatically selecting feature points or computing the similarity between them, coarsely estimating the positions of the registration points, precisely determining the registration points, and estimating the registration transformation parameters. Image registration can be seen as a combination of several elements:
feature space: the feature sets extracted from the reference image and the input image for matching;
search space: the set of possible transformations establishing correspondence between the input features and the reference features;
search strategy: the way a computable transformation model is selected so that the matching is stepped up to the accuracy requirement during processing;
similarity metric: the evaluation of the match between the input data and the reference data under a given transformation taken from the search space.
The feature-based image matching method takes salient features extracted from the image gray levels as matching elements; the features used are usually points, lines, regions and the like. The algorithm consists of two main steps, feature extraction and feature matching. Before matching, features with marked gray-level change, such as points, lines and regions, are extracted from the images to be matched to form feature sets; a feature matching algorithm then selects as many matching feature pairs as possible from the feature sets of the images. Non-feature pixels are handled by interpolation and similar methods and their matching relation computed, so that pixel-by-pixel registration between the images is achieved.
Further, the specific steps of performing the IHS transformation on the optical image in step 2 are:
Step 21, IHS comprises three components: intensity (I), hue (H) and saturation (S). The IHS space can be derived from the RGB color space by a transform that is not a simple linear relation, and the IHS and RGB models can be converted into each other over their corresponding value ranges.
The calculation from RGB space to IHS space is as follows:
according to the value range of the H component, different color ranges are represented: 0° is red, 120° is green and 240° is blue, and hues from 0° to 240° cover all colors of the visible spectrum. The conversion from the cylindrical IHS model back to the RGB model must be considered in three cases:
Step 211, when 0° ≤ H < 120°, the IHS model corresponds to an RGB model whose hue falls mainly on the R hue;
Step 212, when 120° ≤ H < 240°, the IHS model corresponds to an RGB model whose hue falls mainly on the G hue;
Step 213, when 240° ≤ H < 360°, the IHS model corresponds to an RGB model whose hue falls mainly on the B hue.
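The conversion formulas for the three cases were lost in extraction; the sketch below shows only what the surrounding text states — the hue-sector selection — plus, as a hedged assumption, the intensity definition of the common triangular IHS model, I = (R+G+B)/3:

```python
def hue_sector(h_degrees):
    """Which RGB back-conversion case applies, per the three hue ranges in
    the text: [0, 120) -> R hue, [120, 240) -> G hue, [240, 360) -> B hue."""
    h = h_degrees % 360
    if h < 120:
        return "R"
    if h < 240:
        return "G"
    return "B"

def rgb_to_intensity(r, g, b):
    """I component of one common (triangular) IHS model: the mean of R, G, B.
    This is an assumed definition, not the patent's verbatim formula."""
    return (r + g + b) / 3.0
```
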
Further, the specific steps of the stationary wavelet transformation in the step 3 are as follows:
Step 31, for the j-th decomposition level, the filter is lengthened by inserting zeros between its coefficients (the "à trous" scheme, dilating the filter by a factor of 2^(j-1)); this extension of the filter realizes multi-scale analysis of the image, namely:
where {h_k} denotes the low-pass filter coefficients and {g_k} the high-pass filter coefficients.
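The zero-insertion dilation of step 31 can be sketched as follows. The factor 2^(j−1) follows the standard stationary-wavelet construction and is an interpretation of the garbled exponent in the text; the function name is an illustrative assumption.

```python
import numpy as np

def upsample_filter(h, level):
    """Dilate filter h for SWT level `level` by inserting zeros between
    taps (a trous scheme), so the same filter covers a wider support at
    deeper decomposition levels without downsampling the image."""
    h = np.asarray(h, dtype=float)
    if level == 1:
        return h
    step = 2 ** (level - 1)          # dilation factor for this level
    out = np.zeros((len(h) - 1) * step + 1)
    out[::step] = h                   # original taps, zeros in between
    return out

print(upsample_filter([1, 2, 3], 2))  # [1. 0. 2. 0. 3.]
```

At level 2 one zero is inserted between adjacent taps; at level 3, three zeros, and so on.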
In step 32, after the stationary wavelet transform the output keeps the size of the input, so the SWT (stationary wavelet transform) carries a certain redundancy. Applying the SWT to image analysis, convolution filtering is performed along the two dimensions of the image, and the resulting subbands can be expressed as:
where A_j denotes the low-frequency component of the j-th decomposition, and D_j^h, D_j^v, D_j^d denote the high-frequency components of the j-th decomposition in the horizontal (h), vertical (v), and diagonal (d) directions, respectively.
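A one-level undecimated 2-D decomposition matching this description (all four subbands keep the input size) can be sketched with numpy. The Haar filter pair and the circular (wrap-around) boundary handling are illustrative assumptions, not specified by the patent.

```python
import numpy as np

def swt2_level(img, h=(0.5, 0.5), g=(0.5, -0.5)):
    """One undecimated (stationary) 2-D wavelet level: filter rows and
    columns with low-pass h / high-pass g, with no downsampling, so every
    subband keeps the input size. Haar filters serve as a toy example."""
    def filt(x, f, axis):
        out = np.zeros_like(x, dtype=float)
        for k, c in enumerate(f):
            out += c * np.roll(x, -k, axis=axis)   # circular convolution
        return out
    lo_r = filt(img, h, 0); hi_r = filt(img, g, 0)
    a  = filt(lo_r, h, 1)   # low-low: approximation A_j
    dh = filt(lo_r, g, 1)   # horizontal detail D_j^h
    dv = filt(hi_r, h, 1)   # vertical detail D_j^v
    dd = filt(hi_r, g, 1)   # diagonal detail D_j^d
    return a, dh, dv, dd

img = np.arange(16, dtype=float).reshape(4, 4)
a, dh, dv, dd = swt2_level(img)
print(a.shape)  # (4, 4): all four subbands stay input-sized
```

For a constant image all three detail subbands are zero, as expected of a high-pass response.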
Further, the specific method for maximum-energy fusion of the high-frequency components in step 3 is as follows:
Step 33, high-frequency component energy calculation:
The input SAR image and the I component of the optical image are each decomposed by the stationary wavelet transform (SWT), yielding four groups of low- and high-frequency subbands: A_{s,j}(x,y), A_{v,j}(x,y), D_{s,j}^ε(x,y) and D_{v,j}^ε(x,y), where A_{s,j}(x,y) and A_{v,j}(x,y) denote the low-frequency information of the j-th decomposition of the SAR image and the optical image respectively, and D_{s,j}^ε(x,y) and D_{v,j}^ε(x,y) denote the corresponding high-frequency information in direction ε. The direction ε is indexed numerically: ε=1 denotes the horizontal direction, ε=2 the vertical direction, ε=3 the diagonal direction; j is the decomposition level.
Step 34, maximum-selection fusion:
Step 341, from the high-frequency subbands D_{s,j}^ε(x,y) and D_{v,j}^ε(x,y) of the two images, compute the high-frequency neighborhood energies E_{s,j}^ε(x,y) and E_{v,j}^ε(x,y). The energy convolution kernel typically has an odd side length; the invention uses a 3×3 window:
where E_{s,j}^ε(x,y) denotes the ε-direction high-frequency neighborhood energy of the SAR image at the j-th decomposition, and E_{v,j}^ε(x,y) that of the optical image.
Step 342, by comparing the neighborhood energies, the wavelet coefficients with the more salient energy are selected to form the new high-frequency coefficients used in the subsequent reconstruction; the energy-maximum selection strategy is:
where D_{F,j}^ε(x,y) denotes the high-frequency information of the fusion result.
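Steps 341-342 (3×3 neighborhood energy, then choose-max coefficient selection) can be sketched per subband as follows. Only the 3×3 window comes from the text; the edge padding and tie-breaking toward the SAR coefficient are assumptions.

```python
import numpy as np

def fuse_max_energy(d_sar, d_opt, win=3):
    """Region-energy 'choose max' rule: at each pixel keep the
    high-frequency coefficient whose local neighborhood energy is larger."""
    def neigh_energy(d):
        r = win // 2
        p = np.pad(d, r, mode='edge') ** 2      # squared coefficients
        e = np.zeros_like(d, dtype=float)
        for i in range(win):                     # sum over the win x win window
            for j in range(win):
                e += p[i:i + d.shape[0], j:j + d.shape[1]]
        return e
    e_s, e_v = neigh_energy(d_sar), neigh_energy(d_opt)
    return np.where(e_s >= e_v, d_sar, d_opt)    # ties go to the SAR band

d1 = np.zeros((4, 4)); d1[1, 1] = 5.0            # strong SAR detail at (1,1)
d2 = np.full((4, 4), 0.1)                        # weak optical texture
print(fuse_max_energy(d1, d2)[1, 1])             # 5.0
```

The same function would be applied independently to each direction ε and level j before reconstruction.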
Further, the specific method for establishing the landslide feature detection area function in the step 4 is as follows:
step 41, target analysis of SAR images:
By virtue of multi-resolution analysis, the wavelet transform can process and analyze an image comprehensively and from multiple angles. Large wavelet high-frequency coefficients in the SAR image mark the parts of the image with strong fluctuation, and these parts are often the landslide-feature salient regions of the SAR image.
Step 411, after a 3-level stationary wavelet decomposition (SWT) of the input SAR image f_s(x,y), four groups of subbands are produced: low-frequency information A_j(x,y) and high-frequency information D_j^ε(x,y), where A_j(x,y) denotes the low-frequency information of the j-th decomposition of the SAR image and D_j^ε(x,y) the high-frequency information in direction ε. Here ε is indexed by letters: ε=h denotes the horizontal direction, ε=v the vertical direction, ε=d the diagonal direction; j is the decomposition level.
Step 412, from the high-frequency subbands D_j^ε(x,y), the high-frequency detail intensity E_s(x,y) of the SAR image is calculated:
where |·| denotes the absolute value.
By normalizing the high-frequency energy and the low-frequency background, standardized high-frequency intensity information and low-frequency information are obtained, from which landslide feature extraction yields the high- and low-frequency salient features S_E(x,y) and S_A(x,y):
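The detail-intensity and normalization steps can be sketched as below. The reading of E_s as the sum of absolute values over the three directional subbands follows step 412 and the |·| remark, but the precise formula was an image in the original, so treat this as a plausible reconstruction.

```python
import numpy as np

def detail_intensity(dh, dv, dd):
    """High-frequency detail intensity: sum of absolute values of the
    three directional subbands (a plausible reading of E_s)."""
    return np.abs(dh) + np.abs(dv) + np.abs(dd)

def normalize01(x):
    """0-1 normalization used to standardize the high- and low-frequency
    bands before feature extraction."""
    rng = x.max() - x.min()
    return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)

e = detail_intensity(np.array([[1.0, -2.0]]),
                     np.array([[0.0, 1.0]]),
                     np.array([[3.0, 0.0]]))
print(e)  # [[4. 3.]]
```

S_E and S_A would then be derived from the normalized intensity and approximation bands respectively.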
Step 413, the simplest way to extract the landslide dark features is to normalize the low-frequency wavelet coefficients of the SAR image and then take their complement. Owing to the characteristics of SAR imagery and its multiplicative speckle noise, regions with small pixel values are only weakly contaminated by noise, and after the wavelet transform the low-frequency part of such regions appears as smooth dark target areas. A simple exponential transform attenuates the influence of a bright background on dark-target extraction. At the same time, so as not to disturb the fused spectral information, the exponential transform also reduces the weight of the dark targets in the fusion, keeping it below that of the landslide-feature salient regions but above that of the non-feature regions. Let f_s(x,y) of size m×n be the low-frequency data obtained from the 3-level SWT of the SAR image after 0-1 normalization, and let f_t(x,y) be the all-ones filling function (matrix) of size m×n; the landslide characteristic function S_N(x,y) of the dark features in the SAR image is then:
S_N(x,y) = [f_t(x,y) − f_s(x,y)]^α
where α is a filtering parameter that attenuates the influence of non-target regions on the feature function; in the invention α = 5.
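The dark-feature function S_N of step 413 is directly computable; the sketch below uses the all-ones filling function f_t and α = 5 as stated in the text.

```python
import numpy as np

def dark_feature(f_s_norm, alpha=5):
    """Dark-target feature S_N = (f_t - f_s)^alpha: the complement of the
    0-1 normalized low-frequency band, raised to alpha to suppress
    non-target (bright) regions."""
    f_t = np.ones_like(f_s_norm)          # all-ones filling function f_t
    return (f_t - f_s_norm) ** alpha

print(dark_feature(np.array([0.0, 0.5, 1.0])))  # [1.      0.03125 0.     ]
```

Raising the complement to the fifth power keeps fully dark pixels at weight 1 while a mid-gray pixel drops to about 0.03, matching the intended ranking between salient, dark, and non-feature regions.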
Step 42, building a landslide target area of the SAR image:
The obtained landslide characteristic functions are weighted and combined to obtain a final SAR image landslide characteristic function S (x, y):
S′(x,y) = [S_E(x,y) + S_A(x,y) + S_N(x,y)]^β
where β is an adjustment parameter that highlights the landslide feature regions; the invention takes β = 0.5. Finally, S′(x,y) is normalized to [0,1] to give S(x,y).
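Step 42 combines the three feature functions and renormalizes; a minimal sketch, using β = 0.5 from the text (the equal weighting of the three terms follows the displayed formula):

```python
import numpy as np

def landslide_feature(s_e, s_a, s_n, beta=0.5):
    """S' = (S_E + S_A + S_N)^beta, then 0-1 normalized to give S.
    beta < 1 compresses the dynamic range, lifting weak feature responses."""
    s = (s_e + s_a + s_n) ** beta
    rng = s.max() - s.min()
    return (s - s.min()) / rng if rng > 0 else np.zeros_like(s)

s = landslide_feature(np.array([0.0, 1.0]),
                      np.array([0.0, 1.0]),
                      np.array([0.0, 1.0]))
print(s)  # [0. 1.]
```

The resulting S(x,y) in [0,1] is what partitions the SAR image into non-feature, dark-feature, and salient-feature regions in step 5.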
Further, the specific steps of establishing the landslide feature region fusion rule in the step 5 are as follows:
Step 51, no-feature region fusion rule:
For the non-feature regions delimited by the feature salient regions, the SAR image contains no obvious feature information; these regions often carry the multiplicative speckle noise peculiar to SAR imagery (from which the noise can be inferred) and may also carry background shading conveying information such as terrain height. Since detail information and spectral information are inherently in tension, the main purpose of fusion in these regions is to preserve the spectral information of the optical image. At the same time, to avoid losing the feature dark regions along with the discarded background shading, these regions take the optical spectral information as the base and blend in the weighted salient gray-level detail, reducing the influence of noise on the spectral information.
For the non-feature targets of the SAR and optical images, the fusion rule selects the spectral information of the optical image plus weight-controlled SAR feature information, which can be expressed as:
where f_{R/C}, f_{G/C}, f_{B/C} denote the red, green, and blue channel components of the input optical image, the weighted SAR term represents the small amount of background shading present in the SAR image, and f_{R/F}, f_{G/F}, f_{B/F} denote the three channel components of the fused color image.
Step 52, fusion rule for the feature dark regions and feature salient regions:
For the target feature regions of the image (feature dark regions and feature salient regions), the fusion rule adds the feature information on top of each channel of the color image. The feature information screened by the feature salient region enhances the target information of the preliminary fusion result, making the target points more distinct and the texture details richer and clearer. The fusion is:
where f_I denotes the luminance component of the optical image, obtained by the color-model conversion, and f_Ir is the gray-level fusion result, obtained with the stationary-wavelet neighborhood-energy maximum fusion rule. λ is the ratio of the means of the gray fusion result f_Ir and the visible-image luminance f_I, used to eliminate the influence of redundant base color on the brightness of the fusion result; it is computed as:
where mean () represents the arithmetic mean operation of an image.
Step 53, unifying fusion rules:
By substituting the high frequency characteristic function S (x, y) into the fusion rule, the final fusion rule can be expressed as:
where f_{R/C}, f_{G/C}, f_{B/C} denote the R, G, B channels of the optical image, f_{R/F}, f_{G/F}, f_{B/F} the R, G, B channels of the fused image, f_Ir the gray fusion result of the SAR image and the I component of the color visible image under the stationary wavelet transform (SWT) framework, and f_I the luminance component of the visible image.
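The unified rule gates the luminance change by the feature function S(x,y). The per-region channel equations were images in the original, so the sketch below is only a plausible reading: the function name, the direction of the mean ratio λ, and the additive gating are assumptions.

```python
import numpy as np

def fuse_channels(opt_rgb, f_ir, f_i, s):
    """Region-weighted channel fusion (illustrative sketch).

    Each optical channel is shifted by the luminance difference
    (lambda * f_Ir - f_I), gated by the landslide feature function S in
    [0, 1]; the patent's exact per-region formula is not reproduced here."""
    lam = np.mean(f_i) / np.mean(f_ir)      # mean-brightness ratio lambda
    delta = s * (lam * f_ir - f_i)          # feature-gated luminance change
    return [np.clip(ch + delta, 0.0, 1.0) for ch in opt_rgb]

ones = np.full((2, 2), 0.5)
out = fuse_channels([ones, ones, ones], ones, ones, np.zeros((2, 2)))
print(out[0])  # unchanged where S = 0 (non-feature region)
```

Where S = 0 the optical spectral information passes through untouched; where S approaches 1 the gray fusion result dominates the luminance, mirroring the region-by-region strategy of steps 51-53.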
Example 2
The embodiment provides a SAR and optical image fusion system for landslide detection, comprising:
An image preprocessing unit for preprocessing the SAR image and the optical image;
the IHS conversion unit, used to perform the IHS transform on the optical image to obtain the three components I, H, and S;
the energy-maximum fusion unit, used to perform the stationary wavelet transform on the SAR image and the I component of the optical image and to fuse the high-frequency components by maximum energy selection;
the SAR landslide target detection region function building unit, used to perform landslide feature detection on the low- and high-frequency components of the SAR image and on the gray-level information of the image, to build the SAR landslide target detection region function, and to partition the SAR image;
the image fusion unit, used to establish the landslide feature region fusion rules and realize image fusion according to the region-based fusion strategy;
and the landslide hazard information extraction unit, used to identify and extract landslide hazard information based on the fused image.
In the energy-maximum fusion unit, the specific steps of the fusion are as follows:
Step 31, high-frequency component energy calculation:
The input SAR image and the I component of the optical image are each decomposed by the stationary wavelet transform, yielding four groups of low- and high-frequency subbands: A_{s,j}(x,y), A_{v,j}(x,y), D_{s,j}^ε(x,y) and D_{v,j}^ε(x,y), where A_{s,j}(x,y) and A_{v,j}(x,y) denote the low-frequency information of the j-th decomposition of the SAR image and the optical image respectively, and D_{s,j}^ε(x,y) and D_{v,j}^ε(x,y) the corresponding high-frequency information in direction ε; the direction ε is indexed numerically: ε=1 denotes the horizontal direction, ε=2 the vertical direction, ε=3 the diagonal direction, and j is the decomposition level;
Step 32, maximum-selection fusion:
Step 321, from the high-frequency subbands D_{s,j}^ε(x,y) and D_{v,j}^ε(x,y) of the two images, the high-frequency neighborhood energies E_{s,j}^ε(x,y) and E_{v,j}^ε(x,y) are computed respectively;
where E_{s,j}^ε(x,y) denotes the ε-direction high-frequency neighborhood energy of the SAR image at the j-th decomposition, and E_{v,j}^ε(x,y) that of the optical image;
Step 322, by comparing the neighborhood energies, the wavelet coefficients with the more salient energy are selected to form the new high-frequency coefficients used in the subsequent reconstruction; the energy-maximum selection strategy is:
where D_{F,j}^ε(x,y) denotes the high-frequency information of the fusion result.
The building process of the SAR landslide target detection area function is as follows:
step 41, target analysis of SAR images:
Large wavelet high-frequency coefficients in the SAR image mark the parts of the image with strong fluctuation, which are the landslide-feature salient regions of the SAR image;
Step 411, after a 3-level stationary wavelet decomposition (SWT) of the input SAR image f_s(x,y), four groups of subbands are produced: low-frequency information A_j(x,y) and high-frequency information D_j^ε(x,y), where A_j(x,y) denotes the low-frequency information of the j-th decomposition of the SAR image and D_j^ε(x,y) the high-frequency information in direction ε; ε is indexed by letters: ε=h denotes the horizontal direction, ε=v the vertical direction, ε=d the diagonal direction, and j is the decomposition level;
Step 412, from the high-frequency subbands D_j^ε(x,y), the high-frequency detail intensity E_s(x,y) of the SAR image is calculated:
where |·| denotes the absolute value;
By normalizing the high-frequency energy and the low-frequency background, standardized high-frequency intensity information and low-frequency information are obtained, from which landslide feature extraction yields the high- and low-frequency salient features S_E(x,y) and S_A(x,y):
Step 413, let f_s(x,y) of size m×n be the low-frequency data obtained from the 3-level SWT wavelet transform of the SAR image after 0-1 normalization, and let f_t(x,y) be the all-ones filling function of size m×n; the landslide characteristic function S_N(x,y) of the dark features in the SAR image is then:
S_N(x,y) = [f_t(x,y) − f_s(x,y)]^α
where α is a filtering parameter that attenuates the influence of non-target regions on the feature function;
Step 42, building a landslide target area of the SAR image:
The obtained landslide characteristic functions are weighted and combined to obtain a final SAR image landslide characteristic function S (x, y):
S′(x,y) = [S_E(x,y) + S_A(x,y) + S_N(x,y)]^β
where β is an adjustment parameter that highlights the landslide feature regions; the resulting S′(x,y) is normalized to [0,1] to give S(x,y).
S′(x,y) is the intermediate quantity of the SAR image landslide characteristic function,
S(x,y) is the final landslide characteristic function of the SAR image,
S_A(x,y) is the low-frequency salient feature,
S_E(x,y) is the high-frequency salient feature,
S_N(x,y) is the landslide characteristic function of the dark features in the SAR image.
In the image fusion unit, the specific content of the landslide feature area fusion rule is as follows:
Substituting the high-frequency characteristic function S(x,y) into the fusion rule, the final fusion rule is expressed as:
where f_{R/C}, f_{G/C}, f_{B/C} denote the R, G, B channels of the optical image, f_{R/F}, f_{G/F}, f_{B/F} the R, G, B channels of the fused image, f_Ir the gray fusion result of the SAR image and the I component of the color visible image under the stationary wavelet transform framework, and f_I the luminance component of the visible image; λ is the ratio of the means of the gray fusion result f_Ir and the visible-image luminance f_I, used to eliminate the influence of redundant base color on the brightness of the fusion result, computed as:
where mean () represents the arithmetic mean operation of an image.
It will be understood that modifications and variations will be apparent to those skilled in the art from the foregoing description, and it is intended that all such modifications and variations be included within the scope of the following claims.
Claims (8)
1. The SAR and optical image fusion method for landslide detection is characterized by comprising the following steps of:
step 1, preprocessing SAR images and optical images;
Step 2, performing the IHS transform on the optical image to obtain the three components I, H, and S;
Step 3, performing the stationary wavelet transform on the SAR image and the I component of the optical image and fusing the high-frequency components by maximum energy selection;
Step 4, respectively performing landslide feature detection on the low- and high-frequency components of the SAR image and on the gray-level information of the image, establishing the SAR landslide target detection region function, and partitioning the SAR image;
Step 5, establishing landslide feature region fusion rules, and realizing image fusion according to a regional fusion strategy;
step 6, identifying and extracting landslide disaster information based on the fusion image;
the process of establishing the SAR landslide target detection area function in the step 4 is as follows:
step 41, target analysis of SAR images:
Large wavelet high-frequency coefficients in the SAR image mark the landslide-feature salient regions of the SAR image;
Step 411, after a 3-level stationary wavelet decomposition (SWT) of the input SAR image f_s(x,y), four groups of subbands are produced: low-frequency information A_j(x,y) and high-frequency information D_j^ε(x,y), where A_j(x,y) denotes the low-frequency information of the j-th decomposition of the SAR image and D_j^ε(x,y) the high-frequency information in direction ε; ε is indexed by letters: ε=h denotes the horizontal direction, ε=v the vertical direction, ε=d the diagonal direction, and j is the decomposition level;
Step 412, from the high-frequency subbands D_j^ε(x,y), the high-frequency detail intensity E_s(x,y) of the SAR image is calculated:
where |·| denotes the absolute value;
By normalizing the high-frequency energy and the low-frequency background, standardized high-frequency intensity information and low-frequency information are obtained, from which landslide feature extraction yields the high- and low-frequency salient features S_E(x,y) and S_A(x,y):
Step 413, let f_s(x,y) of size m×n be the low-frequency data obtained from the 3-level SWT wavelet transform of the SAR image after 0-1 normalization, and let f_t(x,y) be the all-ones filling function of size m×n; the landslide characteristic function S_N(x,y) of the dark features in the SAR image is then:
S_N(x,y) = [f_t(x,y) − f_s(x,y)]^α
where α is a filtering parameter that attenuates the influence of non-target regions on the feature function;
Step 42, building a landslide target area of the SAR image:
The obtained landslide characteristic functions are weighted and combined to obtain a final SAR image landslide characteristic function S (x, y):
S′(x,y) = [S_E(x,y) + S_A(x,y) + S_N(x,y)]^β
where β is an adjustment parameter that highlights the landslide feature regions; the resulting S′(x,y) is normalized to [0,1] to give S(x,y).
2. The landslide detection-oriented SAR and optical image fusion method of claim 1, wherein the specific steps of the maximum-energy fusion of the high-frequency components in step 3 are as follows:
step 31, high-frequency component energy calculation:
The input SAR image and the I component of the optical image are each decomposed by the stationary wavelet transform, yielding four groups of low- and high-frequency subbands: A_{s,j}(x,y), A_{v,j}(x,y), D_{s,j}^ε(x,y) and D_{v,j}^ε(x,y), where A_{s,j}(x,y) and A_{v,j}(x,y) denote the low-frequency information of the j-th decomposition of the SAR image and the optical image respectively, and D_{s,j}^ε(x,y) and D_{v,j}^ε(x,y) the corresponding high-frequency information in direction ε; the direction ε is indexed numerically: ε=1 denotes the horizontal direction, ε=2 the vertical direction, ε=3 the diagonal direction, and j is the decomposition level;
Step 32, maximum-selection fusion:
Step 321, from the high-frequency subbands D_{s,j}^ε(x,y) and D_{v,j}^ε(x,y) of the two images, the high-frequency neighborhood energies E_{s,j}^ε(x,y) and E_{v,j}^ε(x,y) are computed respectively;
where E_{s,j}^ε(x,y) denotes the ε-direction high-frequency neighborhood energy of the SAR image at the j-th decomposition, and E_{v,j}^ε(x,y) that of the optical image;
Step 322, by comparing the neighborhood energies, the wavelet coefficients with the more salient energy are selected to form the new high-frequency coefficients used in the subsequent reconstruction; the energy-maximum selection strategy is:
where D_{F,j}^ε(x,y) denotes the high-frequency information of the fusion result.
3. The landslide detection-oriented SAR and optical image fusion method of claim 1, wherein the SAR landslide target detection area function established in step 4 is:
S′(x,y)=[(SE(x,y)+SA(x,y)+SN(x,y))]β
Wherein beta is an adjustment parameter used for highlighting a landslide characteristic target area;
S′(x,y) is the intermediate quantity of the SAR image landslide characteristic function,
S(x,y) is the final landslide characteristic function of the SAR image,
S_A(x,y) is the low-frequency salient feature,
S_E(x,y) is the high-frequency salient feature,
S_N(x,y) is the landslide characteristic function of the dark features in the SAR image.
4. The landslide detection-oriented SAR and optical image fusion method of claim 1, wherein the specific content of establishing the landslide feature region fusion rule in step 5 is as follows:
substituting the high-frequency characteristic function S (x, y) into a fusion rule, and finally expressing the fusion rule as follows:
where f_{R/C}, f_{G/C}, f_{B/C} denote the R, G, B channels of the optical image, f_{R/F}, f_{G/F}, f_{B/F} the R, G, B channels of the fused image, f_Ir the gray fusion result of the SAR image and the I component of the color visible image under the stationary wavelet transform framework, and f_I the luminance component of the visible image; λ is the ratio of the means of the gray fusion result f_Ir and the visible-image luminance f_I, used to eliminate the influence of redundant base color on the brightness of the fusion result, computed as:
where mean () represents the arithmetic mean operation of an image.
5. SAR and optical image fusion system towards landslide detection, characterized by comprising:
An image preprocessing unit for preprocessing the SAR image and the optical image;
the IHS conversion unit, used to perform the IHS transform on the optical image to obtain the three components I, H, and S;
the energy-maximum fusion unit, used to perform the stationary wavelet transform on the SAR image and the I component of the optical image and to fuse the high-frequency components by maximum energy selection;
the SAR landslide target detection region function building unit, used to perform landslide feature detection on the low- and high-frequency components of the SAR image and on the gray-level information of the image, to build the SAR landslide target detection region function, and to partition the SAR image;
the image fusion unit, used to establish the landslide feature region fusion rules and realize image fusion according to the region-based fusion strategy;
the landslide hazard information extraction unit, used to identify and extract landslide hazard information based on the fused image;
The building process of the SAR landslide target detection area function is as follows:
step 41, target analysis of SAR images:
Large wavelet high-frequency coefficients in the SAR image mark the landslide-feature salient regions of the SAR image;
Step 411, after a 3-level stationary wavelet decomposition (SWT) of the input SAR image f_s(x,y), four groups of subbands are produced: low-frequency information A_j(x,y) and high-frequency information D_j^ε(x,y), where A_j(x,y) denotes the low-frequency information of the j-th decomposition of the SAR image and D_j^ε(x,y) the high-frequency information in direction ε; ε is indexed by letters: ε=h denotes the horizontal direction, ε=v the vertical direction, ε=d the diagonal direction, and j is the decomposition level;
Step 412, from the high-frequency subbands D_j^ε(x,y), the high-frequency detail intensity E_s(x,y) of the SAR image is calculated:
where |·| denotes the absolute value;
By normalizing the high-frequency energy and the low-frequency background, standardized high-frequency intensity information and low-frequency information are obtained, from which landslide feature extraction yields the high- and low-frequency salient features S_E(x,y) and S_A(x,y):
Step 413, let f_s(x,y) of size m×n be the low-frequency data obtained from the 3-level SWT wavelet transform of the SAR image after 0-1 normalization, and let f_t(x,y) be the all-ones filling function of size m×n; the landslide characteristic function S_N(x,y) of the dark features in the SAR image is then:
S_N(x,y) = [f_t(x,y) − f_s(x,y)]^α
where α is a filtering parameter that attenuates the influence of non-target regions on the feature function;
Step 42, building a landslide target area of the SAR image:
The obtained landslide characteristic functions are weighted and combined to obtain a final SAR image landslide characteristic function S (x, y):
S′(x,y) = [S_E(x,y) + S_A(x,y) + S_N(x,y)]^β
where β is an adjustment parameter that highlights the landslide feature regions; the resulting S′(x,y) is normalized to [0,1] to give S(x,y).
6. The landslide detection-oriented SAR and optical image fusion system of claim 5, wherein, in the energy-maximum fusion unit, the specific steps of the fusion are:
step 31, high-frequency component energy calculation:
The input SAR image and the I component of the optical image are each decomposed by the stationary wavelet transform, yielding four groups of low- and high-frequency subbands: A_{s,j}(x,y), A_{v,j}(x,y), D_{s,j}^ε(x,y) and D_{v,j}^ε(x,y), where A_{s,j}(x,y) and A_{v,j}(x,y) denote the low-frequency information of the j-th decomposition of the SAR image and the optical image respectively, and D_{s,j}^ε(x,y) and D_{v,j}^ε(x,y) the corresponding high-frequency information in direction ε; the direction ε is indexed numerically: ε=1 denotes the horizontal direction, ε=2 the vertical direction, ε=3 the diagonal direction, and j is the decomposition level;
Step 32, maximum-selection fusion:
Step 321, from the high-frequency subbands D_{s,j}^ε(x,y) and D_{v,j}^ε(x,y) of the two images, the high-frequency neighborhood energies E_{s,j}^ε(x,y) and E_{v,j}^ε(x,y) are computed respectively;
where E_{s,j}^ε(x,y) denotes the ε-direction high-frequency neighborhood energy of the SAR image at the j-th decomposition, and E_{v,j}^ε(x,y) that of the optical image;
Step 322, by comparing the neighborhood energies, the wavelet coefficients with the more salient energy are selected to form the new high-frequency coefficients used in the subsequent reconstruction; the energy-maximum selection strategy is:
where D_{F,j}^ε(x,y) denotes the high-frequency information of the fusion result.
7. The landslide detection-oriented SAR and optical image fusion system of claim 5, wherein in the SAR landslide target detection area function creating unit, the SAR landslide target detection area function is:
S′(x,y)=[(SE(x,y)+SA(x,y)+SN(x,y))]β
Wherein beta is an adjustment parameter used for highlighting a landslide characteristic target area;
S′(x,y) is the intermediate quantity of the SAR image landslide characteristic function,
S(x,y) is the final landslide characteristic function of the SAR image,
S_A(x,y) is the low-frequency salient feature,
S_E(x,y) is the high-frequency salient feature,
S_N(x,y) is the landslide characteristic function of the dark features in the SAR image.
8. The landslide detection-oriented SAR and optical image fusion system according to claim 5, wherein in the image fusion unit, the specific contents of the landslide feature region fusion rule are:
Substituting the high-frequency characteristic function S(x,y) into the fusion rule, the final fusion rule is expressed as:
where f_{R/C}, f_{G/C}, f_{B/C} denote the R, G, B channels of the optical image, f_{R/F}, f_{G/F}, f_{B/F} the R, G, B channels of the fused image, f_Ir the gray fusion result of the SAR image and the I component of the color visible image under the stationary wavelet transform framework, and f_I the luminance component of the visible image; λ is the ratio of the means of the gray fusion result f_Ir and the visible-image luminance f_I, used to eliminate the influence of redundant base color on the brightness of the fusion result, computed as:
where mean () represents the arithmetic mean operation of an image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011045558.3A CN112307901B (en) | 2020-09-28 | 2020-09-28 | SAR and optical image fusion method and system for landslide detection |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112307901A CN112307901A (en) | 2021-02-02 |
CN112307901B true CN112307901B (en) | 2024-05-10 |
Family
ID=74489326
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011045558.3A Active CN112307901B (en) | 2020-09-28 | 2020-09-28 | SAR and optical image fusion method and system for landslide detection |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112307901B (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113076991B (en) * | 2021-03-30 | 2024-03-08 | 中国人民解放军93114部队 | Nonlinear integration algorithm-based multi-target information comprehensive processing method and device |
CN113538306B (en) * | 2021-06-15 | 2024-02-13 | 西安电子科技大学 | SAR image and low-resolution optical image multi-image fusion method |
CN115060208A (en) * | 2022-06-30 | 2022-09-16 | 国网山东省电力公司电力科学研究院 | Power transmission and transformation line geological disaster monitoring method and system based on multi-source satellite fusion |
CN115236655B (en) | 2022-09-01 | 2022-12-20 | 成都理工大学 | Landslide identification method, system, equipment and medium based on fully-polarized SAR |
CN115525727A (en) * | 2022-10-14 | 2022-12-27 | 昆明理工大学 | Agile power transmission line point cloud modeling and analyzing system |
CN117848979A (en) * | 2023-04-21 | 2024-04-09 | 江苏鲸天科技有限公司 | Intelligent equipment environment remote sensing monitoring system and method |
CN116452936B (en) * | 2023-04-22 | 2023-09-29 | 安徽大学 | Rotation target detection method integrating optics and SAR image multi-mode information |
CN118366059A (en) * | 2024-06-20 | 2024-07-19 | 山东锋士信息技术有限公司 | Crop water demand calculating method based on optical and SAR data fusion |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101510309A (en) * | 2009-03-30 | 2009-08-19 | 西安电子科技大学 | Segmentation method for improving water parting SAR image based on compound wavelet veins region merge |
JP5636085B1 (en) * | 2013-12-27 | 2014-12-03 | アジア航測株式会社 | Single-polarization SAR color image creation device |
CN105160648A (en) * | 2014-11-26 | 2015-12-16 | 中国人民解放军第二炮兵工程大学 | Radar target and shadow segmentation method based on wavelet and constant false alarm rate |
CN105809194A (en) * | 2016-03-08 | 2016-07-27 | 华中师范大学 | Method for translating SAR image into optical image |
CN106600572A (en) * | 2016-12-12 | 2017-04-26 | 长春理工大学 | Adaptive low-illumination visible image and infrared image fusion method |
CN106960430A (en) * | 2017-03-17 | 2017-07-18 | 西安电子科技大学 | Based on subregional SAR image and color visible image fusion method |
CN108765359A (en) * | 2018-05-31 | 2018-11-06 | 安徽大学 | Fusion method of hyperspectral remote sensing image and full-color image based on JSK model and NSCT technology |
CN109409292A (en) * | 2018-10-26 | 2019-03-01 | 西安电子科技大学 | The heterologous image matching method extracted based on fining characteristic optimization |
CN109613513A (en) * | 2018-12-20 | 2019-04-12 | 长安大学 | A kind of potential landslide automatic identifying method of optical remote sensing for taking InSAR deformation into account |
CN111178388A (en) * | 2019-12-05 | 2020-05-19 | 上海交通大学 | Partial discharge phase distribution detection method based on NSCT photoelectric fusion atlas |
KR102086323B1 (en) * | 2019-09-30 | 2020-05-26 | 대한민국 | Method for providing automatic monitoring service with continuity of sentinel satellite imagery based on permanent scatterer interferometric synthetic aperture radar |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI353561B (en) * | 2007-12-21 | 2011-12-01 | Ind Tech Res Inst | 3d image detecting, editing and rebuilding system |
- 2020-09-28: application CN202011045558.3A filed in CN; granted as CN112307901B, status Active
Non-Patent Citations (6)
Title |
---|
"Remote Sensing Image Change Detection Methods and Applications Based on Spatio-Temporal Modeling"; 张普照; China Doctoral Dissertations Full-text Database, Engineering Science & Technology II; 2020-07-15; no. 7; C028-8 *
"Determination of Optimum Tie Point Interval for SAR Image Coregistration by Decomposing Autocorrelation Coefficient"; Zou, Weibao et al.; IEEE Transactions on Geoscience and Remote Sensing; 2019-02-20; vol. 57, no. 7; pp. 5067-5084 *
"Research on SAR Image Denoising, Segmentation and Target Detection Methods"; 郝亚冰; China Master's Theses Full-text Database, Information Science & Technology; 2013-04-15; no. 4; I136-795 *
"Research on SAR and Optical Image Fusion Methods Using Texture Features"; 卜丽静; Engineering of Surveying and Mapping; 2015-05-25; vol. 24, no. 5; pp. 5-10 *
"Research on Typical Target Detection and Recognition Based on Fusion of Optical and SAR Remote Sensing Images"; 陈稳; China Master's Theses Full-text Database, Engineering Science & Technology II; 2020-02-15; no. 2; C028-218 *
"Information Fusion of Optical Image and SAR Image Based on DEM"; Y. Zhang et al.; 2019 IEEE International Conference on Signal, Information and Data Processing (ICSIDP); 2020-08-21; pp. 1-5 *
Legal Events
Code | Title |
---|---|
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
GR01 | Patent grant |