CN110728706B - SAR image fine registration method based on deep learning - Google Patents
- Publication number
- CN110728706B (application CN201910943154.7A)
- Authority
- CN
- China
- Prior art keywords
- neural network
- image
- sar
- sub
- function
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/80—Geometric correction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10032—Satellite or aerial image; Remote sensing
- G06T2207/10044—Radar image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a deep-learning-based fine registration method for SAR images, which mainly solves the problems that traditional methods cannot correct local distortion and are time-consuming. The implementation scheme is: 1) acquire a training data set; 2) construct a neural network for fine registration of SAR images; 3) construct the loss function of the neural network model for fine registration of SAR images; 4) train the neural network on the training data set to obtain a trained network model; 5) input the SAR image to be registered and the reference SAR image into the trained network model to obtain the registered SAR image. The invention can correct both the overall deformation and the local distortion between SAR images, improves registration performance, speeds up registration, and can be used for SAR image fusion and change detection.
Description
Technical Field
The invention belongs to the technical field of image processing, and in particular relates to a fine registration method for SAR images, which can be used for SAR image fusion and change detection.
Background Art
Synthetic aperture radar (SAR) is an active microwave imaging system capable of all-weather observation of ground and sea surfaces under different climate and illumination conditions, and it plays an important role in many applications such as geological resource exploration, ocean monitoring and urban planning. With the continuous development of SAR imaging technology, SAR imaging systems have acquired a large amount of valuable earth observation data. In SAR image processing it is often necessary to analyze two or more SAR images jointly, as in SAR image fusion and SAR image change detection, and SAR image registration is the prerequisite for these processing tasks.
Current SAR image registration methods fall into two categories. The first is region-based methods, which establish a matching relationship between images based on similarity measures such as cross-correlation or mutual information; such methods are very time-consuming and their registration accuracy is limited. The second is feature-based methods, which establish a matching relationship by comparing distances between local invariant feature descriptors and screening the matches with a threshold. Although faster and more accurate than region-based methods, they are still time-consuming.
Dellinger et al., in the paper "SAR-SIFT: A SIFT-like Algorithm for SAR Images" (IEEE Transactions on Geoscience & Remote Sensing, 2015, 53(1):453-466), proposed a typical feature-based SAR image registration method. The method first constructs the SAR-Harris scale space, then searches for extreme points in that space and screens them to find stable feature points; it next generates the scale-invariant feature transform descriptor for synthetic aperture radar (SAR-SIFT) from the feature points, and finally removes wrongly matched point pairs with the nearest-neighbor method. Its drawback is that building the scale space and computing the feature descriptors is computationally expensive and time-consuming.
Traditional SAR image registration estimates the geometric transformation parameters between images and applies a single consistent correction to the whole image. In practice, however, besides the overall deformation between two SAR images there are also local distortions caused by differences in observation angle. These local distortions hinder subsequent SAR image processing, yet neither of the two traditional registration approaches can correct them, which limits registration performance and speed.
Summary of the Invention
The purpose of the present invention is to address the above deficiencies of the prior art by proposing a deep-learning-based fine registration method for SAR images, so as to correct the overall deformation and local distortion between SAR images, improve registration performance and speed up registration.
To achieve the above object, the technical solution of the present invention comprises the following steps:
(1) Acquire multiple SAR images with overlapping areas obtained by observing the same scene from different viewing angles, forming a data set Φ;
(2) Construct a neural network for fine registration of SAR images:
(2a) Construct a sub-convolutional neural network for correcting the overall deformation between images. It consists of eight convolutional layers followed by a global average pooling layer; the first seven convolutional layers use the ReLU activation function, the eighth convolutional layer outputs 3 feature maps with a linear activation function, and the global average pooling layer averages all values of each feature map to obtain three output values corresponding respectively to the horizontal translation, vertical translation and rotation angle of the geometric deformation between the images. Based on these three outputs, the image to be registered is globally corrected by bilinear interpolation;
(2b) Construct a sub-residual neural network for correcting local deformation between SAR images. It consists of eight convolutional layers followed by four deconvolutional layers, all convolutional layers and the first three deconvolutional layers using the ReLU activation function and the fourth deconvolutional layer using the Tanh activation function. The sub-residual neural network outputs one feature map u, the offset of each pixel with respect to the corresponding pixel of the reference image; based on this offset, the image already corrected by the sub-convolutional network is further corrected for local distortion by bilinear interpolation;
(2c) Connect the two sub-networks built in steps (2a) and (2b) in sequence to form the SAR image fine registration neural network;
(2d) Construct the loss function Loss of the neural network for fine registration of SAR images:
(2d1) Construct the loss function of the sub-convolutional neural network: loss_glo = ||x − y||_1 + (1 − SSIM(x, y))
(2d2) Construct the loss function of the sub-residual neural network: loss_loc = ||x_1 − y||_1 + (1 − SSIM(x_1, y)) + ||∂u/∂h||_1 + ||∂u/∂v||_1
(2d3) Add the loss functions constructed in (2d1) and (2d2) to obtain the loss function of the neural network for fine registration of SAR images: Loss = loss_glo + loss_loc,
where x is the corrected image produced by the sub-convolutional neural network, y is the reference image, x_1 is the corrected image produced by the sub-residual neural network, ||·||_1 denotes the 1-norm, SSIM(·) is the structural similarity function, and ∂/∂h and ∂/∂v denote the local differentials along the horizontal and vertical directions, respectively;
(3) Train the neural network built in (2) with the data set Φ obtained in (1) to obtain the registered image:
(3a) Set the learning rate to 0.0001;
(3b) Randomly draw two SAR images from the data set Φ, one as the image to be registered and one as the reference image, and input the two images into the neural network for fine registration of SAR images;
(3c) Update the weights of the neural network built in (2) with the back-propagation algorithm so as to reduce the value of the Loss function;
(3d) Repeat steps (3b) and (3c) until the loss function Loss of the network converges, obtaining the trained neural network model for fine registration of SAR images;
(3e) Input the reference image together with the image to be registered into the trained neural network model to obtain the registered SAR image.
Compared with the prior art, the present invention has the following advantages:
1. The present invention adopts a deep learning approach, constructing a network model and a loss function to achieve fine registration of SAR images; it can correct the overall deformation and local distortion between SAR images and improves registration performance.
2. The present invention designs an end-to-end neural network model suited to SAR image registration: inputting the SAR image to be registered and the reference SAR image into the model directly yields the registered SAR image. This avoids the traditional, complex feature-descriptor extraction process, simplifies the operation and speeds up registration.
Description of Drawings
Fig. 1 is the implementation flow chart of the present invention;
Fig. 2 shows simulation results of SAR image registration with the present invention.
Detailed Description
The embodiments and effects of the present invention are further described below with reference to the accompanying drawings.
Referring to Fig. 1, the implementation steps of the present invention are as follows:
Step 1: obtain the data sets for network training and testing.
Select a specific scene and transmit signal pulses toward it; the reflected signal enters the radar receiver. Using SAR imaging technology, multiple SAR images of the same scene observed from different viewing angles are acquired, and these images form the data set Φ = {I_i, i = 1, …, N}, where I_i denotes the i-th acquired image of size m×n and N is the total number of images.
Step 2: construct the neural network model for fine registration of SAR images.
The network consists of a sub-convolutional neural network for correcting the overall deformation between SAR images followed by a sub-residual neural network for correcting the local deformation between SAR images.
2a) Construct the sub-convolutional neural network that corrects the overall deformation between SAR images. The network consists of eight convolutional layers followed by a global average pooling layer. The first seven convolutional layers output 32, 64, 128, 256, 256, 256 and 256 feature maps with kernel sizes 7×7, 5×5, 3×3, 3×3, 3×3, 3×3 and 3×3, respectively; all strides are 2×2 and the activation function is ReLU. The eighth convolutional layer outputs 3 feature maps with kernel size 3×3, stride 2×2 and a linear activation function. The global average pooling layer averages all values of each feature map to obtain three output values corresponding to the horizontal translation, vertical translation and rotation angle of the geometric deformation between the images; based on these three outputs, the image to be registered is globally corrected by bilinear interpolation.
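The global correction described above can be illustrated with a small sketch: given the three predicted values (horizontal translation, vertical translation, rotation angle), every output pixel is mapped back into the input image and sampled by bilinear interpolation. This is a minimal pure-Python illustration; the function names, the rotation about the image center, and the sign convention for (tx, ty, theta) are assumptions for the example, not definitions taken from the patent.

```python
import math

def bilinear(img, r, c):
    """Sample img at fractional coordinates (r, c); out-of-bounds taps contribute 0."""
    h, w = len(img), len(img[0])
    r0, c0 = int(math.floor(r)), int(math.floor(c))
    dr, dc = r - r0, c - c0
    val = 0.0
    for ri, wr in ((r0, 1 - dr), (r0 + 1, dr)):
        for ci, wc in ((c0, 1 - dc), (c0 + 1, dc)):
            if 0 <= ri < h and 0 <= ci < w:
                val += wr * wc * img[ri][ci]
    return val

def global_correct(img, tx, ty, theta):
    """Undo a rotation by theta (radians, about the image center) plus a
    (tx, ty) translation by mapping each output pixel back into the input."""
    h, w = len(img), len(img[0])
    cr, cc = (h - 1) / 2.0, (w - 1) / 2.0
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    out = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            # inverse map: output (r, c) -> source location in the input image
            y, x = r - cr - ty, c - cc - tx
            src_r = cos_t * y + sin_t * x + cr
            src_c = -sin_t * y + cos_t * x + cc
            out[r][c] = bilinear(img, src_r, src_c)
    return out
```

With tx = ty = theta = 0 the warp is the identity; an integer tx shifts the image by whole pixels, so the sketch can be checked against hand-computed results.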
2b) Construct the sub-residual neural network that corrects local deformation between SAR images. The network consists of eight convolutional layers followed by four deconvolutional layers. The first four convolutional layers output 32, 64, 128 and 256 feature maps with kernel sizes 7×7, 5×5, 3×3 and 3×3, strides of 2×2 and ReLU activation. The next four convolutional layers each output 256 feature maps with kernel size 3×3, stride 1×1 and ReLU activation. The following three deconvolutional layers output 256, 128 and 64 feature maps with kernel sizes 3×3, 3×3 and 5×5, strides of 2×2 and ReLU activation. The last deconvolutional layer outputs 1 feature map with kernel size 7×7, stride 2×2 and Tanh activation. The output of this network, denoted u, represents the offset of the corresponding pixel values between the two images; based on this offset, the image corrected by the sub-convolutional network of 2a) is further corrected for local distortion by bilinear interpolation.
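The local correction step amounts to a dense inverse warp: each output pixel is sampled from its own offset position in the input image. The patent describes a single offset map u; for clarity this sketch uses separate row and column offset fields ur and uc, which is an illustrative assumption rather than the patent's exact formulation.

```python
import math

def bilinear(img, r, c):
    """Sample img at fractional coordinates (r, c); out-of-bounds taps contribute 0."""
    h, w = len(img), len(img[0])
    r0, c0 = int(math.floor(r)), int(math.floor(c))
    dr, dc = r - r0, c - c0
    val = 0.0
    for ri, wr in ((r0, 1 - dr), (r0 + 1, dr)):
        for ci, wc in ((c0, 1 - dc), (c0 + 1, dc)):
            if 0 <= ri < h and 0 <= ci < w:
                val += wr * wc * img[ri][ci]
    return val

def local_correct(img, ur, uc):
    """Dense inverse warp: output pixel (r, c) is sampled from the input
    at (r + ur[r][c], c + uc[r][c]) by bilinear interpolation."""
    h, w = len(img), len(img[0])
    return [[bilinear(img, r + ur[r][c], c + uc[r][c]) for c in range(w)]
            for r in range(h)]
```

A constant half-pixel offset produces the average of two neighboring input pixels, which makes the sketch easy to verify by hand.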
Step 3: construct the loss function Loss of the neural network for fine registration of SAR images.
3a) Construct the loss function loss_glo of the sub-convolutional neural network that corrects the overall deformation between SAR images:
(3a1) Input the image to be registered and the reference image y into the sub-convolutional neural network to obtain the first corrected image x. Based on the pixel similarity between x and y, construct the function ||x − y||_1 with the 1-norm as the pixel-similarity measure; the larger the function value, the worse the pixel similarity;
(3a2) Based on the structural similarity between the first corrected image x and the reference image y, select the structural similarity function SSIM(·) as the structural-similarity measure and construct the function (1 − SSIM(x, y)); the larger the function value, the worse the structural similarity;
(3a3) Add the functions constructed in (3a1) and (3a2) to obtain the loss function:
loss_glo = ||x − y||_1 + (1 − SSIM(x, y))
where ||·||_1 denotes the 1-norm and SSIM(·) is the structural similarity function;
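The two terms of loss_glo can be sketched directly. One simplification is assumed to keep the sketch short: standard SSIM averages statistics over local windows, whereas this illustration computes a single SSIM value from whole-image statistics; c1 and c2 are the commonly used SSIM stabilizing constants for a unit dynamic range.

```python
def l1(x, y):
    """1-norm of the pixel-wise difference between two images (lists of rows)."""
    return sum(abs(a - b) for ra, rb in zip(x, y) for a, b in zip(ra, rb))

def ssim_global(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Single-window SSIM computed from whole-image means, variances, covariance."""
    xs = [v for row in x for v in row]
    ys = [v for row in y for v in row]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    vx = sum((v - mx) ** 2 for v in xs) / n
    vy = sum((v - my) ** 2 for v in ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / n
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def loss_glo(x, y):
    """Pixel-similarity term plus structural-similarity term."""
    return l1(x, y) + (1 - ssim_global(x, y))
```

For identical images the 1-norm term is zero and SSIM equals one, so loss_glo vanishes; any mismatch increases both terms.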
3b) Construct the loss function loss_loc of the sub-residual neural network that corrects local deformation between SAR images:
(3b1) Input the first corrected image x into the sub-residual neural network to obtain the second corrected image x_1. Based on the pixel similarity between x_1 and the reference image y, construct ||x_1 − y||_1 with the 1-norm as the pixel-similarity measure; the larger the function value, the worse the pixel similarity;
(3b2) Based on the structural similarity between the second corrected image x_1 and the reference image y, select the structural similarity function SSIM(·) as the structural-similarity measure and construct the function (1 − SSIM(x_1, y)); the larger the function value, the worse the structural similarity;
(3b3) The offset output by the sub-residual neural network should be spatially smooth, and spatial smoothness is reflected in the horizontal and vertical gradients; therefore construct the function ||∂u/∂h||_1 + ||∂u/∂v||_1 as the measure of spatial smoothness; the larger the function value, the worse the spatial smoothness;
(3b4) Add the functions constructed in (3b1), (3b2) and (3b3) to obtain the loss function:
loss_loc = ||x_1 − y||_1 + (1 − SSIM(x_1, y)) + ||∂u/∂h||_1 + ||∂u/∂v||_1,
where ∂/∂h and ∂/∂v denote the partial derivatives along the horizontal and vertical directions, respectively.
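The smoothness term of (3b3) penalizes the horizontal and vertical gradients of the offset map u. A minimal sketch using forward differences follows; the discrete-gradient choice is an assumption, since the patent does not spell out the discretization.

```python
def smoothness(u):
    """Sum of absolute horizontal and vertical forward differences of offset map u."""
    h, w = len(u), len(u[0])
    horiz = sum(abs(u[r][c + 1] - u[r][c]) for r in range(h) for c in range(w - 1))
    vert = sum(abs(u[r + 1][c] - u[r][c]) for r in range(h - 1) for c in range(w))
    return horiz + vert
```

A constant offset map costs nothing, while abrupt jumps in u are penalized in proportion to their size, which is exactly the regularizing behavior the step describes.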
3c) Add the sub-convolutional network loss loss_glo of 3a) and the sub-residual network loss loss_loc of 3b) to obtain the loss function Loss of the neural network for fine registration of SAR images:
Loss = loss_glo + loss_loc.
Step 4: train the neural network built in Step 2 with the data set Φ from Step 1 to obtain the trained network model.
4a) Randomly draw two SAR images from the data set Φ, one as the image to be registered and one as the reference image, and input the two images into the neural network for fine registration of SAR images;
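Drawing the training pair is a sample without replacement from Φ, so the image to be registered and the reference image are always distinct. A small sketch follows; the helper name and the use of random.sample are illustrative assumptions.

```python
import random

def sample_pair(dataset, rng=random):
    """Draw (image to be registered, reference image) as two distinct items of Φ."""
    i, j = rng.sample(range(len(dataset)), 2)
    return dataset[i], dataset[j]
```

Feeding each drawn pair to the network and back-propagating the Loss of Step 3 then proceeds as in 4b) and 4c).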
4b) Update the weights of the neural network for fine registration of SAR images with the back-propagation algorithm;
4c) Set the learning rate to 0.0001 and repeat steps 4a) and 4b) until the loss function Loss converges, obtaining the trained neural network model for fine registration of SAR images.
Step 5: input the SAR image to be registered and the reference SAR image into the trained network model to obtain the registered SAR image.
The effect of the present invention can be further illustrated by the following simulation:
1. Simulation conditions:
From the data set Φ, randomly extract two images, Fig. 2(a) and Fig. 2(b), and take Fig. 2(a) as the reference SAR image and Fig. 2(b) as the SAR image to be registered;
2. Simulation process:
Step 1: register Fig. 2(a) and Fig. 2(b) with the traditional SAR-SIFT registration method; the result is shown in Fig. 2(c).
Step 2: register the images with the neural network of the present invention; the result is shown in Fig. 2(d).
Step 3: subtract the corresponding pixel values of Fig. 2(a) and Fig. 2(c) and take the absolute value; the result is shown in Fig. 2(e).
Step 4: subtract the corresponding pixel values of Fig. 2(a) and Fig. 2(d) and take the absolute value; the result is shown in Fig. 2(f).
As can be seen from Fig. 2(e) and Fig. 2(f), the deep-learning-based fine registration method for SAR images of the present invention can correct the overall deformation and local distortion between SAR images and improve registration performance.
The above description is only a specific example of the present invention and does not constitute any limitation of the present invention. Obviously, those skilled in the art, after understanding the content and principles of the present invention, may make various modifications and changes in form and detail without departing from the principles and structure of the present invention; such modifications and changes based on the idea of the present invention nevertheless remain within the protection scope of the claims of the present invention.
Claims (5)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910943154.7A CN110728706B (en) | 2019-09-30 | 2019-09-30 | SAR image fine registration method based on deep learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910943154.7A CN110728706B (en) | 2019-09-30 | 2019-09-30 | SAR image fine registration method based on deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110728706A CN110728706A (en) | 2020-01-24 |
CN110728706B true CN110728706B (en) | 2021-07-06 |
Family
ID=69218699
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910943154.7A Active CN110728706B (en) | 2019-09-30 | 2019-09-30 | SAR image fine registration method based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110728706B (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112801110B (en) * | 2021-02-01 | 2022-11-01 | 中车青岛四方车辆研究所有限公司 | Target detection method and device for image distortion correction of linear array camera of rail train |
CN113034453B (en) * | 2021-03-16 | 2023-01-10 | 深圳先进技术研究院 | A breast image registration method based on deep learning |
CN113506233B (en) * | 2021-07-08 | 2024-04-19 | 西安电子科技大学 | SAR self-focusing method based on deep learning |
CN113469985A (en) * | 2021-07-13 | 2021-10-01 | 中国科学院深圳先进技术研究院 | Method for extracting characteristic points of endoscope image |
CN116402806B (en) * | 2023-04-26 | 2023-11-14 | 杭州瑞普基因科技有限公司 | Three-dimensional reconstruction method and system based on tissue slice immunohistochemical image |
CN117710711B (en) * | 2024-02-06 | 2024-05-10 | 东华理工大学南昌校区 | Optical and SAR image matching method based on lightweight depth convolution network |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107886508A (en) * | 2017-11-23 | 2018-04-06 | 上海联影医疗科技有限公司 | Difference subtracts image method and medical image processing method and system |
WO2018125014A1 (en) * | 2016-12-26 | 2018-07-05 | Argosai Teknoloji Anonim Sirketi | A method for foreign object debris detection |
CN109146937A (en) * | 2018-08-22 | 2019-01-04 | 广东电网有限责任公司 | A kind of electric inspection process image dense Stereo Matching method based on deep learning |
CN109461180A (en) * | 2018-09-25 | 2019-03-12 | 北京理工大学 | A kind of method for reconstructing three-dimensional scene based on deep learning |
CN110197503A (en) * | 2019-05-14 | 2019-09-03 | 北方夜视技术股份有限公司 | Non-rigid point set method for registering based on enhanced affine transformation |
-
2019
- 2019-09-30 CN CN201910943154.7A patent/CN110728706B/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018125014A1 (en) * | 2016-12-26 | 2018-07-05 | Argosai Teknoloji Anonim Sirketi | A method for foreign object debris detection |
CN107886508A (en) * | 2017-11-23 | 2018-04-06 | 上海联影医疗科技有限公司 | Difference subtracts image method and medical image processing method and system |
CN109146937A (en) * | 2018-08-22 | 2019-01-04 | 广东电网有限责任公司 | A kind of electric inspection process image dense Stereo Matching method based on deep learning |
CN109461180A (en) * | 2018-09-25 | 2019-03-12 | 北京理工大学 | A kind of method for reconstructing three-dimensional scene based on deep learning |
CN110197503A (en) * | 2019-05-14 | 2019-09-03 | 北方夜视技术股份有限公司 | Non-rigid point set method for registering based on enhanced affine transformation |
Non-Patent Citations (2)
Title |
---|
Multi-Temporal Remote Sensing Image Registration Using Deep Convolutional Features; Zhuoqian Yang et al.; IEEE Access; 2018-07-30; pp. 38544-38553 *
Optimization simulation of airborne synthetic aperture radar coastal zone image matching; Liu Xiongfei et al.; Computer Simulation; 2018-06; Vol. 35, No. 6; pp. 9-12 *
Also Published As
Publication number | Publication date |
---|---|
CN110728706A (en) | 2020-01-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110728706B (en) | SAR image fine registration method based on deep learning | |
CN110415199B (en) | Multispectral remote sensing image fusion method and device based on residual learning | |
CN108510532B (en) | Optical and SAR image registration method based on deep convolution GAN | |
CN108564606B (en) | Heterogeneous image block matching method based on image conversion | |
CN110827332B (en) | A Registration Method of SAR Image Based on Convolutional Neural Network | |
CN109255781B (en) | Object-oriented multispectral high-resolution remote sensing image change detection method | |
CN111260576A (en) | Hyperspectral unmixing algorithm based on de-noising three-dimensional convolution self-coding network | |
WO2023123568A1 (en) | Ground penetrating radar image artificial intelligence recognition method and device | |
CN108257154B (en) | Polarimetric SAR image change detection method based on regional information and CNN | |
CN110969088A (en) | Remote sensing image change detection method based on significance detection and depth twin neural network | |
CN107194936B (en) | Hyperspectral image target detection method based on superpixel combined sparse representation | |
CN111062267B (en) | A Dimensionality Reduction Method for Time Series Remote Sensing Images | |
CN110619373B (en) | Infrared multispectral weak target detection method based on BP neural network | |
CN113902646A (en) | Remote sensing image pan-sharpening method based on deep and shallow feature weighted fusion network | |
CN103679193A (en) | FREAK-based high-speed high-density packaging component rapid location method | |
CN110647909A (en) | Remote sensing image classification method based on three-dimensional dense convolution neural network | |
CN113065467A (en) | Satellite image low-coherence region identification method and device based on deep learning | |
CN113344103A (en) | Hyperspectral remote sensing image ground object classification method based on hypergraph convolution neural network | |
WO2024061050A1 (en) | Remote-sensing sample labeling method based on geoscientific information and active learning | |
CN115861546B (en) | Crop geometric perception and three-dimensional phenotype reconstruction method based on nerve volume rendering | |
CN117368877A (en) | Radar image clutter suppression and target detection method based on generation countermeasure learning | |
CN108921884A (en) | Based on the optics and SAR Image registration method, equipment and storage medium for improving SIFT | |
CN107610155A (en) | SAR remote sensing imagery change detection method and devices | |
CN112613371A (en) | Hyperspectral image road extraction method based on dense connection convolution neural network | |
CN118640878B (en) | Topography mapping method based on aviation mapping technology |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |