CN113838104B - Registration method based on multispectral and multimodal image consistency enhancement network - Google Patents
Classifications
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06N3/045—Combinations of networks
- G06T5/94—Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
Abstract
The invention discloses a registration method based on a multispectral and multimodal image consistency enhancement network. The method comprises the following steps: first, an image consistency enhancement network is trained, as follows: a plurality of training image pairs is acquired; the training image pairs are respectively input into the multispectral and multimodal image consistency enhancement network, images are output after forward-propagation processing, and a consistency enhancement loss function is calculated; the network parameters of the image consistency enhancement network are updated with the loss function until a preset condition is met, and the trained image consistency enhancement network is obtained when training finishes. Test images are then enhanced by the image consistency enhancement network and registered with an image registration algorithm based on multi-scale motion estimation. Because the image consistency enhancement network can effectively extract the consistency of multispectral and multimodal images, it achieves excellent results in the subsequent registration.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to a registration method based on a multispectral and multimodal image consistency enhancement network.
Background
Multispectral and multimodal images are important data in the fields of computer vision and computational photography. Because multispectral and multimodal data are often misaligned due to translation or movement of the imaging device, image registration is essential. Owing to the nonlinear brightness and gradient characteristics of multispectral and multimodal images, conventional image registration techniques often fail to achieve good results. A solution that improves image consistency is therefore needed to address the challenges that nonlinear variations pose to image registration.
Currently, the mainstream consistency enhancement algorithms are the local area transform (LAT), the entropy image (EI), the census transform (CT), and the like.
LAT exploits the statistical information of local regions and assumes that the intensity variations of a local region follow a functional relationship
where τ(x, y) = 1 when x = y and 0 otherwise. Such algorithms rely heavily on the richness of the intensity levels of the input image and give poor results on images with little intensity fluctuation.
EI describes a consistent structure using Shannon entropy, which is defined by the following formula
H(X) = −Σ_{i=1}^{n} P(x_i) log P(x_i)
where X is a discrete random variable with possible values {x_1, ..., x_n} and probability distribution P(X). In a local region, EI computes a local histogram to obtain the probabilities of the different gray levels, and from them the entropy, which represents the structural information of the region.
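As a concrete illustration of the entropy image, the following sketch computes the Shannon entropy of a local gray-level histogram at each pixel. The window size and number of gray levels are illustrative assumptions, not values from the patent.

```python
import numpy as np

def entropy_image(img, win=7, levels=16):
    """Entropy image (EI) sketch: per-pixel Shannon entropy of the
    gray-level histogram in a local window. `win` and `levels` are
    illustrative choices, not values from the patent."""
    img = np.asarray(img, dtype=np.float64)
    # Quantize to a small number of gray levels so local histograms are stable.
    q = np.clip((img - img.min()) / (np.ptp(img) + 1e-12) * levels,
                0, levels - 1).astype(int)
    pad = win // 2
    qp = np.pad(q, pad, mode="reflect")
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = qp[i:i + win, j:j + win].ravel()
            p = np.bincount(patch, minlength=levels) / patch.size
            p = p[p > 0]
            out[i, j] = -np.sum(p * np.log2(p))  # Shannon entropy H(X)
    return out
```

A constant image yields zero entropy everywhere, which is exactly the failure mode noted above for inputs with little intensity fluctuation.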
CT encodes a consistent structure with a fixed comparison pattern. The most common CT mode is defined by the following formula
CT typically operates on a 3×3 window, producing an 8-channel feature per pixel as a similarity-enhancing structure, which reduces nonlinear gray-scale and gradient variations.
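The 3×3 census transform described above can be sketched as follows; the neighbor ordering and the darker-than-center convention are illustrative choices, since the patent does not reproduce the exact comparison formula.

```python
import numpy as np

def census_transform(img):
    """Census transform (CT) sketch on a 3x3 window: 8 binary channels per
    pixel, one per neighbor, set to 1 where the neighbor is darker than the
    center. The encoding is invariant to monotonic intensity changes."""
    img = np.asarray(img, dtype=np.float64)
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    channels = [(p[1 + di:1 + di + h, 1 + dj:1 + dj + w] < img).astype(np.uint8)
                for di, dj in offsets]
    return np.stack(channels, axis=-1)  # shape (H, W, 8)
```

Because only the ordering of intensities matters, the output is unchanged under any strictly increasing remapping of the gray levels, which is why CT reduces nonlinear gray-scale variation.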
However, all of the above algorithms extract image features based on statistical assumptions; because these assumptions often lack a solid theoretical basis, they cannot deliver stable results on multispectral and multimodal datasets.
Disclosure of Invention
The invention aims to provide a registration method based on a multispectral and multimodal image consistency enhancement network; the image consistency enhancement network can effectively extract the consistency of multispectral and multimodal images, and therefore achieves excellent results in the subsequent registration.
The invention is realized by adopting the following technical scheme:
a registration method based on multispectral and multimode image consistency enhancement network comprises the following steps:
(1) A plurality of training image pairs is acquired. Each training image pair comprises a first image of a certain band or modality and a second image of another band or modality;
(2) Respectively inputting the training image pairs into an image consistency enhancement network, outputting images after forward propagation processing, and calculating a consistency enhancement module loss function;
(3) Updating network parameters of the image consistency enhancement network by using the loss function until the network parameters meet a preset condition (namely, the loss function is lower than a threshold value or the iteration number reaches an upper limit), and finishing training to obtain a trained image consistency enhancement network;
(4) And acquiring a plurality of test image pairs, enhancing the test image pairs through an image consistency enhancement network, and registering by using an image registration algorithm based on multi-scale motion estimation. The invention can obtain excellent registration results.
In the above technical solution, further, the plurality of training image pairs and test image pairs are acquired as follows: samples of different spectral bands or different modalities are collected; an image of one spectral band or modality is selected as the first image and an image of another spectral band or modality as the second image, one image serving as the reference image and the other as the image to be registered. Sub-images of a preset size are cut from the same positions of the reference image and the image to be registered, respectively, to form a training image pair or a test image pair.
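The pair-construction step above can be sketched as follows; the crop size, pair count, and random sampling are illustrative assumptions, the only requirement taken from the text being that both crops come from the same position.

```python
import numpy as np

def make_pairs(ref, mov, size=64, n=4, seed=0):
    """Training/test pair construction sketch: crop sub-images of a preset
    size from the SAME positions of the reference image and the image to be
    registered. `size`, `n`, and `seed` are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    h, w = ref.shape[:2]
    pairs = []
    for _ in range(n):
        # Random top-left corner shared by both crops.
        y = rng.integers(0, h - size + 1)
        x = rng.integers(0, w - size + 1)
        pairs.append((ref[y:y + size, x:x + size],
                      mov[y:y + size, x:x + size]))
    return pairs
```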
Further, the image consistency enhancing network comprises two modules: a low brightness enhancement module and a consistency enhancement module.
The low-brightness enhancement module is used for enhancing the brightness value of the low-brightness area of the image and balancing the overall brightness of the image, so that the structural information extracted by the consistency enhancement module is more abundant; the consistency enhancing module is used for enhancing the similarity of the structures and relieving the nonlinear intensity and gradient change of the multispectral and multimodal images.
The low-brightness enhancement module processes the image as follows: after the image passes through a lightweight convolutional network, N nonlinear mapping parameters are output, and the image is iteratively mapped N times to obtain the low-brightness-enhanced image. The mapping is calculated by the following formula
I_out = I_in + α·I_in·(1 − I_in) (1)
wherein I_in and I_out are respectively the input image and the output image of each iteration, and α is a nonlinear mapping parameter.
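A minimal sketch of the iterative mapping of formula (1). In the patent the N parameters α are predicted by the lightweight convolutional network; here they are supplied directly as plain numbers.

```python
import numpy as np

def low_light_enhance(img, alphas):
    """Low-brightness enhancement sketch, applying formula (1) iteratively:
    I_out = I_in + alpha * I_in * (1 - I_in). In the patent the alphas come
    from a lightweight CNN; here they are given as scalars."""
    out = np.asarray(img, dtype=np.float64)
    for alpha in alphas:  # one nonlinear mapping parameter per iteration
        out = out + alpha * out * (1.0 - out)
    return out
```

Note that 0 and 1 are fixed points of the curve, so for α > 0 the mapping brightens mid-tones while preserving the black and white endpoints, which is what balances the overall brightness of the image.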
The consistency enhancement module consists of an improved learnable Gabor kernel and a phase consistency structure. The improved learnable Gabor kernel is obtained by point-wise multiplying a Gabor filter with a learnable convolution kernel. The Gabor filter is defined by the following formula
wherein k_{u,v} is the Gabor wave vector, the subscripts u and v are respectively the abscissa and ordinate of the image, σ is the standard deviation of the Gaussian function, related to the bandwidth of the wavelet and taken as a constant here, and z is the value of the image substituted into the filter.
The improved learnable Gabor core can extract multi-scale, multi-directional and odd-even frequency components, and the difficulty of network training is obviously reduced while the network universality is improved.
After the image passes through the improved learnable Gabor kernel, multi-scale, multi-directional, odd and even frequency-component channels are generated; the amplitude spectrum and the phase spectrum of the frequency-component channels of the same scale are calculated respectively by the following formulas
φ s (x,y)=arctan(o s (x,y)/e s (x,y)) (4)
wherein o_s(x, y) is the odd frequency-component channel of the image at point (x, y), e_s(x, y) is the even frequency-component channel, and s is the scale. The local energy of the amplitude spectrum and the corresponding phase spectrum are calculated respectively by the following formulas
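The per-scale amplitude and phase computation around formula (4) can be sketched as follows. The amplitude definition A_s = sqrt(e_s² + o_s²) is the standard companion of the phase formula and is an assumption here, since the patent's amplitude formula is not reproduced in the text; `np.arctan2` is used as a quadrant-aware stand-in for arctan(o/e).

```python
import numpy as np

def amplitude_phase(e_s, o_s):
    """Sketch of the per-scale amplitude and phase spectra from the even
    (e_s) and odd (o_s) frequency-component channels:
    A_s = sqrt(e^2 + o^2) (assumed standard form), phi_s = arctan(o/e)."""
    e_s = np.asarray(e_s, dtype=np.float64)
    o_s = np.asarray(o_s, dtype=np.float64)
    amplitude = np.hypot(e_s, o_s)   # local energy magnitude
    phase = np.arctan2(o_s, e_s)     # arctan(o/e), quadrant-aware
    return amplitude, phase
```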
The phase consistency structure, guided by phase consistency theory, establishes a phase consistency architecture and adopts 3 trainable layers for trainable feature extraction. The phase consistency of the image is initially calculated by the following formula
wherein the bold variables are vectorized representations, e.g. the bold φ_s is the vectorized representation of φ_s(x, y); ⊙ denotes the matrix dot product (element-wise multiplication), ⊘ denotes the matrix dot division (element-wise division), and ζ ≪ 1 is a small quantity that avoids division by zero. Δφ_s is calculated by the following formula
wherein φ_s represents the phase spectrum of the image and φ̄_s represents the average phase spectrum of the image.
Further, the phase consistency structure includes three trainable layers, respectively: a noise estimation layer, an improved phase deviation estimation layer, and a frequency abundance estimation layer.
The calculation of equation (7) is very sensitive to noise, and the noise estimation layer is used to estimate the image noise. The noise of the local energy follows a Rayleigh distribution, so the noise distribution is calculated by the following formula
wherein N_s represents the number of frequency scales, τ can be derived from the local energy, and α is a learnable quantity. The energy term of formula (7) minus the noise term T is then activated by the ReLU function to remove the regions swamped by noise.
The improved phase deviation estimation layer introduces a learnable quantity β to control the saliency of the output structure; the improved phase deviation Δφ′_s is calculated by the following formula
The calculation of equation (7) does not take into account the number of scales at which the phase is consistent; the more scales, the higher the weight of the calculated consistency should be. The weight is calculated by the following formula
where γ is a learnable quantity and S represents the total number of scales.
The final phase consistency is calculated from the following formula
wherein the subscript s denotes the scale, the subscript o denotes the direction, and W is repeated and expanded to the same dimensions as the other terms of the formula.
Further, the consistency enhancement module loss function is calculated from the following formula
wherein SSIM is the structural similarity measure, N_o is the total number of directions, P_1 and P_2 are respectively the reference image and the image to be registered, ∇_l with l ∈ {x, y} represents the gradient in the horizontal and vertical directions, and c is a hyper-parameter used to balance consistency enhancement with structure preservation.
Further, registration is performed with the image registration method based on multi-scale motion estimation, specifically: the sum of squared differences (SSD) is used as the registration-result metric and calculated by the following formula
wherein I_R denotes the reference image, I_F denotes the image to be registered, Ω(a) denotes the effective region where the two overlap, a denotes the parameters of the affine transformation, â is the optimal value, and p is an element in the effective overlap region. The hierarchical motion-parameter estimation based on multi-scale motion estimation uses a Gaussian image pyramid.
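A minimal coarse-to-fine sketch in the spirit of the multi-scale SSD registration above. It uses a simple 2×2 box pyramid rather than a true Gaussian pyramid and estimates only a translation rather than full affine parameters, so it illustrates the hierarchical search, not the patent's exact algorithm.

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences between two equal-size images."""
    return float(np.sum((a - b) ** 2))

def downsample(img):
    """One pyramid level: 2x2 box average + decimation (Gaussian-pyramid
    stand-in for this sketch)."""
    img = img[: img.shape[0] // 2 * 2, : img.shape[1] // 2 * 2]
    return (img[0::2, 0::2] + img[1::2, 0::2]
            + img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

def register_translation(ref, mov, levels=3, radius=2):
    """Coarse-to-fine translation search minimizing SSD; the estimate from
    each coarse level is doubled and refined at the next finer level."""
    pyr = [(ref, mov)]
    for _ in range(levels - 1):
        r, m = pyr[-1]
        pyr.append((downsample(r), downsample(m)))
    dy = dx = 0
    for r, m in reversed(pyr):       # coarsest level first
        dy, dx = 2 * dy, 2 * dx      # upscale the current estimate
        best = None
        for ddy in range(-radius, radius + 1):
            for ddx in range(-radius, radius + 1):
                cost = ssd(r, np.roll(m, (dy + ddy, dx + ddx), axis=(0, 1)))
                if best is None or cost < best[0]:
                    best = (cost, dy + ddy, dx + ddx)
        _, dy, dx = best
    return dy, dx
```

The small per-level search radius is what makes the hierarchical scheme efficient: large displacements are resolved cheaply at the coarse levels and only refined at full resolution.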
The beneficial effects of the invention are as follows:
the registration method based on the multispectral and multimode image consistency enhancement network provides and realizes the image consistency enhancement network, can effectively reduce the influence of nonlinear intensity and gradient change of the multispectral and multimode images on registration, and combines the proposed image registration algorithm based on layered motion estimation to obtain an excellent registration effect.
Drawings
FIG. 1 is a flow chart of image enhancement and registration using a registration method of a multispectral and multimodal image consistency enhancement network.
Fig. 2 is a low brightness enhancement module in an image consistency enhancement network.
Fig. 3 is a consistency enhancement module in an image consistency enhancement network.
Fig. 4 shows the result of enhancement of an image by an image consistency enhancement network, (a) multispectral image 1, (b) enhancement of multispectral image 1, (c) multispectral image 2, and (d) enhancement of multispectral image 2.
Detailed Description
The following describes the embodiments of the present invention further with reference to the accompanying drawings.
As shown in fig. 1, the training method and registration of the present invention includes the following steps:
(1) A plurality of training image pairs is acquired. Each training image pair comprises a first image of a certain band or modality and a second image of another band or modality;
(2) Respectively inputting the training image pairs into an image consistency enhancement network, outputting images after forward propagation processing, and calculating a consistency enhancement module loss function;
(3) Updating network parameters of the image consistency enhancement network by using the loss function until the network parameters meet the preset conditions, and obtaining a trained image consistency enhancement network after training is finished;
(4) And acquiring a plurality of test image pairs, and enhancing the first image and the second image through an image consistency enhancing network. And registering by using an image registration method based on multi-scale motion estimation. The invention can obtain excellent registration results.
As shown in fig. 2, the low-brightness enhancement module processes the image as follows: after the image passes through a lightweight convolutional network, N nonlinear mapping parameters are output, and the image is iteratively mapped N times to obtain the low-brightness-enhanced image, where N is preferably 8. The mapping is calculated by
I_out = I_in + α·I_in·(1 − I_in)
wherein I_in and I_out are respectively the input image and the output image of each iteration, and α is a nonlinear mapping parameter.
The consistency enhancement module, shown in fig. 3, is composed of two parts: an improved learnable Gabor kernel and a phase consistency structure. The improved learnable Gabor kernel is obtained by point-wise multiplying a Gabor filter with a learnable convolution kernel. The Gabor filter is defined by the following formula:
wherein k_{u,v} is the Gabor wave vector, the subscripts u and v are respectively the abscissa and ordinate of the image, σ is the standard deviation of the Gaussian function, related to the bandwidth of the wavelet and taken as a constant here, and z is the value of the image substituted into the filter.
Preferably, the number of directions N_o of the multidirectional Gabor filter is taken as 6.
The images pass through the Gabor kernel to generate multi-scale, multi-directional, odd and even frequency-component channels.
The phase consistency structure, guided by phase consistency theory, establishes a phase consistency architecture and adopts 3 trainable layers for trainable feature extraction; the 3 trainable layers are respectively: a noise estimation layer, an improved phase deviation estimation layer, and a frequency abundance estimation layer. The final phase consistency is calculated by:
wherein the subscript s denotes the scale, o denotes the direction, and W is repeated and expanded to the same dimensions as the other terms of the formula.
The loss function of the consistency enhancement network is calculated by:
wherein ∇_l with l ∈ {x, y} represents the gradient in the horizontal and vertical directions, and c is the hyper-parameter used to balance consistency enhancement with structure preservation. Preferably, c is 0.8.
The change in the enhanced image compared to the original is shown in fig. 4. As the figure shows, after low-brightness enhancement and consistency enhancement, two images with a large brightness difference yield very similar consistency structures. The final registration results on the multispectral dataset are shown in Table 1, which compares the image consistency enhancement network combined with the hierarchical-motion-estimation registration method against mainstream algorithms. The error of the network with only the consistency enhancement module is markedly lower than that of the traditional consistency enhancement algorithms; after the low-brightness enhancement module is added, several values improve further over the network without it, showing that the combination of the two has a positive influence on image registration.
Table 1 registration effect of algorithms on images
The above description covers only embodiments of the present invention and should not be construed as limiting its scope; equivalent changes that would be apparent to those skilled in the art on the basis of the invention likewise fall within the scope of protection of the invention.
Claims (6)
1. The registration method based on the multispectral and multimodal image consistency enhancement network is characterized by comprising the following steps of:
(1) Acquiring a plurality of training image pairs; each training image pair comprises a first image of a certain band or modality and a second image of another band or modality;
(2) Respectively inputting the training image pairs into an image consistency enhancement network, outputting images after forward propagation processing, and calculating a consistency enhancement module loss function; the image consistency enhancement network comprises a low-brightness enhancement module and a consistency enhancement module, wherein the low-brightness enhancement module is used for improving the brightness value of a low-brightness area of an image and balancing the overall brightness of the image; the consistency enhancing module is used for enhancing the similarity of the structures and relieving the nonlinear intensity and gradient change of the multispectral and multimodal images;
(3) Updating network parameters of the image consistency enhancement network by using the loss function until the network parameters meet the preset conditions, and obtaining a trained image consistency enhancement network after training is finished;
(4) Acquiring a plurality of test image pairs, enhancing the test image pairs through an image consistency enhancement network, and registering by using an image registration algorithm based on multi-scale motion estimation;
the registration with the image registration method based on multi-scale motion estimation specifically comprises: using the sum of squared differences (SSD) as the registration-result metric, calculated by the following formula
wherein I_R denotes the reference image, I_F denotes the image to be registered, Ω(a) denotes the effective region where the two overlap, a denotes the parameters of the affine transformation, and â is the optimal value; the hierarchical motion-parameter estimation based on multi-scale motion estimation uses a Gaussian image pyramid.
2. The registration method based on a multispectral and multimodal image consistency enhancement network of claim 1, wherein the step of acquiring a plurality of training image pairs and test image pairs is as follows:
collecting samples of different spectral bands or different modes, selecting one spectral band or mode image as a first image, selecting another spectral band or mode image as a second image, and taking one image as a reference image and the other image as an image to be registered; and cutting out sub-images with preset sizes from the same positions of the reference image and the image to be registered respectively, and forming a training image pair or a test image pair.
3. The registration method based on the multispectral and multimodal image consistency enhancement network according to claim 1, wherein the low-brightness enhancement module processes the image as follows:
after the image passes through a lightweight convolutional network, N nonlinear mapping parameters are output, and the image is iteratively mapped N times to obtain the low-brightness-enhanced image; the mapping is calculated by the following formula
I out =I in +αI in (1-I in ) (1)
wherein I_in and I_out are respectively the input image and the output image of each iteration, and α is a nonlinear mapping parameter.
4. The registration method based on the multispectral and multimodal image consistency enhancement network according to claim 1, wherein the consistency enhancement module comprises an improved learnable Gabor kernel and a phase consistency structure; the improved learnable Gabor kernel is obtained by point-wise multiplying a Gabor filter with a learnable convolution kernel; the Gabor filter is defined by the following formula
wherein k_{u,v} is the Gabor wave vector, the subscripts u and v are respectively the abscissa and ordinate of the image, σ is the standard deviation of the Gaussian function, related to the bandwidth of the wavelet and taken as a constant here, and z is the value of the image substituted into the filter;
after the image passes through the improved learnable Gabor kernel, multi-scale, multi-directional, odd and even frequency-component channels are generated; the amplitude spectrum and the phase spectrum of the frequency-component channels of the same scale are calculated respectively by the following formulas
φ s (x,y)=arctan(o s (x,y)/e s (x,y)) (4)
wherein o_s(x, y) is the odd frequency-component channel of the image at point (x, y), e_s(x, y) is the even frequency-component channel, and s is the scale;
the local energy of the magnitude spectrum and the corresponding phase spectrum are calculated by the following formulas respectively
The phase consistency structure calculates the phase consistency of the image according to formula (7)
wherein the bold variables are vectorized representations, ⊙ denotes the matrix dot product, ⊘ denotes the matrix dot division, and ζ ≪ 1 is a small quantity that avoids division by zero; the phase difference Δφ_s is calculated by the following formula
wherein φ_s represents the phase spectrum of the image and φ̄_s represents the average phase spectrum of the image.
5. The registration method based on the multispectral and multimodal image consistency enhancement network according to claim 4, wherein the phase consistency structure comprises three trainable layers, namely: a noise estimation layer, an improved phase deviation estimation layer, and a frequency abundance estimation layer;
the noise estimation layer is used for estimating image noise, and the noise of local energy has Rayleigh distribution, so the noise distribution is calculated by the following formula
wherein N_s represents the number of frequency scales, τ can be derived from the local energy, and α is a learnable quantity; the energy term of formula (7) minus the noise term T is activated by the ReLU function to remove the regions swamped by noise;
the improved phase deviation estimation layer introduces a leavable amount beta to control the significance of the image output structure, and the improved phase deviation delta phi 'is provided' s Calculated by the following formula
taking into account the number of scales at which the phase is consistent, the weight W of phase consistency is calculated by the following formula
wherein γ is a learnable quantity and S represents the total number of scales;
the final phase consistency is calculated from the following formula
wherein the subscript s denotes the scale, the subscript o denotes the direction, and W is repeated and expanded to the same dimensions as the other terms of the formula.
6. The registration method based on multispectral and multimodal image consistency enhancement network as claimed in claim 1, wherein said consistency enhancement module loss function is calculated by the following formula
wherein SSIM is the structural similarity measure, N_o is the total number of directions, P_1 and P_2 are respectively the reference image and the image to be registered, ∇_l with l ∈ {x, y} represents the gradient in the horizontal and vertical directions, and c is the hyper-parameter used to balance consistency enhancement with structure preservation.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110890638.7A CN113838104B (en) | 2021-08-04 | 2021-08-04 | Registration method based on multispectral and multimodal image consistency enhancement network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110890638.7A CN113838104B (en) | 2021-08-04 | 2021-08-04 | Registration method based on multispectral and multimodal image consistency enhancement network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113838104A CN113838104A (en) | 2021-12-24 |
CN113838104B true CN113838104B (en) | 2023-10-27 |
Family
ID=78963181
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110890638.7A Active CN113838104B (en) | 2021-08-04 | 2021-08-04 | Registration method based on multispectral and multimodal image consistency enhancement network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113838104B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117670753B (en) * | 2024-01-30 | 2024-06-18 | 浙江大学金华研究院 | Infrared image enhancement method based on depth multi-brightness mapping non-supervision fusion network |
CN118426736B (en) * | 2024-07-03 | 2024-09-20 | 河海大学 | Bionic compound eye type multispectral target detection system and method for severe environment |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107833280A (en) * | 2017-11-09 | 2018-03-23 | 交通运输部天津水运工程科学研究所 | A kind of outdoor moving augmented reality method being combined based on geographic grid with image recognition |
RU2706891C1 (en) * | 2019-06-06 | 2019-11-21 | Samsung Electronics Co., Ltd. | Method of generating a common loss function for training a convolutional neural network for converting an image into an image with drawn parts and a system for converting an image into an image with drawn parts
CN110533620A (en) * | 2019-07-19 | 2019-12-03 | 西安电子科技大学 | Hyperspectral and panchromatic image fusion method based on AAE spatial feature extraction |
CN111105432A (en) * | 2019-12-24 | 2020-05-05 | 中国科学技术大学 | Unsupervised end-to-end driving environment perception method based on deep learning |
CN112288663A (en) * | 2020-09-24 | 2021-01-29 | 山东师范大学 | Infrared and visible light image fusion method and system |
CN112330724A (en) * | 2020-10-15 | 2021-02-05 | 贵州大学 | Unsupervised multi-modal image registration method based on integrated attention enhancement |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11158069B2 (en) * | 2018-12-11 | 2021-10-26 | Siemens Healthcare Gmbh | Unsupervised deformable registration for multi-modal images |
- 2021-08-04 CN CN202110890638.7A patent/CN113838104B/en active Active
Non-Patent Citations (2)
Title |
---|
Stepwise selection method of representative colors in spectral reflectance reconstruction; Shen Huiliang; Zhang Zhechao; Xin Haozhong;; Spectroscopy and Spectral Analysis (Issue 04); full text *
Remote sensing image fusion based on multispectral image super-resolution processing; Yang Chao; Yang Bin; Huang Guoyu;; Laser & Optoelectronics Progress (Issue 02); full text *
Also Published As
Publication number | Publication date |
---|---|
CN113838104A (en) | 2021-12-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109191382B (en) | Image processing method, device, electronic equipment and computer readable storage medium | |
Thakur et al. | State‐of‐art analysis of image denoising methods using convolutional neural networks | |
CN110119780B (en) | Hyper-spectral image super-resolution reconstruction method based on generation countermeasure network | |
Zhussip et al. | Training deep learning based image denoisers from undersampled measurements without ground truth and without image prior | |
CN107358293B (en) | Neural network training method and device | |
EP3716198A1 (en) | Image reconstruction method and device | |
Lin et al. | Hyperspectral image denoising via matrix factorization and deep prior regularization | |
Ren et al. | Single image super-resolution via adaptive high-dimensional non-local total variation and adaptive geometric feature | |
Chen et al. | Convolutional neural network based dem super resolution | |
CN107146228B (en) | Supervoxel generation method for brain magnetic resonance images based on prior knowledge | |
CN112634149B (en) | Point cloud denoising method based on graph convolution network | |
CN113838104B (en) | Registration method based on multispectral and multimodal image consistency enhancement network | |
CN109146061A (en) | Processing method and apparatus for neural network models | |
Ren et al. | Enhanced non-local total variation model and multi-directional feature prediction prior for single image super resolution | |
CN109949217B (en) | Video super-resolution reconstruction method based on residual learning and implicit motion compensation | |
Nizami et al. | Natural scene statistics model independent no-reference image quality assessment using patch based discrete cosine transform | |
CN107301631B (en) | SAR image speckle reduction method based on non-convex weighted sparse constraint | |
CN116385281A (en) | Remote sensing image denoising method based on real noise model and generated countermeasure network | |
CN114005046A (en) | Remote sensing scene classification method based on Gabor filter and covariance pooling | |
Luo et al. | A fast denoising fusion network using internal and external priors | |
CN114049491A (en) | Fingerprint segmentation model training method, fingerprint segmentation device, fingerprint segmentation equipment and fingerprint segmentation medium | |
CN117853596A (en) | Unmanned aerial vehicle remote sensing mapping method and system | |
CN114565772B (en) | Method and device for extracting set features, electronic equipment and storage medium | |
CN114022362B (en) | Image super-resolution method based on pyramid attention mechanism and symmetric network | |
CN117095217A (en) | Multi-stage comparative knowledge distillation process |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||