CN112734646B - Image super-resolution reconstruction method based on feature channel division - Google Patents
Image super-resolution reconstruction method based on feature channel division
- Publication number
- CN112734646B CN112734646B CN202110070247.0A CN202110070247A CN112734646B CN 112734646 B CN112734646 B CN 112734646B CN 202110070247 A CN202110070247 A CN 202110070247A CN 112734646 B CN112734646 B CN 112734646B
- Authority
- CN
- China
- Prior art keywords
- module
- image
- resolution
- feature
- super
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4053—Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4046—Scaling of whole images or parts thereof, e.g. expanding or contracting using neural networks
Abstract
The invention belongs to the technical field of computer vision and relates to an image super-resolution reconstruction method based on feature channel division, comprising the following steps: constructing an image super-resolution reconstruction model comprising a feature extraction module, a nonlinear feature mapping module and a reconstruction module; the feature extraction module extracts shallow features; the nonlinear feature mapping module extracts deep features and sends them to the reconstruction module; the reconstruction module extracts information from the input deep features, generates a residual image through sub-pixel convolution, and adds it to the low-resolution image upsampled by nearest-neighbor interpolation to obtain the high-resolution image; constructing a training set; training the image super-resolution reconstruction model; inputting the image to be processed into the trained image super-resolution reconstruction model and outputting the reconstructed high-resolution image. The method effectively reduces the parameter count and computation, better recovers image textures, and improves image super-resolution reconstruction performance.
Description
Technical field:
the invention belongs to the technical field of computer vision, and relates to an image super-resolution reconstruction method based on feature channel division.
Background art:
Image super-resolution aims at converting a given low-resolution image with coarse details into a corresponding high-resolution image with better visual quality and fine details. Image super-resolution is an important image processing technology in computer vision and has important applications in many other fields, such as object detection in scenes (especially of small objects), face recognition in surveillance video, medical imaging, and astronomical imaging.
In research on image super-resolution, traditional reconstruction methods mainly rely on constructed constraint terms and on the accuracy of registration between images to achieve a reconstruction effect, and are therefore unsuitable for super-resolution at larger magnifications. The advent of deep learning methods has solved many bottleneck problems of traditional super-resolution techniques, and the field has progressed rapidly in recent years. For example, Chinese patent CN201610545884.8 discloses a single-image super-resolution reconstruction method combining deep learning and gradient transformation, mainly comprising the following steps: upsampling the input low-resolution image with a deep-learning-based super-resolution method to obtain an upsampled image; extracting gradients from the upsampled image with a gradient operator; transforming the extracted gradients with a deep convolutional neural network; establishing a reconstruction cost function with the input low-resolution image and the transformed gradients as constraints; and optimizing the reconstruction cost function by gradient descent to obtain the final high-resolution output image. Chinese patent CN201811576216.7 discloses a fast image super-resolution reconstruction method based on deep learning, with the following specific steps: selecting image training and test sets, and applying the feature extraction, feature refinement and sub-pixel upsampling operations of a nested deep neural network to the low-resolution image to obtain the image's high-resolution detail-residual information; applying transposed convolution to the low-resolution image to obtain its high-resolution spatial low-frequency feature information; combining the high-resolution detail-residual information with the high-resolution spatial low-frequency feature information to obtain an estimated high-resolution reconstruction result; measuring the loss between the predicted high-resolution reconstruction and the high-resolution image blocks; updating the network weights with the Adam optimizer to obtain a trained network model; and inputting a low-resolution image into the trained network model to obtain the high-resolution reconstructed image. CN201710301990.6 discloses a remote-sensing-image super-resolution reconstruction method based on a content-aware deep learning network: it proposes a comprehensive metric and computation method for image content complexity, classifies sample images by content complexity, constructs and trains deep GAN network models of three complexity levels (high, medium and low), and then selects the corresponding network for reconstruction according to the content complexity of the input image to be super-resolved.
Most presently disclosed algorithms increase the number of network layers and parameters to obtain a better image super-resolution effect, and do not fully extract the information between features, so room for further improvement remains.
Summary of the invention:
The invention aims to overcome the defects of the prior art by designing an image super-resolution reconstruction method based on feature channel division that fully utilizes the feature information in the neural network, improves image super-resolution reconstruction performance, and better recovers image textures.
In order to achieve the above purpose, the image super-resolution reconstruction method based on feature channel division according to the present invention comprises the following specific steps:
S1, constructing an image super-resolution reconstruction model, wherein the image super-resolution reconstruction model comprises a feature extraction module, a nonlinear feature mapping module and a reconstruction module connected in series; the feature extraction module extracts shallow features from the input low-resolution image as the input of the nonlinear feature mapping module; the nonlinear feature mapping module extracts information from the input shallow features to obtain deep features and sends them to the reconstruction module; the reconstruction module extracts information from the input deep features, generates a residual image through sub-pixel convolution, and adds it to the low-resolution image upsampled by nearest-neighbor interpolation to obtain the high-resolution image;
s2, constructing a training set;
s3, training the image super-resolution reconstruction model, and training the image super-resolution reconstruction model in the step S1 by using the training set in the step S2, wherein the training comprises the steps of setting a loss function, a weight updating algorithm and a learning rate;
s4, inputting the image to be processed into a trained image super-resolution reconstruction model, and outputting to obtain a reconstructed high-resolution image.
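The reconstruction step of S1 (nearest-neighbor upsampling of the low-resolution input plus a learned residual) can be sketched in pure Python for a single channel; the residual here is a zero placeholder standing in for the output of the sub-pixel convolution branch, and all function names are illustrative, not from the patent:

```python
def nearest_neighbor_upsample(img, scale):
    """Upsample a 2-D image (list of rows) by repeating each pixel scale x scale times."""
    out = []
    for row in img:
        up_row = [px for px in row for _ in range(scale)]
        out.extend([up_row[:] for _ in range(scale)])
    return out

def reconstruct(lr_image, residual, scale):
    """High-resolution output = nearest-neighbor upsampled LR image + learned residual."""
    up = nearest_neighbor_upsample(lr_image, scale)
    return [[u + r for u, r in zip(up_row, res_row)]
            for up_row, res_row in zip(up, residual)]

lr = [[1, 2],
      [3, 4]]
res = [[0.0] * 4 for _ in range(4)]  # placeholder residual (zeros)
hr = reconstruct(lr, res, scale=2)
```

With a zero residual the output is simply the nearest-neighbor upsampled input; in the trained model the residual carries the recovered high-frequency texture.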
Furthermore, the feature extraction module extracts shallow features from the input low-resolution image by adopting 3×3 convolution, and then inputs the shallow features into the nonlinear feature mapping module, and simultaneously converts the RGB channels into n channels.
Further, the nonlinear feature mapping module is composed of n (an integer greater than 1) feature channel dividing modules connected in series, wherein the output features of each feature channel dividing module serve as the input of the next, and the output features of the last feature channel dividing module serve as the input of the reconstruction module.
Further, each feature channel dividing module processes its input features using feature channel division to fully utilize the feature information. Its main structure comprises 1 initial feature processing module, 3 channel dividing sub-modules, 3 residual blocks, 1 aggregation module and 1 channel attention module. The initial feature processing module applies a 3×3 convolution to the input shallow features and passes the result to the first channel dividing sub-module. Each channel dividing sub-module divides the input image features along the channel dimension into a feature to be refined, F_reserved, and a coarse feature to be further processed, F_coarse; F_reserved is refined with a 1×1 residual block to obtain the refined feature F_refined, which is output to the aggregation module, while F_coarse passes through a 3×3 convolution and serves as the input of the next channel dividing sub-module. The F_coarse produced by the last channel dividing sub-module is refined by a 3×3 convolution into F_refined and output to the aggregation module. The aggregation module aggregates all F_refined features and, after a 1×1 convolution, feeds them into the channel attention module, which refines the input features and outputs the result.
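The channel division performed by each sub-module (a 1:3 reserved/coarse split, per the embodiment described later) can be sketched as follows; the width of 64 channels is an assumption for illustration, as the patent does not fix the feature width:

```python
def split_channels(features, ratio=(1, 3)):
    """Split a list of channel feature maps into (reserved, coarse) at the given ratio.
    'reserved' goes to the 1x1 residual refinement branch; 'coarse' continues on."""
    total = ratio[0] + ratio[1]
    n_reserved = len(features) * ratio[0] // total
    return features[:n_reserved], features[n_reserved:]

channels = [f"ch{i}" for i in range(64)]   # 64 channels assumed for illustration
f_reserved, f_coarse = split_channels(channels)
# 1:3 split of 64 channels -> 16 reserved for refinement, 48 passed on as coarse
```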
Further, step S2 specifically comprises: cropping the high-resolution images in the training data into 192×192 patches and downscaling them by the super-resolution factor to obtain the low-resolution images; meanwhile, the training data are rotated and flipped for data enhancement, yielding the training set.
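The rotation-and-flip enhancement of step S2 is commonly realized as the 8 rotation/flip variants of each patch; a pure-Python sketch under that assumption (the patent does not state the exact number of variants):

```python
def rotate90(img):
    """Rotate a 2-D image (list of rows) 90 degrees clockwise."""
    return [list(col) for col in zip(*img[::-1])]

def flip_h(img):
    """Flip a 2-D image horizontally."""
    return [row[::-1] for row in img]

def augment(img):
    """Return the 8 rotation/flip variants of a training patch."""
    variants = []
    cur = img
    for _ in range(4):          # 0, 90, 180, 270 degree rotations
        variants.append(cur)
        variants.append(flip_h(cur))  # plus a horizontal flip of each
        cur = rotate90(cur)
    return variants

patch = [[1, 2],
         [3, 4]]
augmented = augment(patch)
```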
Further, the specific training process in step S3 is as follows:
s31, a pre-training model: firstly training a super-resolution model with the magnification of 2, and initializing a network by using model parameters with the magnification of 2 for a model with the magnification of 3 or 4;
S32, setting a loss function: using the L1 norm loss function, the formula is:

L(θ) = (1/N) · Σ_{i=1}^{N} ‖ I_SR^(i) − I_HR^(i) ‖_1

where θ represents the parameters of the model, I_SR^(i) represents the high-resolution image reconstructed after the i-th low-resolution image is input into the model, I_HR^(i) represents the corresponding real (ground-truth) image, and ‖·‖_1 represents the L1 loss;
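The L1 objective above amounts to the mean absolute error between the reconstruction and the ground truth; a minimal sketch:

```python
def l1_loss(pred, target):
    """Mean absolute error between a reconstructed image and the ground-truth image."""
    n = 0
    total = 0.0
    for p_row, t_row in zip(pred, target):
        for p, t in zip(p_row, t_row):
            total += abs(p - t)
            n += 1
    return total / n

pred   = [[1.0, 2.0], [3.0, 4.0]]
target = [[1.0, 2.5], [2.0, 4.0]]
loss = l1_loss(pred, target)  # (0 + 0.5 + 1.0 + 0) / 4 = 0.375
```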
S33, weight updating algorithm: the Adam algorithm is used as the network optimizer, with initial parameters β1 = 0.9, β2 = 0.99, ε = 0.99;
S34, setting the learning rate: the initial learning rate is set to 2×10^−4 and the learning rate is halved every 200 batches;
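The step schedule of S34 (halving every 200 batches from 2×10^−4) can be expressed as:

```python
def learning_rate(batch, initial=2e-4, halve_every=200):
    """Step-decay schedule: the learning rate is halved every `halve_every` batches."""
    return initial * 0.5 ** (batch // halve_every)

# 2e-4 for batches 0..199, 1e-4 for 200..399, and so on down to 1.25e-5 by batch 999
```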
S35, the model converges after training for 1000 batches, yielding the optimal image super-resolution reconstruction system.
Compared with the prior art, the method uses feature channel division to solve the problem that existing image super-resolution algorithms do not fully utilize the information between features; it effectively reduces the parameter count and computation, better recovers image textures, and improves image super-resolution reconstruction performance.
Description of the drawings:
fig. 1 is a specific flow diagram of an image super-resolution reconstruction method based on feature channel division according to the present invention.
Fig. 2 is a schematic diagram of the overall structure of the image super-resolution reconstruction model according to the present invention.
Fig. 3 is a schematic diagram of a network structure of an image super-resolution reconstruction model according to the present invention.
Fig. 4 is a schematic diagram of a network structure of a characteristic channel dividing module according to the present invention.
The specific embodiment is as follows:
the invention will now be described in more detail with reference to the following examples and with reference to the accompanying drawings.
Example 1:
The image super-resolution reconstruction method based on feature channel division according to this embodiment comprises the following steps:
S1, constructing an image super-resolution reconstruction model, wherein the image super-resolution reconstruction model comprises a feature extraction module, a nonlinear feature mapping module and a reconstruction module connected in series; the feature extraction module converts the low-resolution image from RGB channels to n channels and extracts shallow features as the input of the nonlinear feature mapping module; the nonlinear feature mapping module extracts information from the input shallow features to obtain deep features and sends them to the reconstruction module; the reconstruction module extracts information from the input deep features, generates a residual image through sub-pixel convolution, and adds it to the low-resolution image upsampled by nearest-neighbor interpolation to obtain the high-resolution image.
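The sub-pixel convolution used by the reconstruction module ends with a pixel-shuffle rearrangement that folds r² feature channels into an r×-larger spatial grid; a pure-Python sketch of that rearrangement (the channel-to-position ordering shown is one common convention, not specified by the patent):

```python
def pixel_shuffle(channels, scale):
    """Rearrange scale**2 channel maps of size H x W into one (H*scale) x (W*scale) map.
    channels[k][y][x] goes to output[y*scale + k // scale][x*scale + k % scale]."""
    h = len(channels[0])
    w = len(channels[0][0])
    out = [[0] * (w * scale) for _ in range(h * scale)]
    for k, fmap in enumerate(channels):
        dy, dx = k // scale, k % scale
        for y in range(h):
            for x in range(w):
                out[y * scale + dy][x * scale + dx] = fmap[y][x]
    return out

# four 1x2 channel maps -> one 2x4 map at scale 2
chans = [[[1, 2]], [[3, 4]], [[5, 6]], [[7, 8]]]
hr = pixel_shuffle(chans, 2)
```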
The feature extraction module extracts shallow features from an input low-resolution image by adopting 3X 3 convolution, and then inputs the shallow features into the nonlinear feature mapping module, and simultaneously converts RGB channels into n channels.
The nonlinear feature mapping module consists of 6 feature channel dividing modules connected in series; the output features of each feature channel dividing module serve as the input of the next, and the output features of the last feature channel dividing module serve as the input of the reconstruction module. Each feature channel dividing module processes its input features using feature channel division to fully utilize the feature information; its main structure comprises 1 initial feature processing module, 3 channel dividing sub-modules, 3 residual blocks, 1 aggregation module and 1 channel attention module. The initial feature processing module applies a 3×3 convolution to the input shallow features and passes the result to the first channel dividing sub-module. Each channel dividing sub-module divides the input image features along the channel dimension into a feature to be refined, F_reserved, and a coarse feature to be further processed, F_coarse; F_reserved is refined with a 1×1 residual block to obtain the refined feature F_refined, which is output to the aggregation module, while F_coarse passes through a 3×3 convolution and serves as the input of the next channel dividing sub-module. The F_coarse produced by the last channel dividing sub-module is refined by a 3×3 convolution into F_refined and output to the aggregation module. The aggregation module aggregates all F_refined features and, after a 1×1 convolution, feeds them into the channel attention module, which refines the input features and outputs the result.
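The patent does not specify the internal form of the channel attention module; a common squeeze-and-excitation-style realization (global average pooling per channel followed by a sigmoid gate; the fully-connected layers of full SE are omitted here for brevity) would look like:

```python
import math

def channel_attention(features):
    """Simplified channel attention, assumed (not from the patent): global average
    pool each channel, gate it with a sigmoid, and rescale that channel's map."""
    out = []
    for fmap in features:
        pooled = sum(sum(row) for row in fmap) / (len(fmap) * len(fmap[0]))
        gate = 1.0 / (1.0 + math.exp(-pooled))  # sigmoid of the pooled statistic
        out.append([[v * gate for v in row] for row in fmap])
    return out

# two channels: a flat zero map (gate 0.5) and a strongly active map (gate ~1)
features = [[[0.0, 0.0]], [[10.0, 10.0]]]
out = channel_attention(features)
```

The effect is that channels with stronger pooled responses are passed through nearly unchanged while weak channels are attenuated.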
S2, constructing a training set: the high-resolution images in the training data are cropped into 192×192 patches and downscaled by the super-resolution factor to obtain the low-resolution images; meanwhile, the training data are rotated and flipped for data enhancement, yielding the training set.
S3, training the image super-resolution reconstruction model, and training the image super-resolution reconstruction model in the step S1 by using the training set in the step S2, wherein the specific steps are as follows:
s31, a pre-training model: firstly training a super-resolution model with the magnification of 2, and initializing a network by using model parameters with the magnification of 2 for a model with the magnification of 3 or 4;
S32, setting a loss function: using the L1 norm loss function, the formula is:

L(θ) = (1/N) · Σ_{i=1}^{N} ‖ I_SR^(i) − I_HR^(i) ‖_1

where θ represents the parameters of the model, I_SR^(i) represents the high-resolution image reconstructed after the i-th low-resolution image is input into the model, I_HR^(i) represents the corresponding real (ground-truth) image, and ‖·‖_1 represents the L1 loss.
S33, weight updating algorithm: the Adam algorithm is used as the network optimizer, with initial parameters β1 = 0.9, β2 = 0.99, ε = 0.99;
S34, setting the learning rate: the initial learning rate is set to 2×10^−4 and the learning rate is halved every 200 batches;
S35, the model converges after training for 1000 batches, yielding the optimal image super-resolution reconstruction system.
S4, inputting the image to be processed into a trained image super-resolution reconstruction model, and outputting to obtain a reconstructed high-resolution image.
The workflow of the image super-resolution reconstruction model related to the embodiment is as follows:
S11, shallow features are extracted from the input low-resolution image by a 3×3 convolution, and the RGB channels are converted into n channels;
S12: the shallow features are input into the nonlinear feature mapping module, which is composed of 6 feature channel dividing modules connected in series; the output features of each feature channel dividing module serve as the input of the next, and the output features of the final feature channel dividing module serve as the input of the reconstruction module; the workflow of each feature channel dividing module is as follows:
S121, the initial feature passes through a 3×3 convolution and is then divided along the channel dimension into F_reserved_1 and F_coarse_1, with a division ratio of 1:3;
S122, F_reserved_1 is refined by a 1×1 residual block to obtain F_refined_1; F_coarse_1 passes through a 3×3 convolution and is divided along the channel dimension into F_reserved_2 and F_coarse_2, with a division ratio of 1:3;
S123, F_reserved_2 is refined by a 1×1 residual block to obtain F_refined_2; F_coarse_2 passes through a 3×3 convolution and is divided along the channel dimension into F_reserved_3 and F_coarse_3, with a division ratio of 1:3;
S124, F_reserved_3 is refined by a 1×1 residual block to obtain F_refined_3; F_coarse_3 passes through a 3×3 convolution to obtain F_refined_4;
S125: F_refined_1, F_refined_2, F_refined_3 and F_refined_4 are aggregated; the aggregated features pass through a 1×1 convolution and are then refined by the channel attention module;
S13: the feature channel dividing modules are connected in series, the final output feature of each module serving as the input of the next feature channel dividing module; the output of the final feature channel dividing module serves as the input of the reconstruction module.
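Tracing the 1:3 splits of S121–S124 through one feature channel dividing module shows how the four refined features partition the input channels; the width of 64 channels is an assumption for illustration, as the patent does not fix it:

```python
def trace_channel_division(n_channels, n_splits=3, ratio=(1, 3)):
    """Trace the reserved-channel counts through the cascade of channel dividing
    sub-modules. Each split keeps ratio[0]/(ratio[0]+ratio[1]) of the remaining
    channels for refinement; the final coarse branch is refined directly (S124)."""
    total = ratio[0] + ratio[1]
    refined = []
    coarse = n_channels
    for _ in range(n_splits):
        reserved = coarse * ratio[0] // total
        refined.append(reserved)    # F_reserved -> 1x1 residual block -> F_refined
        coarse -= reserved          # F_coarse -> 3x3 conv -> next sub-module
    refined.append(coarse)          # last F_coarse becomes the 4th refined feature
    return refined

widths = trace_channel_division(64)  # channel widths of F_refined_1..F_refined_4
```

Note that the four refined widths sum back to the input width, so the aggregation step can concatenate them and restore the original channel count with its 1×1 convolution.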
Claims (4)
1. The image super-resolution reconstruction method based on feature channel division is characterized by comprising the following specific steps of:
s1, constructing an image super-resolution reconstruction model, wherein the image super-resolution reconstruction model comprises a feature extraction module, a nonlinear feature mapping module and a reconstruction module; the feature extraction module, the nonlinear feature mapping module and the reconstruction module are connected in series; the feature extraction module is used for extracting shallow features from the input low-resolution image to serve as the input of the nonlinear feature mapping module; the nonlinear feature mapping module is used for extracting information from the input shallow features to obtain deep features and sending them to the reconstruction module; the reconstruction module is used for extracting information from the input deep features, generating a residual image through sub-pixel convolution, and adding it to the image obtained by upsampling the low-resolution image with nearest-neighbor interpolation to obtain a high-resolution image;
the nonlinear feature mapping module consists of n feature channel dividing modules connected in series, wherein the output features of the former feature channel dividing module are used as the input of the latter feature channel dividing module, and the output features of the last feature channel dividing module are used as the input of the reconstruction module;
each feature channel dividing module is used for processing its input features using feature channel division to fully utilize the feature information; its main structure comprises 1 initial feature processing module, 3 channel dividing sub-modules, 3 residual blocks, 1 aggregation module and 1 channel attention module; the initial feature processing module applies a 3×3 convolution to the input shallow features and passes the result to the first channel dividing sub-module; each channel dividing sub-module divides the input image features along the channel dimension into a feature to be refined, F_reserved, and a coarse feature to be further processed, F_coarse; F_reserved is refined with a 1×1 residual block to obtain the refined feature F_refined, which is output to the aggregation module, while F_coarse passes through a 3×3 convolution and serves as the input of the next channel dividing sub-module; the F_coarse produced by the last channel dividing sub-module is refined by a 3×3 convolution into F_refined and output to the aggregation module; the aggregation module aggregates all F_refined features and, after a 1×1 convolution, feeds them into the channel attention module, which refines the input features and outputs the result;
s2, constructing a training set;
s3, training the image super-resolution reconstruction model, and training the image super-resolution reconstruction model in the step S1 by using the training set in the step S2, wherein the training comprises the steps of setting a loss function, a weight updating algorithm and a learning rate;
s4, inputting the image to be processed into a trained image super-resolution reconstruction model, and outputting to obtain a reconstructed high-resolution image.
2. The method for reconstructing an image super-resolution based on feature channel division according to claim 1, wherein the feature extraction module extracts shallow features from an input low-resolution image by using 3×3 convolution and then inputs the shallow features to the nonlinear feature mapping module, and converts RGB channels into n channels at the same time.
3. The method for reconstructing an image super-resolution based on feature channel division according to claim 1, wherein step S2 specifically comprises: cropping the high-resolution images in the training data into 192×192 patches and downscaling them by the super-resolution factor to obtain the low-resolution images; meanwhile, the training data are rotated and flipped for data enhancement, yielding the training set.
4. The method for reconstructing an image super-resolution based on feature channel division according to claim 1, wherein the specific training process of step S3 is as follows:
s31, a pre-training model: firstly training a super-resolution model with the magnification of 2, and initializing a network by using model parameters with the magnification of 2 for a model with the magnification of 3 or 4;
s32, setting a loss function: using the L1 norm loss function, the formula is:

L(θ) = (1/N) · Σ_{i=1}^{N} ‖ I_SR^(i) − I_HR^(i) ‖_1

where θ represents the parameters of the model, I_SR^(i) represents the high-resolution image reconstructed after the i-th low-resolution image is input into the model, I_HR^(i) represents the corresponding real (ground-truth) image, and ‖·‖_1 represents the L1 loss;
s33, weight updating algorithm: the Adam algorithm is used as the network optimizer, with initial parameters β1 = 0.9, β2 = 0.99, ε = 0.99;
s34, setting the learning rate: the initial learning rate is set to 2×10^−4 and the learning rate is halved every 200 batches;
s35, the model converges after training for 1000 batches, yielding the optimal image super-resolution reconstruction system.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110070247.0A CN112734646B (en) | 2021-01-19 | 2021-01-19 | Image super-resolution reconstruction method based on feature channel division |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110070247.0A CN112734646B (en) | 2021-01-19 | 2021-01-19 | Image super-resolution reconstruction method based on feature channel division |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112734646A CN112734646A (en) | 2021-04-30 |
CN112734646B true CN112734646B (en) | 2024-02-02 |
Family
ID=75592440
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110070247.0A Active CN112734646B (en) | 2021-01-19 | 2021-01-19 | Image super-resolution reconstruction method based on feature channel division |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112734646B (en) |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113139907B (en) * | 2021-05-18 | 2023-02-14 | 广东奥普特科技股份有限公司 | Generation method, system, device and storage medium for visual resolution enhancement |
CN113313632A (en) * | 2021-06-11 | 2021-08-27 | 展讯通信(天津)有限公司 | Image reconstruction method, system and related equipment |
CN113822802A (en) * | 2021-07-05 | 2021-12-21 | 腾讯科技(深圳)有限公司 | Microscope-based super-resolution method, device, equipment and medium |
CN113610706A (en) * | 2021-07-19 | 2021-11-05 | 河南大学 | Fuzzy monitoring image super-resolution reconstruction method based on convolutional neural network |
CN113793265A (en) * | 2021-09-14 | 2021-12-14 | 南京理工大学 | Image super-resolution method and system based on depth feature relevance |
CN115526775B (en) * | 2022-01-10 | 2023-09-22 | 荣耀终端有限公司 | Image data processing method and device |
CN114663280B (en) * | 2022-03-15 | 2024-07-16 | 北京万里红科技有限公司 | Super-resolution reconstruction model of long-distance iris image, training method, reconstruction method, device and medium |
CN114742706B (en) * | 2022-04-12 | 2023-11-28 | 内蒙古至远创新科技有限公司 | Water pollution remote sensing image super-resolution reconstruction method for intelligent environmental protection |
CN114926342B (en) * | 2022-05-31 | 2024-08-23 | 武汉大学 | Image super-resolution reconstruction model construction method, device, equipment and storage medium |
CN115063297A (en) * | 2022-06-30 | 2022-09-16 | 中国人民解放军战略支援部队信息工程大学 | Image super-resolution reconstruction method and system based on parameter reconstruction |
CN115205120A (en) * | 2022-07-26 | 2022-10-18 | 中国电信股份有限公司 | Image processing method, image processing apparatus, medium, and electronic device |
CN116912305B (en) * | 2023-09-13 | 2023-11-24 | 四川大学华西医院 | Brain CT image three-dimensional reconstruction method and device based on deep learning |
CN117132472B (en) * | 2023-10-08 | 2024-05-31 | 兰州理工大学 | Forward-backward separable self-attention-based image super-resolution reconstruction method |
CN117422614B (en) * | 2023-12-19 | 2024-03-12 | 华侨大学 | Single-frame image super-resolution method and device based on hybrid feature interaction Transformer |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3166070A1 (en) * | 2015-11-09 | 2017-05-10 | Thomson Licensing | Method for upscaling noisy images, and apparatus for upscaling noisy images |
KR20190010489A (en) * | 2017-07-20 | 2019-01-30 | 한국과학기술원 | Image processing method and apparatus using selection unit |
CN109741256A (en) * | 2018-12-13 | 2019-05-10 | 西安电子科技大学 | Image super-resolution rebuilding method based on rarefaction representation and deep learning |
CN111192200A (en) * | 2020-01-02 | 2020-05-22 | 南京邮电大学 | Image super-resolution reconstruction method based on fusion attention mechanism residual error network |
CN111311488A (en) * | 2020-01-15 | 2020-06-19 | 广西师范大学 | Efficient super-resolution reconstruction method based on deep learning |
CN111461983A (en) * | 2020-03-31 | 2020-07-28 | 华中科技大学鄂州工业技术研究院 | Image super-resolution reconstruction model and method based on different frequency information |
CN111986085A (en) * | 2020-07-31 | 2020-11-24 | 南京航空航天大学 | Image super-resolution method based on depth feedback attention network system |
CN112037131A (en) * | 2020-08-31 | 2020-12-04 | 上海电力大学 | Single-image super-resolution reconstruction method based on generation countermeasure network |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102215757B1 (en) * | 2019-05-14 | 2021-02-15 | 경희대학교 산학협력단 | Method, apparatus and computer program for image segmentation |
- 2021-01-19: Application CN202110070247.0A filed in China (CN); granted as patent CN112734646B, status active
Non-Patent Citations (2)
Title |
---|
Channel Splitting Network for Single MR Image Super-Resolution; Xiaole Zhao et al.; IEEE Transactions on Image Processing, Vol. 28, No. 11, pp. 5649-5662 * |
Medical image super-resolution reconstruction based on depthwise separable convolution and wide residual network; Gao Yuan et al.; Journal of Computer Applications (《计算机应用》), Vol. 39, No. 9, pp. 2731-2737 * |
Also Published As
Publication number | Publication date |
---|---|
CN112734646A (en) | 2021-04-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112734646B (en) | Image super-resolution reconstruction method based on feature channel division | |
CN110119780B (en) | Hyper-spectral image super-resolution reconstruction method based on generation countermeasure network | |
CN111127374B (en) | Pan-sharpening method based on multi-scale dense network | |
CN110136062B (en) | Super-resolution reconstruction method combining semantic segmentation | |
CN108259994B (en) | Method for improving video spatial resolution | |
CN109741256A (en) | Image super-resolution rebuilding method based on rarefaction representation and deep learning | |
CN112837224A (en) | Super-resolution image reconstruction method based on convolutional neural network | |
CN107220957B (en) | It is a kind of to utilize the remote sensing image fusion method for rolling Steerable filter | |
Guo et al. | Multiscale semilocal interpolation with antialiasing | |
Yang et al. | Image super-resolution based on deep neural network of multiple attention mechanism | |
CN106920214A (en) | Spatial target images super resolution ratio reconstruction method | |
CN111861886B (en) | Image super-resolution reconstruction method based on multi-scale feedback network | |
CN112950480A (en) | Super-resolution reconstruction method integrating multiple receptive fields and dense residual attention | |
CN111833261A (en) | Image super-resolution restoration method for generating countermeasure network based on attention | |
CN117173025A (en) | Single-frame image super-resolution method and system based on cross-layer mixed attention Transformer | |
CN110533591A (en) | Super resolution image reconstruction method based on codec structure | |
CN116681592A (en) | Image super-resolution method based on multi-scale self-adaptive non-local attention network | |
CN116703725A (en) | Method for realizing super resolution for real world text image by double branch network for sensing multiple characteristics | |
CN116563100A (en) | Blind super-resolution reconstruction method based on kernel guided network | |
CN117315735A (en) | Face super-resolution reconstruction method based on priori information and attention mechanism | |
CN112184552B (en) | Sub-pixel convolution image super-resolution method based on high-frequency feature learning | |
CN110288529A (en) | Single-image super-resolution reconstruction method based on recursive local synthesis network | |
CN113128517B (en) | Tone mapping image mixed visual feature extraction model establishment and quality evaluation method | |
CN113096015A (en) | Image super-resolution reconstruction method based on progressive sensing and ultra-lightweight network | |
CN111899166A (en) | Medical hyperspectral microscopic image super-resolution reconstruction method based on deep learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||