CN107194912A - The brain CT/MR image interfusion methods of improvement coupling dictionary learning based on rarefaction representation - Google Patents
The brain CT/MR image interfusion methods of improvement coupling dictionary learning based on rarefaction representation Download PDFInfo
- Publication number
- CN107194912A CN107194912A CN201710259812.1A CN201710259812A CN107194912A CN 107194912 A CN107194912 A CN 107194912A CN 201710259812 A CN201710259812 A CN 201710259812A CN 107194912 A CN107194912 A CN 107194912A
- Authority
- CN
- China
- Prior art keywords
- mrow
- msubsup
- msup
- msub
- mtd
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000000034 method Methods 0.000 title claims abstract description 58
- 210000004556 brain Anatomy 0.000 title claims abstract description 26
- 230000013016 learning Effects 0.000 title claims abstract description 26
- 230000008878 coupling Effects 0.000 title abstract 2
- 238000010168 coupling process Methods 0.000 title abstract 2
- 238000005859 coupling reaction Methods 0.000 title abstract 2
- 230000004927 fusion Effects 0.000 claims abstract description 44
- 238000012549 training Methods 0.000 claims abstract description 16
- 239000011159 matrix material Substances 0.000 claims description 51
- 239000013598 vector Substances 0.000 claims description 27
- 238000004422 calculation algorithm Methods 0.000 claims description 12
- 238000007500 overflow downdraw method Methods 0.000 claims description 9
- 230000006870 function Effects 0.000 claims description 8
- 230000008569 process Effects 0.000 claims description 7
- 238000005457 optimization Methods 0.000 claims description 6
- 238000000354 decomposition reaction Methods 0.000 claims description 5
- 230000000873 masking effect Effects 0.000 claims description 3
- 238000005259 measurement Methods 0.000 claims description 3
- 238000007781 pre-processing Methods 0.000 claims description 2
- 238000012545 processing Methods 0.000 abstract description 6
- 208000003174 Brain Neoplasms Diseases 0.000 abstract description 4
- 238000003745 diagnosis Methods 0.000 abstract description 3
- 238000002474 experimental method Methods 0.000 abstract 1
- 238000010586 diagram Methods 0.000 description 8
- 238000004590 computer program Methods 0.000 description 7
- 238000012986 modification Methods 0.000 description 5
- 230000004048 modification Effects 0.000 description 5
- 230000009466 transformation Effects 0.000 description 5
- 210000004872 soft tissue Anatomy 0.000 description 4
- 208000024806 Brain atrophy Diseases 0.000 description 3
- 210000000988 bone and bone Anatomy 0.000 description 3
- 238000003860 storage Methods 0.000 description 3
- 238000013459 approach Methods 0.000 description 2
- 238000004364 calculation method Methods 0.000 description 2
- 230000000694 effects Effects 0.000 description 2
- 238000003384 imaging method Methods 0.000 description 2
- 230000003044 adaptive effect Effects 0.000 description 1
- 230000004075 alteration Effects 0.000 description 1
- 210000000845 cartilage Anatomy 0.000 description 1
- 238000003759 clinical diagnosis Methods 0.000 description 1
- 230000000295 complement effect Effects 0.000 description 1
- 238000013170 computed tomography imaging Methods 0.000 description 1
- 201000010099 disease Diseases 0.000 description 1
- 208000037265 diseases, disorders, signs and symptoms Diseases 0.000 description 1
- 230000009977 dual effect Effects 0.000 description 1
- 230000003902 lesion Effects 0.000 description 1
- 230000014759 maintenance of location Effects 0.000 description 1
- 238000004519 manufacturing process Methods 0.000 description 1
- 210000003205 muscle Anatomy 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 210000000056 organ Anatomy 0.000 description 1
- 238000005070 sampling Methods 0.000 description 1
- 230000003595 spectral effect Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10088—Magnetic resonance imaging [MRI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Medical Informatics (AREA)
- Quality & Reliability (AREA)
- Radiology & Medical Imaging (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Magnetic Resonance Imaging Apparatus (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses the brain CT/MR image interfusion methods of the improvement coupling dictionary learning based on rarefaction representation, it is related to technical field of image processing, can be respectively to normal brain, three groups of brain medical images of encephalatrophy and brain tumor are merged, many experiments result shows ICDL methods proposed by the present invention and the method based on multi-scale transform, the method of traditional rarefaction representation, the method of method and multi-scale dictionary study based on K SVD dictionary learnings is compared, not only increase the quality of brain Medical image fusion, and effectively reduce the time of dictionary training, effectively help can be provided for clinical treatment diagnosis.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to a brain CT/MR image fusion method based on sparse representation and improved coupled dictionary learning.
Background
In the medical field, doctors need to study and analyze a single image having both high spatial and high spectral information in order to accurately diagnose and treat diseases. This type of information cannot be obtained from single modality images only, for example, CT imaging can capture bone structures of the human body with higher resolution, while MR imaging can capture detailed information of soft tissues of organs of the human body such as muscle, cartilage, fat, etc. Therefore, the complementary information of the CT image and the MR image are fused to obtain more comprehensive and abundant image information, and the effective help can be provided for clinical diagnosis and auxiliary treatment.
The more classical method applied to the field of brain medical image fusion at present is a method based on multi-scale transformation: discrete Wavelet Transform (DWT), Stationary Wavelet Transform (SWT), dual tree complex wavelet transform (dtctt), Laplacian Pyramid (LP), non-subsampled contourlet transform (NSCT). The method based on multi-scale transformation can well extract the salient features of the image, but is sensitive to image misregistration, and the traditional fusion strategy also enables the fusion result not to retain the detail information such as the edge and texture of the source image. With the rise of compressed sensing, the method based on sparse representation is widely used in the field of image fusion, and obtains excellent fusion effect. Yang.b et al sparsely represent the source image using redundant DCT dictionaries and fuse the sparse coefficients using a "select max" rule. The DCT dictionary is an implicit dictionary formed by DCT transformation, which is easy to implement quickly, but has limited representation capability. Elad et al propose a K-SVD algorithm for learning dictionaries from training images. Compared with a DCT dictionary, the learning dictionary is an explicit dictionary adaptive to the source image and has stronger representation capability. In the learned dictionary, a dictionary obtained by sampling and training only from a natural image is called as a single dictionary, the single dictionary can represent any natural image with the same category as a training sample, but for a brain medical image with a complex structure, the single dictionary is used for representing a CT image and an MR image, and an accurate sparse representation coefficient is difficult to obtain. Ophir et al propose a multi-scale dictionary learning method on wavelet domain. Namely, on the wavelet domain, all sub-bands are trained by using a K-SVD algorithm respectively to obtain sub-dictionaries corresponding to all sub-bands. The multi-scale dictionary effectively combines the advantages of both the parsing dictionary and the learned dictionary, enabling the capture of different features contained in images at different scales and in different directions. But the sub-dictionaries of all sub-bands are also single dictionaries, the sparse representation of all sub-bands by using the sub-dictionaries still has difficulty in obtaining accurate sparse representation coefficients, and the learning time efficiency of the separated dictionaries is low. Yu.N et al propose an image fusion method based on joint sparse representation and having a denoising function. The method comprises the steps of performing dictionary learning on a source image to be fused, extracting common features and respective features of the image to be fused according to a JSM-1 model, and then respectively combining and reconstructing to obtain a fused image. The method is suitable for brain medical images because the dictionary is trained for the source image to be fused, and accurate sparse representation coefficients can be obtained. But, the dictionary needs to be trained for each source image to be fused, so that the time efficiency is low and the flexibility is poor.
Disclosure of Invention
The embodiment of the invention provides a brain CT/MR image fusion method based on sparse representation and improved coupled dictionary learning, which can solve the problems in the prior art.
A brain CT/MR image fusion method based on sparse representation and improved coupled dictionary learning comprises the following steps:
a pretreatment stage: for the CT/MR source image I of the brain which is already registeredC,IR∈RMN,RMNRepresenting a vector space with M rows and N columns, using a sliding window of step size 1 to represent the source image IC,IRAre respectively divided intoImage blocks of size, for each CT source image ICAnd MR source image IRAll are provided withAn image block, thenCompiling the image blocks into m-dimensional column vectors, and compiling the CT source image ICThe jth image block in (1)MR source image IRThe jth image block in (1)Subtract the respective average:
wherein,andrespectively representAndmean of all elements in (1) represents an m-dimensional column vector of all 1;
a fusion stage: solving using the CoefROMP algorithmThe formula is as follows:
wherein | α | purple0Indicates the number of non-zero elements in the sparse coefficient α, indicates the accuracy of the allowable deviation, DFRepresentation dictionary DCAnd DRObtaining a fused dictionary after fusion;
will sparse coefficient of l2The norm is used as the liveness measurement of the source image, and then the sparse coefficient is obtainedAndfusing by the following fusion rules:
mean valueAndusing a "weighted average" rule fusion:
wherein,thenAndthe fusion result of (a) is:
a reconstruction stage: performing a preprocessing stage and a fusion stage on all image blocks to obtain a fusion result of all image blocks, for each block vectorReshaped by the process of reverse sliding windowThe image blocks are put back to the corresponding pixel positions, and then the repeated pixels are averaged to obtain the final fusion image IF。
Preferably, in the fusion phase, the fused dictionary is obtained by calculation through the following method:
using high quality CT and MR images as a training set, vector pairs { X ] are sampled from the training setC,XR}, defineA matrix of n sampled CT image vectors,a matrix formed for corresponding n sampled MR image vectors, wherein Rd×nRepresenting a vector space with d rows and n columns;
adding complete-support prior information on the basis of the dictionary learning cost function, and alternately updating DC,DRAnd A, the corresponding training optimization problem is as follows:
wherein A is XCAnd XRτ is the sparsity of the joint sparse coefficient matrix a, ⊙ represents a point multiplication, the mask matrix M consists of elements 0 and 1, defined as M { | a | ═ 0}, equivalent to M (i, j) ═ 1 if a (i, j) ═ 0, and 0 otherwise, introducing an auxiliary variable:
formula (1) can be equivalently converted into:
the solving process of the formula (3) comprises two steps of sparse coding and dictionary updating:
firstly, in the sparse coding stage, a random matrix initializes a dictionaryAndthe update of the joint sparse coefficient matrix a is achieved by solving equation (4):
if the non-zero elements of each column in the joint sparse coefficient matrix a are processed separately and the zero elements are kept complete, equation (4) can be converted into the following equation:
in the formula,is thatSubmatrix of non-zero subset corresponding to A, αiIs a non-zero part of the ith column A, and the formula (5) is solved by a coefficient reuse orthogonal matching pursuit algorithm CoefROMP to obtain an updated joint sparse coefficient matrix A;
secondly, in the dictionary updating stage, the optimization problem of the formula (3) is converted into:
the compensation term of equation (6) is written as:
in the formula,representation dictionaryThe k-th column to be updated,represents the kth row of the joint sparse coefficient matrix a,line j representing mask matrix M for assuranceWith zero elements in the correct position, mask matrixIs to form a row vectorCopying d times to obtain matrix with size d × n and rank 1, and masking matrixCan be effectively removedThe columns of the sample corresponding to the k-th atom are not used, and the error matrix EkSingular Value Decomposition (SVD) to obtain Ek=UΔVTUpdating the dictionary using the first column of the matrix UAtom of (1)While thinning out the coefficient matrix AUpdating to the product of the first column of the matrix V and Δ (1, 1);
and finally, circularly executing two stages of sparse coding and dictionary updating until reaching a preset iteration number, and outputting a pair of coupled DCAnd DRA dictionary.
Preferably, the dictionary D is processed using the following methodCAnd DRCarrying out fusion:
LC(n) and LR(N), N is 1,2, …, N represents the characteristic index of the nth atom of CT dictionary and MR dictionary respectively, and the fusion formula is expressed as follows:
where λ is 0.25.
The brain CT/MR image fusion method based on sparse representation and improved coupled dictionary learning provided by the embodiment of the invention can respectively fuse three groups of brain medical images of normal brain, brain atrophy and brain tumor, and multiple experimental results show that compared with a multi-scale transformation-based method, a traditional sparse representation method, a K-SVD dictionary learning-based method and a multi-scale dictionary learning method, the ICDL method provided by the invention not only improves the quality of brain medical image fusion, but also effectively reduces the dictionary training time and can provide effective help for clinical medical diagnosis.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a flowchart of a brain CT/MR image fusion method based on sparse representation and improved coupled dictionary learning according to an embodiment of the present invention;
FIG. 2 is a high quality CT and MR image as a training set;
fig. 3 is a CT/MR fusion result of a normal brain, where a is a CT image, b is an MR image, c is a DWT (discrete wavelet transform) image, d is an SWT (smooth wavelet transform) image, e is an NSCT (non-downsampling contourlet transform) image, f is an SRM (conventional sparse representation method) image, g is an SRK (K-SVD Dictionary learning method-based) image, h is an MDL (multi-scale Dictionary learning method-based) image, and i is an icdl (enhanced Coupled Dictionary learning) image used in the present invention;
FIG. 4 shows CT/MR fusion results of brain atrophy;
FIG. 5 shows the CT/MR fusion results of brain tumors.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, a brain CT/MR image fusion method based on sparse representation and improved coupled dictionary learning provided in an embodiment of the present invention includes the following steps:
step 100, a pretreatment stage: for the CT/MR source image I of the brain which is already registeredC,IR∈RMN,RMNRepresenting a vector space with M rows and N columns, using a sliding window of step size 1 to represent the source image IC,IRAre respectively divided intoImage blocks of size, for each CT source image ICAnd MR source image IRAll are provided withEncoding the image blocks into m-dimensional column vector, and encoding the CT source image ICThe jth image block in (1)MR source image IRThe jth image block in (1)Subtract the respective average:
wherein,andrespectively representAndmean of all elements in (1) represents an m-dimensional column vector of all 1;
step 200, fusion stage: solving using the CoefROMP algorithmThe formula is as follows:
wherein | α | purple0Indicates the number of non-zero elements in the sparse coefficient α, indicates the accuracy of the allowable deviation, DFRepresentation dictionary DCAnd DRThe specific calculation method of the fused dictionary obtained after fusion is as follows:
using the high quality CT and MR images shown in FIG. 2 as the training set, vector pairs { X ] are sampled from the training setC,XR}, defineA matrix of n sampled CT image vectors,a matrix formed for corresponding n sampled MR image vectors, whereinRd×nRepresenting a vector space with d rows and n columns;
the coupled dictionary training of the invention uses an improved K-SVD algorithm which adds complete supporting prior information on the basis of the traditional dictionary learning cost function and alternately updates DC,DRAnd A, the corresponding training optimization problem is as follows:
wherein A is XCAnd XRτ is the sparsity of the joint sparse coefficient matrix a, ⊙ denotes a dot product, the mask matrix M is composed of elements 0 and 1, defined as M { | a | ═ 0}, equivalent to M (i, j) ═ 1 if a (i, j) ═ 0, and 0 otherwise, a ⊙ M ═ 0 can keep all the zero entries in a complete.
Then formula (3) can be equivalently converted into:
the solving process of the formula (5) comprises two steps of sparse coding and dictionary updating.
Firstly, in the sparse coding stage, a random matrix initializes a dictionaryAndthe update of the joint sparse coefficient matrix a is achieved by solving equation (6):
if the non-zero elements of each column in the joint sparse coefficient matrix a are processed separately and the zero elements are kept complete, equation (6) can be converted into the following equation:
in the formula,is thatSubmatrix of non-zero subset corresponding to A, αiIs a non-zero part of column i. Equation (7) is solved by the coefficient reuse orthogonal matching pursuit algorithm CoefROMP, and thus an updated joint sparse coefficient matrix a can be obtained.
Secondly, in the dictionary updating stage, the optimization problem of equation (5) can be converted into:
the compensation term of equation (8) can be written as:
in the formula,representation dictionaryThe k-th column to be updated,represents the kth row of the joint sparse coefficient matrix a,line j representing mask matrix M for assuranceIs in the correct position. Mask matrixIs to form a row vectorCopying d times to obtain matrix with size d × n and rank 1, and masking matrixCan be effectively removedThe columns of the sample corresponding to the k-th atom are not used. For error matrix EkSingular Value Decomposition (SVD) to obtain Ek=UΔVTUpdating the dictionary using the first column of the matrix UAtom of (1)While thinning out the coefficient matrix AUpdated as the product of the first column of matrix V and Δ (1, 1).
And finally, circularly executing two stages of sparse coding and dictionary updating until reaching a preset iteration number, and outputting a pair of coupled DCAnd DRA dictionary. Then using the following method forDictionary DCAnd DRCarrying out fusion:
LC(n) and LR(N), N is 1,2, …, N represents the characteristic index of the N-th atom of CT dictionary and MR dictionary respectively, because the CT and MR images of brain are the images obtained by different imaging devices corresponding to the same part of human body, there must be a common characteristic and respective characteristics between them. The invention proposes to regard the atoms with larger difference of characteristic indexes as respective characteristics and use the rule of 'selecting the maximum' for fusion. The atoms with smaller difference of characteristic indexes are regarded as common characteristics, and are fused by using an average rule, and the formula is expressed as follows:
let λ be 0.25 here, and use the information entropy as the feature index according to the physical characteristics of the medical image. The method combines the sparse domain method and the space domain method, considers the physical characteristics of the medical image to calculate the characteristic indexes of the dictionary atoms, and has more definite physical significance compared with the sparse domain method.
And in the dictionary updating stage, the dictionary and the non-zero elements of the sparse representation coefficients are updated simultaneously, so that the representation error of the dictionary is smaller and the convergence speed of the dictionary is higher. In the sparse coding stage, considering that the representation of the previous iteration is ignored during each iteration, the CoefROMP algorithm proposes that the coefficient is updated by using the sparse representation residual information of the previous iteration, so that the solution of the required problem is obtained more quickly.
Calculating to obtain a fused dictionary DFThen, the l of the sparse coefficient is2The norm is used as the liveness measurement of the source image, and then the sparse coefficient is obtainedAndfusing by the following fusion rules:
Mean valueAndusing a "weighted average" rule fusion:
wherein,thenAndthe fusion result of (a) is:
step 300, a reconstruction phase: and executing the two steps on all the image blocks to obtain the fusion result of all the image blocks. For each block vectorReshaped by the process of reverse sliding windowThe image blocks are put back to the corresponding pixel positions, and then the repeated pixels are averaged to obtain the final fusion image IF。
To verify the effectiveness of the method of the present invention, three sets of registered brain CT/MR images are selected for fusion, wherein the CT/MR images are respectively a normal brain CT/MR image as shown in a and b of fig. 3, a brain atrophy CT/MR image as shown in a and b of fig. 4, a brain tumor CT/MR image as shown in a and b of fig. 5, and the image sizes are 256 × 256. The selected comparison algorithm comprises the following steps: discrete Wavelet Transform (DWT), Stationary Wavelet Transform (SWT), nonsubsampled contourlet transform (NSCT), traditional Sparse Representation Method (SRM), K-SVD dictionary learning-based method (SRK), multi-scale dictionary learning-based Method (MDL), and the fusion results are respectively shown in c, d, e, f, g, and h in fig. 3, c, d, e, f, g, and h in fig. 4, and c, d, e, f, g, and h in fig. 5.
In the multi-scale transform based approach, the decomposition level is set to 3 for both DWT and SWT approaches, and the wavelet bases are set to "db 6" and "bior 1.1", respectively. The NSCT method uses a "9-7" pyramid filter and a "c-d" directional filter, with the decomposition level set to {2 }2,22,23,24}. in the sparse representation based method, the sliding step is 1, the image blocks are all 8 × 8, the dictionary sizes are all 64 × 256, the error is 0.01, and the sparsity τ is 6, the ICDL method uses the improved K-SVD algorithm, and 6 multiple Dictionary Update Cycles (DUCs) and 30 iterations are performed.
3-5, the fused image edge texture of the DWT method is fuzzy, the image information is distorted and the block effect exists; compared with a DWT method, the fusion quality of the SWT method and the NSCT method is relatively good, the brightness, the contrast and the definition of an image are greatly improved, but the problems of edge brightness distortion and artifacts in soft tissues and focus areas still exist; compared with the method based on multi-scale transformation, the SRM and SRK method has the advantages that the bone tissues and the soft tissues of the image are clearer, the artifacts are reduced, and the focus area can be well identified; compared with the SRM and SRK methods, the MDL method can keep more detail information, the image quality is further improved, and partial artifacts still exist; the ICDL method provided by the invention is superior to other methods in the brightness, contrast, definition and detail retention of images, the fused images have no artifacts, the bone tissues, soft tissues and lesion areas are displayed clearly, and the diagnosis of doctors is facilitated.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.
Claims (3)
1. A brain CT/MR image fusion method based on sparse representation and improved coupled dictionary learning is characterized by comprising the following steps:
a pretreatment stage: for the CT/MR source image I of the brain which is already registeredC,IR∈RMN,RMNRepresenting a vector space with M rows and N columns, using a sliding window of step size 1 to represent the source image IC,IRAre respectively divided intoImage blocks of size, for each CT source image ICAnd MR source image IRAll are provided withEncoding the image blocks into m-dimensional column vector, and encoding the CT source image ICThe jth image block in (1)MR source image IRThe jth image block in (1)Subtract the respective average:
<mrow> <msubsup> <mover> <mi>x</mi> <mo>^</mo> </mover> <mi>C</mi> <mi>j</mi> </msubsup> <mo>=</mo> <msubsup> <mi>x</mi> <mi>C</mi> <mi>j</mi> </msubsup> <mo>-</mo> <msubsup> <mi>m</mi> <mi>C</mi> <mi>j</mi> </msubsup> <mo>&CenterDot;</mo> <mn>1</mn> </mrow>
<mrow> <msubsup> <mover> <mi>x</mi> <mo>^</mo> </mover> <mi>R</mi> <mi>j</mi> </msubsup> <mo>=</mo> <msubsup> <mi>x</mi> <mi>R</mi> <mi>j</mi> </msubsup> <mo>-</mo> <msubsup> <mi>m</mi> <mi>R</mi> <mi>j</mi> </msubsup> <mo>&CenterDot;</mo> <mn>1</mn> </mrow>
wherein,andrespectively representAndmean of all elements in (1) represents an m-dimensional column vector of all 1;
a fusion stage: solving using the CoefROMP algorithmThe formula is as follows:
<mfenced open = "" close = ""> <mtable> <mtr> <mtd> <mrow> <msubsup> <mi>&alpha;</mi> <mi>C</mi> <mi>j</mi> </msubsup> <mo>=</mo> <munder> <mi>argmin</mi> <mi>&alpha;</mi> </munder> <mo>|</mo> <mo>|</mo> <mi>&alpha;</mi> <mo>|</mo> <msub> <mo>|</mo> <mn>0</mn> </msub> </mrow> </mtd> <mtd> <mrow> <mi>s</mi> <mo>.</mo> <mi>t</mi> <mo>.</mo> <mo>|</mo> <mo>|</mo> <msubsup> <mover> <mi>x</mi> <mo>^</mo> </mover> <mi>C</mi> <mi>j</mi> </msubsup> <mo>-</mo> <msub> <mi>D</mi> <mi>F</mi> </msub> <mi>&alpha;</mi> <mo>|</mo> <msub> <mo>|</mo> <mn>2</mn> </msub> <mo><</mo> <mi>&epsiv;</mi> </mrow> </mtd> </mtr> </mtable> </mfenced>
<mfenced open = "" close = ""> <mtable> <mtr> <mtd> <mrow> <msubsup> <mi>&alpha;</mi> <mi>R</mi> <mi>j</mi> </msubsup> <mo>=</mo> <munder> <mi>argmin</mi> <mi>&alpha;</mi> </munder> <mo>|</mo> <mo>|</mo> <mi>&alpha;</mi> <mo>|</mo> <msub> <mo>|</mo> <mn>0</mn> </msub> </mrow> </mtd> <mtd> <mrow> <mi>s</mi> <mo>.</mo> <mi>t</mi> <mo>.</mo> <mo>|</mo> <mo>|</mo> <msubsup> <mover> <mi>x</mi> <mo>^</mo> </mover> <mi>R</mi> <mi>j</mi> </msubsup> <mo>-</mo> <msub> <mi>D</mi> <mi>F</mi> </msub> <mi>&alpha;</mi> <mo>|</mo> <msub> <mo>|</mo> <mn>2</mn> </msub> <mo><</mo> <mi>&epsiv;</mi> </mrow> </mtd> </mtr> </mtable> </mfenced>
wherein | α | purple0Indicates the number of non-zero elements in the sparse coefficient α, indicates the accuracy of the allowable deviation, DFRepresentation dictionary DCAnd DRObtaining a fused dictionary after fusion;
will sparse coefficient of l2The norm is used as the liveness measurement of the source image, and then the sparse coefficient is obtainedAndfusing by the following fusion rules:
<mrow> <msubsup> <mi>&alpha;</mi> <mi>F</mi> <mi>j</mi> </msubsup> <mo>=</mo> <mfenced open = "{" close = ""> <mtable> <mtr> <mtd> <msubsup> <mi>&alpha;</mi> <mi>C</mi> <mi>j</mi> </msubsup> <mo>,</mo> <mi>i</mi> <mi>f</mi> <mo>|</mo> <mo>|</mo> <msubsup> <mi>&alpha;</mi> <mi>C</mi> <mi>j</mi> </msubsup> <mo>|</mo> <msub> <mo>|</mo> <mn>2</mn> </msub> <mo>&GreaterEqual;</mo> <mo>|</mo> <mo>|</mo> <msubsup> <mi>&alpha;</mi> <mi>R</mi> <mi>j</mi> </msubsup> <mo>|</mo> <msub> <mo>|</mo> <mn>2</mn> </msub> <mo>,</mo> </mtd> </mtr> <mtr> <mtd> <msubsup> <mi>&alpha;</mi> <mi>R</mi> <mi>j</mi> </msubsup> <mo>,</mo> <mi>o</mi> <mi>t</mi> <mi>h</mi> <mi>e</mi> <mi>r</mi> <mi>w</mi> <mi>i</mi> <mi>s</mi> <mi>e</mi> </mtd> </mtr> </mtable> </mfenced> </mrow>
mean valueAndusing a "weighted average" rule fusion:
<mrow> <msubsup> <mi>m</mi> <mi>F</mi> <mi>j</mi> </msubsup> <mo>=</mo> <mi>w</mi> <mo>&CenterDot;</mo> <msubsup> <mi>m</mi> <mi>C</mi> <mi>j</mi> </msubsup> <mo>+</mo> <mrow> <mo>(</mo> <mn>1</mn> <mo>-</mo> <mi>w</mi> <mo>)</mo> </mrow> <mo>&CenterDot;</mo> <msubsup> <mi>m</mi> <mi>R</mi> <mi>j</mi> </msubsup> </mrow>
wherein,thenAndthe fusion result of (a) is:
<mrow> <msubsup> <mi>x</mi> <mi>F</mi> <mi>j</mi> </msubsup> <mo>=</mo> <msub> <mi>D</mi> <mi>F</mi> </msub> <msubsup> <mi>&alpha;</mi> <mi>F</mi> <mi>j</mi> </msubsup> <mo>+</mo> <msubsup> <mi>m</mi> <mi>F</mi> <mi>j</mi> </msubsup> <mo>&CenterDot;</mo> <mn>1</mn> </mrow>
a reconstruction stage: performing a preprocessing stage and a fusion stage on all image blocks to obtain a fusion result of all image blocks, for each block vectorReshaped by the process of reverse sliding windowThe image blocks are put back to the corresponding pixel positions, and then the repeated pixels are averaged to obtain the final fusion image IF。
2. The method of claim 1, wherein in the fusion phase, the fused dictionary is computed by:
using high quality CT and MR images as a training set, vector pairs { X ] are sampled from the training setC,XR}, defineA matrix of n sampled CT image vectors,a matrix formed for corresponding n sampled MR image vectors, wherein Rd×nRepresenting a vector space with d rows and n columns;
adding complete-support prior information on the basis of the dictionary learning cost function, and alternately updating DC,DRAnd A, the corresponding training optimization problem is as follows:
wherein A is XCAnd XRτ is the sparsity of the joint sparse coefficient matrix a, ⊙ represents a dot product, the mask matrix M consists of elements 0 and 1, defined as M ═ { a | ═ 0}, equivalent to M (i, j) ═ 1 if a (i, j) ═ 0, and 0 otherwise, introducing an auxiliary variable:
<mrow> <mover> <mi>X</mi> <mo>&OverBar;</mo> </mover> <mo>=</mo> <mfenced open = "[" close = "]"> <mtable> <mtr> <mtd> <msup> <mi>X</mi> <mi>C</mi> </msup> </mtd> </mtr> <mtr> <mtd> <msup> <mi>X</mi> <mi>R</mi> </msup> </mtd> </mtr> </mtable> </mfenced> <mo>,</mo> <mover> <mi>D</mi> <mo>&OverBar;</mo> </mover> <mo>=</mo> <mfenced open = "[" close = "]"> <mtable> <mtr> <mtd> <msup> <mi>D</mi> <mi>C</mi> </msup> </mtd> </mtr> <mtr> <mtd> <msup> <mi>D</mi> <mi>R</mi> </msup> </mtd> </mtr> </mtable> </mfenced> <mo>-</mo> <mo>-</mo> <mo>-</mo> <mrow> <mo>(</mo> <mn>2</mn> <mo>)</mo> </mrow> </mrow>
formula (1) can be equivalently converted into:
the solving process of the formula (3) comprises two steps of sparse coding and dictionary updating:
firstly, in the sparse coding stage, a random matrix initializes a dictionaryAndthe update of the joint sparse coefficient matrix a is achieved by solving equation (4):
if the non-zero elements of each column in the joint sparse coefficient matrix a are processed separately and the zero elements are kept complete, equation (4) can be converted into the following equation:
<mrow> <msub> <mi>&alpha;</mi> <mi>i</mi> </msub> <mo>=</mo> <munder> <mi>argmin</mi> <msub> <mi>&alpha;</mi> <mi>i</mi> </msub> </munder> <mo>|</mo> <mo>|</mo> <mover> <msub> <mi>x</mi> <mi>i</mi> </msub> <mo>&OverBar;</mo> </mover> <mo>-</mo> <mover> <msub> <mi>D</mi> <mi>i</mi> </msub> <mo>&OverBar;</mo> </mover> <msub> <mi>&alpha;</mi> <mi>i</mi> </msub> <mo>|</mo> <msubsup> <mo>|</mo> <mn>2</mn> <mn>2</mn> </msubsup> <mo>-</mo> <mo>-</mo> <mo>-</mo> <mrow> <mo>(</mo> <mn>5</mn> <mo>)</mo> </mrow> </mrow>
in the formula,is thatSubmatrix of non-zero subset corresponding to A, αiIs a non-zero part of the ith column A, and the formula (5) is solved by a coefficient reuse orthogonal matching pursuit algorithm CoefROMP to obtain an updated joint sparse coefficient matrix A;
secondly, in the dictionary updating stage, the optimization problem of the formula (3) is converted into:
the compensation term of equation (6) is written as:
in the formula,representation dictionaryThe k-th column to be updated,represents the kth row of the joint sparse coefficient matrix a,line j representing mask matrix M for assuranceWith zero elements in the correct position, mask matrixIs to form a row vectorCopying d times to obtain matrix with size d × n and rank 1, and masking matrixCan be effectively removedThe columns of the sample corresponding to the k-th atom are not used, and the error matrix EkSingular Value Decomposition (SVD) to obtain Ek=UΔVTUpdating the dictionary using the first column of the matrix UAtom of (1)While thinning out the coefficient matrix AUpdating to the product of the first column of the matrix V and Δ (1, 1);
and finally, circularly executing two stages of sparse coding and dictionary updating until reaching a preset iteration number, and outputting a pair of coupled DCAnd DRA dictionary.
3. The method of claim 1, wherein the dictionary D is paired using the following methodCAnd DRCarrying out fusion:
LC(n) and LR(N), N is 1,2, …, N represents the characteristic index of the nth atom of CT dictionary and MR dictionary respectively, and the fusion formula is expressed as follows:
<mrow> <msup> <mi>D</mi> <mi>F</mi> </msup> <mrow> <mo>(</mo> <mi>n</mi> <mo>)</mo> </mrow> <mo>=</mo> <mfenced open = "{" close = ""> <mtable> <mtr> <mtd> <mrow> <msup> <mi>D</mi> <mi>C</mi> </msup> <mrow> <mo>(</mo> <mi>n</mi> <mo>)</mo> </mrow> <mo>,</mo> <mi>i</mi> <mi>f</mi> <mi> </mi> <msup> <mi>L</mi> <mi>C</mi> </msup> <mrow> <mo>(</mo> <mi>n</mi> <mo>)</mo> </mrow> <mo>></mo> <msup> <mi>L</mi> <mi>R</mi> </msup> <mrow> <mo>(</mo> <mi>n</mi> <mo>)</mo> </mrow> <mi>a</mi> <mi>n</mi> <mi>d</mi> <mfrac> <mrow> <mo>|</mo> <msup> <mi>L</mi> <mi>C</mi> </msup> <mrow> <mo>(</mo> <mi>n</mi> <mo>)</mo> </mrow> <mo>-</mo> <msup> <mi>L</mi> <mi>R</mi> </msup> <mrow> <mo>(</mo> <mi>n</mi> <mo>)</mo> </mrow> <mo>|</mo> </mrow> <mrow> <mo>|</mo> <msup> <mi>L</mi> <mi>C</mi> </msup> <mrow> <mo>(</mo> <mi>n</mi> <mo>)</mo> </mrow> <mo>+</mo> <msup> <mi>L</mi> <mi>R</mi> </msup> <mrow> <mo>(</mo> <mi>n</mi> <mo>)</mo> </mrow> <mo>|</mo> </mrow> </mfrac> <mo>&GreaterEqual;</mo> <mi>&lambda;</mi> <mo>,</mo> </mrow> </mtd> </mtr> <mtr> <mtd> <mrow> <msup> <mi>D</mi> <mi>R</mi> </msup> <mrow> <mo>(</mo> <mi>n</mi> <mo>)</mo> </mrow> <mo>,</mo> <mi>i</mi> <mi>f</mi> <mi> </mi> <msup> <mi>L</mi> <mi>C</mi> </msup> <mrow> <mo>(</mo> <mi>n</mi> <mo>)</mo> </mrow> <mo>&le;</mo> <msup> <mi>L</mi> <mi>R</mi> </msup> <mrow> <mo>(</mo> <mi>n</mi> <mo>)</mo> </mrow> <mi>a</mi> <mi>n</mi> <mi>d</mi> <mfrac> <mrow> <mo>|</mo> <msup> <mi>L</mi> <mi>C</mi> </msup> <mrow> <mo>(</mo> <mi>n</mi> <mo>)</mo> </mrow> <mo>-</mo> <msup> <mi>L</mi> <mi>R</mi> </msup> <mrow> <mo>(</mo> <mi>n</mi> <mo>)</mo> </mrow> <mo>|</mo> </mrow> <mrow> <mo>|</mo> <msup> <mi>L</mi> <mi>C</mi> </msup> <mrow> <mo>(</mo> <mi>n</mi> <mo>)</mo> </mrow> <mo>+</mo> <msup> <mi>L</mi> <mi>R</mi> </msup> <mrow> <mo>(</mo> <mi>n</mi> <mo>)</mo> </mrow> <mo>|</mo> </mrow> </mfrac> <mo>&GreaterEqual;</mo> <mi>&lambda;</mi> <mo>,</mo> </mrow> </mtd> </mtr> <mtr> <mtd> <mrow> <mo>(</mo> <msup> <mi>D</mi> <mi>C</mi> </msup> <mo>(</mo> <mi>n</mi> <mo>)</mo> <mo>+</mo> <msup> <mi>D</mi> <mi>R</mi> </msup> <mo>(</mo> <mi>n</mi> <mo>)</mo> <mo>)</mo> <mo>/</mo> <mn>2</mn> <mo>,</mo> <mi>o</mi> <mi>t</mi> <mi>h</mi> <mi>e</mi> <mi>r</mi> <mi>w</mi> <mi>i</mi> <mi>s</mi> <mi>e</mi> </mrow> </mtd> </mtr> </mtable> </mfenced> </mrow>
where λ is 0.25.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710259812.1A CN107194912B (en) | 2017-04-20 | 2017-04-20 | Brain CT/MR image fusion method based on sparse representation and improved coupled dictionary learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710259812.1A CN107194912B (en) | 2017-04-20 | 2017-04-20 | Brain CT/MR image fusion method based on sparse representation and improved coupled dictionary learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107194912A true CN107194912A (en) | 2017-09-22 |
CN107194912B CN107194912B (en) | 2020-12-29 |
Family
ID=59871779
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710259812.1A Active CN107194912B (en) | 2017-04-20 | 2017-04-20 | Brain CT/MR image fusion method based on sparse representation and improved coupled dictionary learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107194912B (en) |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107680072A (en) * | 2017-11-01 | 2018-02-09 | 淮海工学院 | It is a kind of based on the positron emission fault image of depth rarefaction representation and the fusion method of MRI |
CN108428225A (en) * | 2018-01-30 | 2018-08-21 | 李家菊 | Image department brain image fusion identification method based on multiple dimensioned multiple features |
CN108846430A (en) * | 2018-05-31 | 2018-11-20 | 兰州理工大学 | A kind of sparse representation method of the picture signal based on polyatom dictionary |
CN109461140A (en) * | 2018-09-29 | 2019-03-12 | 沈阳东软医疗系统有限公司 | Image processing method and device, equipment and storage medium |
CN109946076A (en) * | 2019-01-25 | 2019-06-28 | 西安交通大学 | A kind of planet wheel bearing fault identification method of weighted multiscale dictionary learning frame |
CN109998599A (en) * | 2019-03-07 | 2019-07-12 | 华中科技大学 | A kind of light based on AI technology/sound double-mode imaging fundus oculi disease diagnostic system |
CN110443248A (en) * | 2019-06-26 | 2019-11-12 | 武汉大学 | Substantially remote sensing image semantic segmentation block effect removing method and system |
WO2020223865A1 (en) * | 2019-05-06 | 2020-11-12 | 深圳先进技术研究院 | Ct image reconstruction method, device, storage medium, and computer apparatus |
CN114428873A (en) * | 2022-04-07 | 2022-05-03 | 源利腾达(西安)科技有限公司 | Thoracic surgery examination data sorting method |
CN117877686A (en) * | 2024-03-13 | 2024-04-12 | 自贡市第一人民医院 | Intelligent management method and system for traditional Chinese medicine nursing data |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104182954A (en) * | 2014-08-27 | 2014-12-03 | 中国科学技术大学 | Real-time multi-modal medical image fusion method |
CN104376565A (en) * | 2014-11-26 | 2015-02-25 | 西安电子科技大学 | Non-reference image quality evaluation method based on discrete cosine transform and sparse representation |
-
2017
- 2017-04-20 CN CN201710259812.1A patent/CN107194912B/en active Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104182954A (en) * | 2014-08-27 | 2014-12-03 | 中国科学技术大学 | Real-time multi-modal medical image fusion method |
CN104376565A (en) * | 2014-11-26 | 2015-02-25 | 西安电子科技大学 | Non-reference image quality evaluation method based on discrete cosine transform and sparse representation |
Non-Patent Citations (2)
Title |
---|
宗静静等: "联合稀疏表示的医学图像融合及同步去噪", 《中国生物》 * |
李超等: "基于非下采样Contourlet变换和区域特征的医学图像融合", 《计算机应用》 * |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107680072A (en) * | 2017-11-01 | 2018-02-09 | 淮海工学院 | It is a kind of based on the positron emission fault image of depth rarefaction representation and the fusion method of MRI |
CN108428225A (en) * | 2018-01-30 | 2018-08-21 | 李家菊 | Image department brain image fusion identification method based on multiple dimensioned multiple features |
CN108846430B (en) * | 2018-05-31 | 2022-02-22 | 兰州理工大学 | Image signal sparse representation method based on multi-atom dictionary |
CN108846430A (en) * | 2018-05-31 | 2018-11-20 | 兰州理工大学 | A kind of sparse representation method of the picture signal based on polyatom dictionary |
CN109461140A (en) * | 2018-09-29 | 2019-03-12 | 沈阳东软医疗系统有限公司 | Image processing method and device, equipment and storage medium |
CN109946076A (en) * | 2019-01-25 | 2019-06-28 | 西安交通大学 | A kind of planet wheel bearing fault identification method of weighted multiscale dictionary learning frame |
CN109946076B (en) * | 2019-01-25 | 2020-04-28 | 西安交通大学 | Planetary wheel bearing fault identification method of weighted multi-scale dictionary learning framework |
CN109998599A (en) * | 2019-03-07 | 2019-07-12 | 华中科技大学 | A kind of light based on AI technology/sound double-mode imaging fundus oculi disease diagnostic system |
WO2020223865A1 (en) * | 2019-05-06 | 2020-11-12 | 深圳先进技术研究院 | Ct image reconstruction method, device, storage medium, and computer apparatus |
CN110443248A (en) * | 2019-06-26 | 2019-11-12 | 武汉大学 | Substantially remote sensing image semantic segmentation block effect removing method and system |
CN110443248B (en) * | 2019-06-26 | 2021-12-03 | 武汉大学 | Method and system for eliminating semantic segmentation blocking effect of large-amplitude remote sensing image |
CN114428873A (en) * | 2022-04-07 | 2022-05-03 | 源利腾达(西安)科技有限公司 | Thoracic surgery examination data sorting method |
CN117877686A (en) * | 2024-03-13 | 2024-04-12 | 自贡市第一人民医院 | Intelligent management method and system for traditional Chinese medicine nursing data |
CN117877686B (en) * | 2024-03-13 | 2024-05-07 | 自贡市第一人民医院 | Intelligent management method and system for traditional Chinese medicine nursing data |
Also Published As
Publication number | Publication date |
---|---|
CN107194912B (en) | 2020-12-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107194912B (en) | Brain CT/MR image fusion method based on sparse representation and improved coupled dictionary learning | |
CN110348515B (en) | Image classification method, image classification model training method and device | |
CN104933683B (en) | A kind of non-convex low-rank method for reconstructing for magnetic resonance fast imaging | |
Hu et al. | Multi-modality medical image fusion based on separable dictionary learning and Gabor filtering | |
CN110097512A (en) | Construction method and the application of the three-dimensional MRI image denoising model of confrontation network are generated based on Wasserstein | |
CN110047138A (en) | A kind of magnetic resonance thin layer image rebuilding method | |
CN104182954B (en) | Real-time multi-modal medical image fusion method | |
Chen et al. | A novel medical image fusion method based on rolling guidance filtering | |
CN104156994A (en) | Compressed sensing magnetic resonance imaging reconstruction method | |
Lin et al. | BATFormer: Towards boundary-aware lightweight transformer for efficient medical image segmentation | |
Aghabiglou et al. | Projection-Based cascaded U-Net model for MR image reconstruction | |
CN107301630B (en) | CS-MRI image reconstruction method based on ordering structure group non-convex constraint | |
CN109214989A (en) | Single image super resolution ratio reconstruction method based on Orientation Features prediction priori | |
CN111487573B (en) | Enhanced residual error cascade network model for magnetic resonance undersampling imaging | |
CN114331849B (en) | Cross-mode nuclear magnetic resonance hyper-resolution network and image super-resolution method | |
CN117333750A (en) | Spatial registration and local global multi-scale multi-modal medical image fusion method | |
CN117036162B (en) | Residual feature attention fusion method for super-resolution of lightweight chest CT image | |
CN115018728A (en) | Image fusion method and system based on multi-scale transformation and convolution sparse representation | |
CN115457359A (en) | PET-MRI image fusion method based on adaptive countermeasure generation network | |
Chan et al. | An attention-based deep convolutional neural network for ultra-sparse-view CT reconstruction | |
CN116630964A (en) | Food image segmentation method based on discrete wavelet attention network | |
CN116309754A (en) | Brain medical image registration method and system based on local-global information collaboration | |
Barbano et al. | Steerable conditional diffusion for out-of-distribution adaptation in imaging inverse problems | |
CN115830016A (en) | Medical image registration model training method and equipment | |
Yang et al. | MGDUN: An interpretable network for multi-contrast MRI image super-resolution reconstruction |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |