Novel MRI Techniques and Biomedical Image Processing

A special issue of Bioengineering (ISSN 2306-5354). This special issue belongs to the section "Biosignal Processing".

Deadline for manuscript submissions: closed (29 February 2024) | Viewed by 21990

Special Issue Editor

Department of Bioengineering, University of California Riverside, Riverside, CA 92521, USA
Interests: MRI; perfusion imaging; arterial spin labeling; machine learning; image processing

Special Issue Information

Dear Colleagues,

In the fifty years since Lauterbur published the first magnetic resonance image in 1973, the field has witnessed numerous important technical breakthroughs made possible by its pioneers and researchers. MRI is now widely used in research and clinical applications and continues to advance through the efforts of researchers and clinicians. At the same time, more advanced analysis tools have become available and have been adopted by research communities to help better understand and interpret the ever-increasing volume of image data.

This Special Issue on Novel MRI Techniques and Biomedical Image Processing welcomes original research papers and comprehensive reviews focused on two important aspects of biomedical imaging: (1) MR image generation and (2) image processing. The image generation line includes, but is not limited to, novel MRI contrast mechanisms and acquisition and reconstruction methods, while the image processing line covers processing and understanding image data obtained from a wide range of imaging modalities, such as CT, nuclear medicine, and optical imaging. One particular area of interest is the application of machine (deep) learning-based methods to MR image generation and biomedical image processing in general.

Dr. Jia Guo
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Bioengineering is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (10 papers)


Research


20 pages, 4364 KiB  
Article
3D Quantitative-Amplified Magnetic Resonance Imaging (3D q-aMRI)
by Itamar Terem, Kyan Younes, Nan Wang, Paul Condron, Javid Abderezaei, Haribalan Kumar, Hillary Vossler, Eryn Kwon, Mehmet Kurt, Elizabeth Mormino, Samantha Holdsworth and Kawin Setsompop
Bioengineering 2024, 11(8), 851; https://doi.org/10.3390/bioengineering11080851 - 20 Aug 2024
Cited by 1 | Viewed by 1623
Abstract
Amplified MRI (aMRI) is a promising new technique that can visualize pulsatile brain tissue motion by amplifying sub-voxel motion in cine MRI data, but it lacks the ability to quantify the sub-voxel motion field in physical units. Here, we introduce a novel post-processing algorithm called 3D quantitative amplified MRI (3D q-aMRI). This algorithm enables the visualization and quantification of pulsatile brain motion. 3D q-aMRI was validated and optimized on a 3D digital phantom and was applied in vivo on healthy volunteers to assess its ability to accurately measure brain parenchyma and CSF voxel displacement. Simulation results show that 3D q-aMRI can accurately quantify sub-voxel motions on the order of 0.01 of a voxel size. The algorithm hyperparameters were optimized and tested on in vivo data. The repeatability and reproducibility of 3D q-aMRI were shown on six healthy volunteers. The voxel displacement field extracted by 3D q-aMRI is highly correlated with the displacement measurements estimated by phase contrast (PC) MRI. In addition, the voxel displacement profile through the cerebral aqueduct resembled the CSF flow profile reported in previous literature. Differences in brain motion were observed in patients with dementia compared with age-matched healthy controls. In summary, 3D q-aMRI is a promising new technique that can both visualize and quantify pulsatile brain motion. Its ability to accurately quantify sub-voxel motion in physical units holds potential for the assessment of pulsatile brain motion as well as the indirect assessment of CSF homeostasis. While further research is warranted, 3D q-aMRI may provide important diagnostic information for neurological disorders such as Alzheimer’s disease.
(This article belongs to the Special Issue Novel MRI Techniques and Biomedical Image Processing)
Figure 1
<p>The 3D q-aMRI algorithm pipeline begins with the decomposition of volumetric cine MRI using the 3D complex steerable pyramid. This process separates the images into various scales and orientations, isolating different spatial frequency components. The decomposed images are then split into amplitude and phase components, with the phases encoding information about sub-voxel motion. Next, the phase components are temporally filtered at each spatial location, orientation, and scale to enhance significant temporal changes. These filtered phases are split and proceed along two paths: the original amplification path for visualization and the quantification path for generating voxel displacement maps. For quantitative estimation, the data undergo the estimation of the spatial phase derivative. This involves estimating the spatial phase derivative from the decomposed image. The voxel displacement field is then calculated by solving a least squares optimization objective. This formula calculates the best-fit voxel displacement field that aligns the phase derivatives with the phase temporal changes. The color-coded images display the estimated voxel displacements in the axial (L/R direction, white arrow), sagittal (S/I direction, white arrow), and coronal (S/I direction, white arrow) planes. The plus sign indicates the positive direction of motion.</p>
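The caption above describes the core quantification step: temporal phase changes in a band-passed complex decomposition are related to the spatial phase derivative through a least-squares fit. A minimal 1D sketch of that idea follows, with a single Gaussian band-pass standing in for one scale/orientation of the 3D complex steerable pyramid; the function name and parameters are illustrative, not taken from the paper.

```python
import numpy as np

def estimate_subvoxel_shift(frame0, frame1, freq, bw=0.05):
    """Toy 1D illustration of phase-based sub-voxel motion estimation.

    Band-pass both frames around one spatial frequency (a stand-in for one
    scale/orientation of the 3D complex steerable pyramid), take the phase,
    and solve the least-squares relation  dphi_t = -(dphi/dx) * shift.
    """
    n = frame0.size
    k = np.fft.fftfreq(n)
    band = np.exp(-((k - freq) ** 2) / (2 * bw ** 2))    # keeps +freq only
    a0 = np.fft.ifft(np.fft.fft(frame0) * band)          # complex band-pass signal
    a1 = np.fft.ifft(np.fft.fft(frame1) * band)
    dphi_t = np.angle(a1 * np.conj(a0))                  # wrapped temporal phase change
    dphi_x = np.gradient(np.unwrap(np.angle(a0)))        # spatial phase derivative
    # least-squares best-fit shift aligning phase derivatives with phase changes
    return -np.sum(dphi_x * dphi_t) / np.sum(dphi_x ** 2)

x = np.arange(256)
f = 50 / 256                                   # exactly periodic on the grid
moving = np.cos(2 * np.pi * f * (x - 0.03))    # ground-truth shift: 0.03 voxel
est = estimate_subvoxel_shift(np.cos(2 * np.pi * f * x), moving, f)
print(round(est, 4))                           # recovers ~0.03
```

The same least-squares structure extends to 3D by stacking the per-orientation phase-derivative equations at each voxel.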
F">
Figure 2
<p>Validation of 3D q-aMRI on a 3D cylinder phantom (initial height <math display="inline"><semantics> <msub> <mi>h</mi> <mn>0</mn> </msub> </semantics></math> and <math display="inline"><semantics> <msub> <mi>r</mi> <mn>0</mn> </msub> </semantics></math> radius) that undergoes cyclic tension and compression. (<b>a</b>) The phantom at reference time <math display="inline"><semantics> <msub> <mi>t</mi> <mn>0</mn> </msub> </semantics></math> and deformation time <math display="inline"><semantics> <msub> <mi>t</mi> <mi>i</mi> </msub> </semantics></math>. (<b>b</b>) Error as a function of displacement in the absence of noise.</p>
F">
Figure 3
<p><span class="html-italic">In vivo</span> validation of the 3D q-aMRI against the observed signal in 3D aMRI. The 4D cine data are amplified by 3D aMRI. In addition, the first volume in the cine data is warped by an amplified version of the estimated motion field, and normalized temporal variance maps are calculated for both amplified movies. The maps suggest that 3D q-aMRI quantification output matches the motion observed qualitatively in 3D aMRI.</p>
F">
Figure 4
<p>Normalized temporal standard deviation maps of the amplified videos for different pyramid levels. The data were amplified with an amplification parameter of 30 with a Gaussian window with <math display="inline"><semantics> <mrow> <mi>σ</mi> <mo>=</mo> <mn>5</mn> </mrow> </semantics></math>. Coherent motion exists mainly in the first two levels of the steerable pyramid.</p>
F">
Figure 5
<p>Normalized temporal standard deviation maps of the amplified videos for different temporal frequency bands. The data were amplified with an amplification parameter of 30 with a Gaussian window with <math display="inline"><semantics> <mrow> <mi>σ</mi> <mo>=</mo> <mn>5</mn> </mrow> </semantics></math>. Motion was extracted using the first two levels of the steerable pyramid. Coherent motion exists mainly in the one to four heart rate harmonics band.</p>
F">
Figure 6
<p>The pulsatile brain motion in the sagittal (S/I direction, indicated by a white arrow) and axial (L/R direction, indicated by a white arrow) for different standard deviation sizes of the Gaussian window. The Gaussian smoothing reduces the noise level in the estimated motion field. For standard deviations larger than <math display="inline"><semantics> <mrow> <mi>σ</mi> <mo>=</mo> <mn>5</mn> </mrow> </semantics></math>, the estimated motion field is smooth and generally remains constant.</p>
F">
Figure 7
<p>The pulsatile brain motion in the sagittal (S/I direction, indicated by a white arrow) and axial (L/R direction, indicated by a white arrow) directions for different isotropic spatial resolutions. The plus sign represents the positive direction of motion. As can be seen, the algorithm can robustly estimate the motion field for different image resolutions (up to 1.8 mm isotropic voxel size). Note that the dark blue/red regions (red arrows) in the sagittal plane point to the basilar artery, which exhibits apparent motion (larger than 1.5 pixels).</p>
F">
Figure 8
<p>(<b>a</b>) Comparison between PC-MRI (top) and 3D q-aMRI (bottom) for sagittal (S/I direction), coronal (S/I direction), and axial (L/R direction) planes. The estimated field captures the relative brain tissue deformation over time and the physical change in shape of the ventricles by the relative movement of the surrounding tissues. (<b>b</b>) The flow/motion profile through the cerebral aqueduct as extracted by 3D q-aMRI (left), which is comparable to that reported by [<a href="#B46-bioengineering-11-00851" class="html-bibr">46</a>] as shown in the inset (right). Note that the graph from [<a href="#B46-bioengineering-11-00851" class="html-bibr">46</a>] seen here is normalized, but the actual CSF flow values reported were an order of magnitude higher than the 3D q-aMRI flow profile.</p>
F">
Figure 9
<p>(<b>a</b>) The average (over different brain regions) voxel displacement profile for two subjects (<math display="inline"><semantics> <msub> <mi>S</mi> <mn>1</mn> </msub> </semantics></math> and <math display="inline"><semantics> <msub> <mi>S</mi> <mn>3</mn> </msub> </semantics></math>) in the S/I direction for eight scans (<math display="inline"><semantics> <msub> <mi>t</mi> <mn>0</mn> </msub> </semantics></math> to <math display="inline"><semantics> <msub> <mi>t</mi> <mn>7</mn> </msub> </semantics></math>). Top—the brain regions (lateral ventricles, 3rd ventricle, 4th ventricle, brainstem, and cerebellum) where the average voxel displacement was estimated. Bottom—the first two columns depict the voxel displacement profile for all scans, for each of the two subjects. The black line represents the average motion over all scans, together with an error bar (95% confidence interval). The last column depicts the average motion for all six subjects, along with error bars representing the 95% confidence interval. The results indicate high repeatability across the time points within each subject, with similar motion patterns but different magnitudes across all subjects. (<b>b</b>) The boxplots for each brain region and the Intraclass Correlation Coefficient (ICC) of the dynamic time warping (DTW) distance. The plus sign denotes an outlier.</p>
F">
Figure 10
<p>Diffuse reduction in brain bulk displacement on both the sagittal (S/I direction, white arrow) and axial (L/R direction, white arrow) planes for an elderly adult with MCI due to dementia (70-year-old female) compared to an elderly control (74-year-old female). The plus sign represents the positive direction of motion. In addition, loss of symmetry and irregular lateral motion of the lateral ventricles are seen in the displacement maps.</p>
">
18 pages, 5098 KiB  
Article
Evaluating Machine Learning-Based MRI Reconstruction Using Digital Image Quality Phantoms
by Fei Tan, Jana G. Delfino and Rongping Zeng
Bioengineering 2024, 11(6), 614; https://doi.org/10.3390/bioengineering11060614 - 15 Jun 2024
Viewed by 1754
Abstract
Quantitative and objective evaluation tools are essential for assessing the performance of machine learning (ML)-based magnetic resonance imaging (MRI) reconstruction methods. However, the commonly used fidelity metrics, such as mean squared error (MSE), structural similarity (SSIM), and peak signal-to-noise ratio (PSNR), often fail to capture fundamental and clinically relevant MR image quality aspects. To address this, we propose evaluating ML-based MRI reconstruction using digital image quality phantoms and automated evaluation methods. Our phantoms are based upon the American College of Radiology (ACR) large physical phantom but created in k-space to simulate their MR images, and they can vary in object size, signal-to-noise ratio, resolution, and image contrast. Our evaluation pipeline incorporates metrics of geometric accuracy, intensity uniformity, percentage ghosting, sharpness, signal-to-noise ratio, resolution, and low-contrast detectability. We demonstrate the utility of the proposed pipeline by assessing an example ML-based reconstruction model across various training and testing scenarios. The performance results indicate that training data acquired with a lower undersampling factor and coils of larger anatomical coverage yield a better-performing model. The comprehensive and standardized pipeline introduced in this study can help facilitate a better understanding of model performance and guide the future development and advancement of ML-based reconstruction algorithms.
(This article belongs to the Special Issue Novel MRI Techniques and Biomedical Image Processing)
Figure 1
<p>Pipeline for creating (<b>a</b>) simple disk phantom and (<b>b</b>) compound phantoms, including a resolution phantom and a low-contrast phantom. All of the phantoms are created in k-space (2D frequency domain) using its mathematical definition and Fourier theorem. The phantoms in image space are calculated by simple inverse Fast Fourier transform (iFFT).</p>
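As a rough illustration of defining a phantom analytically in k-space and recovering the image by iFFT, as the caption describes: the 2D Fourier transform of a uniform disk of radius R is the "jinc" function R·J1(2πRρ)/ρ (equal to πR² at ρ = 0). The sketch below uses that closed form; the sizes and the helper name are illustrative, not the paper's actual implementation.

```python
import numpy as np
from scipy.special import j1  # first-order Bessel function of the first kind

def disk_phantom_from_kspace(n=128, radius=20.0):
    """Define a centered disk phantom analytically in k-space, then recover
    the image with an inverse FFT, mirroring the k-space-first construction.

    The 2D FT of a unit-intensity disk of radius R is the 'jinc':
    R * J1(2*pi*R*rho) / rho, with value pi*R^2 at rho = 0.
    """
    k = np.fft.fftfreq(n)                       # spatial frequency, cycles/pixel
    kx, ky = np.meshgrid(k, k, indexing="ij")
    rho = np.hypot(kx, ky)
    F = np.where(rho > 0,
                 radius * j1(2 * np.pi * radius * rho) / np.maximum(rho, 1e-12),
                 np.pi * radius ** 2)
    # iFFT back to image space: ~1 inside the disk, ~0 outside (plus Gibbs ringing)
    return np.fft.fftshift(np.fft.ifft2(F)).real

img = disk_phantom_from_kspace()
```

Because the phantom is born in k-space, the same analytic samples can be undersampled or noise-corrupted before reconstruction, which is what makes this construction useful for testing reconstruction networks.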
F">
Figure 2
<p>Illustration of image quality evaluation process. (<b>a</b>) Geometry accuracy is calculated by the maximum percentage radius error. (<b>b</b>) Intensity uniformity is defined by the percentage intensity uniformity within the large ROI of the disk. (<b>c</b>) Percentage ghosting is calculated using the average intensity of four background ROIs and a center foreground ROI. (<b>d</b>) Sharpness is measured by the full-width-half-maximum of the edge spread function. (<b>e</b>) SNR is calculated as the mean intensity divided by the standard deviation of the noise adjusted by a factor of <math display="inline"><semantics> <mrow> <msqrt> <mn>2</mn> </msqrt> </mrow> </semantics></math>. (<b>f</b>) Resolution is evaluated by peak separability. (<b>g</b>) Low-contrast detectability is quantified by the number of completely detected spokes. The red dots illustrate detected low-contrast disk locations using a template matching method.</p>
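Two of the caption's metrics can be sketched directly. The SNR definition with the √2 factor matches the standard two-acquisition difference method, and the ghosting formula below is the usual ACR-style combination of four background ROIs and one foreground ROI; the exact ROI placement in the paper may differ, so treat the function signatures as illustrative.

```python
import numpy as np

def snr_from_difference(img1, img2, roi):
    """SNR via two repeated acquisitions: mean signal in the ROI divided by
    the noise std estimated from the image difference. The difference of two
    equal-noise images has sqrt(2) times the single-image noise std, hence
    the sqrt(2) adjustment mentioned in the caption."""
    diff = img1.astype(float) - img2.astype(float)
    return img1[roi].mean() / (diff[roi].std() / np.sqrt(2))

def percentage_ghosting(img, fg, top, bottom, left, right):
    """ACR-style percentage ghosting from one foreground ROI (fg) and four
    background ROIs, each given as a boolean mask."""
    t, b, l, r = (img[m].mean() for m in (top, bottom, left, right))
    return 100.0 * abs(((t + b) - (l + r)) / (2.0 * img[fg].mean()))

rng = np.random.default_rng(0)
roi = np.zeros((64, 64), dtype=bool)
roi[16:48, 16:48] = True
scan1 = 100.0 + rng.normal(0.0, 5.0, (64, 64))   # two repeats, noise std 5
scan2 = 100.0 + rng.normal(0.0, 5.0, (64, 64))
print(snr_from_difference(scan1, scan2, roi))    # ~20 (i.e., 100 / 5)
```

Boolean masks keep the ROI geometry independent of the metric code, so the same functions apply to disk, resolution, and low-contrast phantoms alike.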
F">
Figure 3
<p>Example MR reconstruction network: AUTOMAP. Networks to estimate real and imaginary image components were trained separately. Both networks used the same hyperparameters as the original AUTOMAP structure. Distinct AUTOMAP networks were trained with M4Raw fully sampled and undersampled data and FastMRI fully sampled and undersampled data. Note that the digital phantom dataset was not included in training. The arrows suggest the flow of data.</p>
F">
Figure 4
<p>Representative reconstructed phantom images using M4Raw-trained AUTOMAP networks. Reference phantom image (iFFT), AUTOMAP reconstructed images, and the corresponding residuals from fully sampled (AUTOMAP 1×)- and undersampled data (AUTOMAP 2×)-trained networks, for test set SNR levels of 12.5 and 25, are displayed. Resolution phantoms were enlarged 4 times and residual image intensity was multiplied by 5 for better visualization.</p>
F">
Figure 5
<p>Boxplots of the phantom-based evaluation results across all phantoms for reference image, M4Raw fully sampled k-space-trained network (m4raw 1×), and undersampled k-space-trained network (m4raw 2×): (<b>a</b>) geometric accuracy; (<b>b</b>) intensity uniformity; (<b>c</b>) percentage ghosting; (<b>d</b>) sharpness; (<b>e</b>) signal-to-noise ratio; (<b>f</b>) high-contrast resolution (note that high-contrast resolution in horizontal <span class="html-italic">x</span>-axis (left), and vertical <span class="html-italic">y</span>-axis (right) are plotted separately); and (<b>g</b>) low-contrast detectability. For each boxplot, the center line indicates the median, the box extends from the 1st quartile to the 3rd quartile, the whiskers reach 1.5 times the interquartile range away from the box, and circles indicate outliers.</p>
F">
Figure 6
<p>Boxplots of the phantom-based evaluation results across all phantoms reconstructed from the M4Raw fully sampled k-space-trained network at two SNR levels of 12.5 and 25: (<b>a</b>) geometric accuracy; (<b>b</b>) intensity uniformity; (<b>c</b>) percentage ghosting; (<b>d</b>) sharpness; (<b>e</b>) signal-to-noise ratio; (<b>f</b>) high-contrast resolution (note that high-contrast resolution in horizontal <span class="html-italic">x</span>-axis (left), and vertical <span class="html-italic">y</span>-axis (right) are plotted separately); and (<b>g</b>) low-contrast detectability. For each boxplot, the center line indicates the median, the box extends from the 1st quartile to the 3rd quartile, the whiskers reach 1.5 times the interquartile range away from the box, and circles indicate outliers.</p>
F">
Figure 7
<p>Boxplots of the phantom-based evaluation results across all phantoms for reference image, M4Raw fully sampled k-space-trained network, and FastMRI brain fully sampled k-space-trained network: (<b>a</b>) geometric accuracy; (<b>b</b>) intensity uniformity; (<b>c</b>) percentage ghosting; (<b>d</b>) sharpness; (<b>e</b>) signal-to-noise ratio; (<b>f</b>) high-contrast resolution (note that high-contrast resolution in horizontal x-axis (left), and vertical y-axis (right) are plotted separately; and (<b>g</b>) low-contrast detectability. For each boxplot, the center line indicates the median, the box extends from the 1st quartile to the 3rd quartile, the whiskers reach 1.5 times the interquartile range away from the box, and circles indicate outliers.</p>
F">
Figure 8
<p>Example reconstructed brain images and conventional evaluation metrics (SSIM, PSNR, and RMSE) for networks trained with fully sampled and undersampled M4Raw and FastMRI datasets. (<b>a</b>) Reconstructed images from M4Raw brain test set using the model trained on M4Raw data; (<b>b</b>) Boxplots showing SSIM, PSNR, and RMSE results for M4Raw-trained model; (<b>c</b>) Reconstructed images from the FastMRI brain test set using the model trained on FastMRI data; (<b>d</b>) Boxplots showing SSIM, PSNR, and RMSE results for the FastMRI-trained model. For each boxplot, the center line indicates the median, the box extends from the 1st quartile to the 3rd quartile, the whiskers reach 1.5 times the interquartile range away from the box, and circles indicate outliers.</p>
">
30 pages, 10517 KiB  
Article
Ultra-High Contrast MRI: Using Divided Subtracted Inversion Recovery (dSIR) and Divided Echo Subtraction (dES) Sequences to Study the Brain and Musculoskeletal System
by Daniel Cornfeld, Paul Condron, Gil Newburn, Josh McGeown, Miriam Scadeng, Mark Bydder, Mark Griffin, Geoffrey Handsfield, Meeghage Randika Perera, Tracy Melzer, Samantha Holdsworth, Eryn Kwon and Graeme Bydder
Bioengineering 2024, 11(5), 441; https://doi.org/10.3390/bioengineering11050441 - 29 Apr 2024
Cited by 2 | Viewed by 1525
Abstract
Divided and subtracted MRI is a novel image processing technique in which the difference of two images is divided by their sum. When the sequence parameters are chosen properly, this results in images with a high T1 or T2 weighting over a small range of tissues with specific T1 and T2 values. In the T1 domain, we describe the implementation of the divided Subtracted Inversion Recovery (dSIR) sequence, which is used to image very small changes in T1 from normal in white matter. dSIR has shown widespread changes in otherwise normal-appearing white matter in patients suffering from mild traumatic brain injury (mTBI), substance abuse, and ischemic leukoencephalopathy. It can also be targeted to measure small changes in T1 from normal in other tissues. In the T2 domain, we describe the divided echo subtraction (dES) sequence, which is used to image musculoskeletal tissues with a very short T2*. These tissues include fascia, tendons, and aponeuroses. In this manuscript, we explain how this contrast is generated, review how these techniques are used in our research, and discuss the current challenges and limitations of the technique.
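The divide-and-subtract operation the abstract describes is simple to state: given two source images S1 and S2 (two inversion times for dSIR, two echo times for dES), the output is (S1 − S2)/(S1 + S2). A minimal sketch, assuming magnitude inputs; the epsilon guard against division by zero is an implementation detail of this sketch, not from the paper.

```python
import numpy as np

def divided_subtraction(s1, s2, eps=1e-9):
    """Divided-subtracted contrast: (S1 - S2) / (S1 + S2).

    This is the operation behind both dSIR (two inversion times) and dES
    (two echo times). The result is bounded to [-1, 1], and a small
    difference between the two source images maps to a large fraction of
    that range wherever the summed signal is low.
    """
    s1 = np.abs(np.asarray(s1, dtype=float))   # magnitude images assumed
    s2 = np.abs(np.asarray(s2, dtype=float))
    return (s1 - s2) / np.maximum(s1 + s2, eps)

print(divided_subtraction(1.1, 0.9))  # ~0.1
```

Applied voxel-wise to two properly parameterized acquisitions, this ratio is what produces the ultra-high contrast over the targeted T1 or T2 range.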
(This article belongs to the Special Issue Novel MRI Techniques and Biomedical Image Processing)
Figure 1
<p>Coronal zTE image of the knee displayed with an inverted gray scale. Cortical bone (red arrow) is bright. Other short T<sub>2</sub> tissues, such as the medial collateral ligament (black arrow) and the menisci (yellow arrows), are also bright, but less so than the cortical bone.</p>
F">
Figure 2
<p>Tissue property filters for a T<sub>1</sub>-weighted image. (<b>a</b>) The proton density (<math display="inline"><semantics> <mrow> <msub> <mrow> <mi>ρ</mi> </mrow> <mrow> <mi>m</mi> </mrow> </msub> </mrow> </semantics></math>) filter for the fast spin echo sequence shows the signal due to r<sub>m</sub> in an image. (<b>b</b>) The T<sub>1</sub> filter for the fast spin echo sequence, with TR = 700 ms, shows the signal due to T<sub>1</sub> in an image. Most tissues sit on the steep part of the curve, which results in different signals from different tissues based on T<sub>1</sub> weighting. (<b>c</b>) The T<sub>2</sub> filter for the fast spin echo sequence, with TE = 10 ms, shows the signal due to T<sub>2</sub> in an image. Most tissues sit on the flat part of the curve, which results in little contrast between tissues based on T<sub>2</sub> weighting. Muscle and tendon are the exception. Changes in tendon T<sub>2</sub> are easily visualized on fast spin echo sequences, with TE = 10 ms, because the curve is steep at the T<sub>2</sub> of tendon.</p>
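The filters in this figure follow the standard (fast) spin-echo signal model S = ρ·(1 − exp(−TR/T1))·exp(−TE/T2). Below is a small sketch with the figure's T1-weighted settings (TR = 700 ms, TE = 10 ms); the tissue T1/T2 values are illustrative textbook-style numbers, not taken from the paper.

```python
import numpy as np

def fse_signal(rho, T1, T2, TR, TE):
    """Spin-echo signal model behind the tissue-property filters:
    S = rho * (1 - exp(-TR/T1)) * exp(-TE/T2)."""
    return rho * (1 - np.exp(-TR / T1)) * np.exp(-TE / T2)

# T1-weighted settings from the figure: TR = 700 ms, TE = 10 ms
wm = fse_signal(1.0, T1=850.0, T2=80.0, TR=700.0, TE=10.0)    # white matter (illustrative values)
gm = fse_signal(1.0, T1=1400.0, T2=100.0, TR=700.0, TE=10.0)  # gray matter (illustrative values)
print(wm > gm)  # shorter-T1 white matter is brighter, matching Figure 4
```

Sweeping one of T1 or T2 while holding the other parameters fixed reproduces the filter curves plotted in the figure: tissues on the steep part of a curve show contrast, tissues on the flat part do not.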
F">
F">
Figure 3
<p>Tissue property filters for a T<sub>2</sub>-weighted image. (<b>a</b>) The proton density (<math display="inline"><semantics> <mrow> <msub> <mrow> <mi>ρ</mi> </mrow> <mrow> <mi>m</mi> </mrow> </msub> </mrow> </semantics></math>) filter for the fast spin echo sequence shows the signal due to r<sub>m</sub> in an image. (<b>b</b>) The T<sub>1</sub> filter for the fast spin echo sequence, with TR = 5000 ms, shows the signal due to T<sub>1</sub> in an image. Most tissues sit on the flat part of the curve, which results in little signal difference between different tissues. (<b>c</b>) The T<sub>2</sub> filter for the fast spin echo sequence, with TE = 100 ms, shows the signal due to T<sub>2</sub> in an image. Most tissues sit on the steep part of the curve, which results in a contrast between tissues based on T<sub>2</sub> weighting.</p>
F">
F">
Figure 4
<p>Sagittal fast spin echo image of the brain with TR = 700 ms and TE = 10 ms. Gray matter (between the red arrows) is darker than the subcortical white matter (black asterisk) and corpus callosum (yellow arrow). Fluid (white asterisk) is even darker. This parallels the curve in <a href="#bioengineering-11-00441-f002" class="html-fig">Figure 2</a>b.</p>
F">
Figure 5
<p>Inversion recovery T<sub>1</sub> filter. (<b>a</b>) T<sub>1</sub> tissue property filter for the IR sequence with TI = 1100 ms and TR = 5000 ms. Tissues with T<sub>1</sub> = 1594 ms are nulled by these parameters. The slope of the curve in the region of most tissues of interest is negative, so that an increase in T<sub>1</sub> results in a decreased signal. This produces an image like an SE T<sub>1</sub>-weighted image. The slope of the left half of the filter is steeper than the slope of the plot in <a href="#bioengineering-11-00441-f002" class="html-fig">Figure 2</a>b, resulting in increased contrast. (<b>b</b>) Sagittal fast spin echo image of the brain with TR = 5000 ms and TE = 1000 ms. Gray matter (between the red arrows) is darker than the subcortical white matter (black asterisk) and corpus callosum (yellow arrow). Fluid (white asterisk) is even darker. This parallels the curve in (<b>a</b>). There is increased contrast compared to <a href="#bioengineering-11-00441-f004" class="html-fig">Figure 4</a>, which reflects the larger difference in signal between gray and white matter in (<b>a</b>) than in <a href="#bioengineering-11-00441-f002" class="html-fig">Figure 2</a>b.</p>
F">
Figure 6
<p>Subtracted inversion recovery (SIR) sequence. (<b>a</b>) T<sub>1</sub> tissue property filter for the SIR sequence. The red curve uses a TI<sub>short</sub> designed to null white matter. The blue curve uses a TI<sub>long</sub> designed to null gray matter. The middle domain (mD) is the range of T<sub>1</sub>s between the tissues nulled by TI<sub>short</sub> and TI<sub>long</sub>. The green curve is the tissue filter for the SIR sequence and is the blue curve subtracted from the red curve. The slope of the green curve at the T<sub>1</sub> of white matter is nearly two times the maximum slope of the red or blue curves. (<b>b</b>) Axial fast spin echo inversion recovery image, with TR = 5000 ms and TI = 580 ms, designed to null white matter. The slope of the T<sub>1</sub> filter to the right of white matter (red curve in (<b>a</b>)) is reversed compared to the filters in <a href="#bioengineering-11-00441-f005" class="html-fig">Figure 5</a>a and <a href="#bioengineering-11-00441-f002" class="html-fig">Figure 2</a>b. Increases in T<sub>1</sub> result in increased signal, and gray matter (black asterisk) is brighter than white matter (white asterisk). Fluid (red asterisk) is brighter than gray matter. (<b>c</b>) Axial fast spin echo inversion recovery image, with TR = 5000 ms and TI = 970 ms, designed to null gray matter. The slope of the T<sub>1</sub> filter to the right of the white matter (blue curve in (<b>a</b>)) is the same compared to the filters in <a href="#bioengineering-11-00441-f005" class="html-fig">Figure 5</a>a and <a href="#bioengineering-11-00441-f002" class="html-fig">Figure 2</a>b. Increases in T<sub>1</sub> result in decreased signal, and gray matter (black asterisk) is darker than white matter (white asterisk). Fluid (red asterisk) is brighter than gray matter.</p>
F">
Figure 7
<p>Sagittal subtracted inversion recovery (SIR) images in a patient with multiple sclerosis (MS). (<b>a</b>) Sagittal SIR image in an asymptomatic patient with MS presenting for a routine follow-up using a wide mD. TR = 5000. TI<sub>short</sub> = 450 to null white matter. TI<sub>long</sub> = 850 to null gray matter. This is considered a wide mD image. The normal white matter is black. A “Dawson’s finger” (red arrow) is seen as an increased signal. Small plaques (white arrows) are also identified as areas of increased signal. The increased signal is due to increased T<sub>1</sub> in the abnormal white matter. The contrast on the image is described by the green curve in <a href="#bioengineering-11-00441-f006" class="html-fig">Figure 6</a>a. (<b>b</b>) Sagittal T<sub>2</sub>-FLAIR image in the same patient as in (<b>a</b>). The “Dawson’s finger” and small plaques are also seen as areas of increased signal, only the signal is due to increases in white matter T<sub>2</sub>. All the plaques seen on the SIR were also seen on the T<sub>2</sub>-FLAIR. (<b>c</b>) Sagittal inversion recovery fast spin echo T<sub>1</sub> image in the same patient as in (<b>a</b>,<b>b</b>). The Dawson’s finger and small plaques are dark compared to normal white matter, as per the curve shown in <a href="#bioengineering-11-00441-f005" class="html-fig">Figure 5</a>a. This contrast is due to increases in white matter T<sub>1</sub>. Compare the contrast between normal and abnormal white matter with the image in (<b>a</b>). The abnormal white matter is more conspicuous in (<b>a</b>). This is because the maximum slope of the SIR filter (the green curve in <a href="#bioengineering-11-00441-f006" class="html-fig">Figure 6</a>a) is nearly twice that of the IR filter (red and blue curves in <a href="#bioengineering-11-00441-f006" class="html-fig">Figure 6</a>a and blue curve in <a href="#bioengineering-11-00441-f005" class="html-fig">Figure 5</a>a). 
See <a href="#bioengineering-11-00441-t001" class="html-table">Table 1</a>.</p>
Full article ">Figure 8
<p>Divided Subtracted Inversion Recovery (dSIR) T<sub>1</sub> filter. (<b>a</b>) The red curve is the filter for an inversion recovery (IR) sequence, with TI<sub>short</sub> chosen to null white matter. The blue curve is the filter for an IR sequence, with TI<sub>long</sub> chosen to null gray matter. The purple curve is the dSIR filter and is the division of the difference of the blue and red curves by their sum. The slope of the purple curve is 2.7 times the maximum slope of the curve in <a href="#bioengineering-11-00441-f005" class="html-fig">Figure 5</a>a (which is the same as the maximum slope of the red and blue curves). The middle domain is the range of T<sub>1</sub> values between the tissues nulled by TI<sub>short</sub> and TI<sub>long</sub>, which in this case are white and gray matter. (WM—white matter; GM—gray matter; mD—middle domain). (<b>b</b>) T<sub>1</sub> filter for the dSIR sequence with a narrow middle domain compared to the curve shown in (<b>a</b>). As the middle domain decreases, the slope of the purple curve increases. (<b>c</b>) T<sub>1</sub> filter for a dSIR sequence targeted at changes in the T<sub>1</sub> of gray matter. TI<sub>short</sub> is chosen to null signal from gray matter. TI<sub>long</sub> is chosen to be higher. The width of the middle domain determines the sensitivity of the sequence to small changes in T<sub>1</sub>. “WM” marks the T<sub>1</sub> of white matter. “GM” marks the T<sub>1</sub> of gray matter.</p>
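The dSIR filter can be written directly from the caption: the difference of the two IR filters divided by their sum. A Python sketch, assuming the simplified magnitude IR model |1 − 2·exp(−TI/T1) + exp(−TR/T1)| with unit proton density and the TI values used in the figures:

```python
import numpy as np

def ir_filter(t1, ti, tr=5000.0):
    # Magnitude IR signal for unit proton density (simplified model).
    return np.abs(1.0 - 2.0 * np.exp(-ti / t1) + np.exp(-tr / t1))

def dsir_filter(t1, ti_short=450.0, ti_long=850.0, tr=5000.0):
    # Difference of the short- and long-TI filters divided by their sum.
    s = ir_filter(t1, ti_short, tr)
    l = ir_filter(t1, ti_long, tr)
    return (s - l) / (s + l + 1e-12)  # small epsilon avoids division by zero
```

Because the denominator normalizes the signal, the filter swings from about −1 at the T<sub>1</sub> nulled by TI<sub>short</sub> to about +1 at the T<sub>1</sub> nulled by TI<sub>long</sub>, and narrowing the middle domain steepens the transition, consistent with panel (b).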
Figure 8 Cont.">
Full article ">Figure 9
<p>Divided Subtracted Inversion Recovery (dSIR) images. (<b>a</b>) Sagittal dSIR image of the same slice and patient as in <a href="#bioengineering-11-00441-f007" class="html-fig">Figure 7</a>a–c. TR = 5000 ms. TI<sub>short</sub> = 450 ms to null white matter. TI<sub>long</sub> = 850 ms to null gray matter. TE = 7 ms. This is considered a wide mD image. The normal white matter is black. A “Dawson’s finger” (red arrow) is seen as an area of increased signal. Small plaques (white arrows) are also identified as areas of increased signal. The increased signal is due to increased T<sub>1</sub> in the abnormal white matter. The contrast on the image is described by the purple curve in <a href="#bioengineering-11-00441-f008" class="html-fig">Figure 8</a>a. The contrast between the normal and abnormal white matter is 2.7 times that of the SIR image in <a href="#bioengineering-11-00441-f005" class="html-fig">Figure 5</a> and 5 times that of the IR image in <a href="#bioengineering-11-00441-f007" class="html-fig">Figure 7</a>c. (<b>b</b>) Axial narrow mD dSIR image in a healthy volunteer. TR = 5000 ms. TI<sub>short</sub> = 350 ms. TI<sub>long</sub> = 500 ms. TE = 7 ms. The normal white matter is black. Normal gray matter has intermediate signal. There is a high-signal boundary between the gray and white matter, because the tissue filter (purple graph in <a href="#bioengineering-11-00441-f008" class="html-fig">Figure 8</a>b) has a maximum between the T<sub>1</sub> values of white matter and gray matter. Note that the gray matter is not as bright as on the wide mD dSIR image in (<b>a</b>). Compare the y-axis values (signal) of the purple curves in <a href="#bioengineering-11-00441-f008" class="html-fig">Figure 8</a>a,b at the T<sub>1</sub> of gray matter.</p>
Full article ">Figure 10
<p>Divided echo subtraction (dES) T<sub>2</sub>* tissue filters. (<b>a</b>) T<sub>2</sub>* filter for ultrashort and short TE sequences. The red curve is for an ultrashort TE sequence with TE = 0.05 ms. The blue curve is for a short TE sequence with TE = 2.2 ms. (<b>b</b>) T<sub>2</sub>* filter for the echo subtraction (ES) sequence. The green curve is the difference of the red and blue curves from (<b>a</b>). The subtraction increases the contrast between ultrashort T<sub>2</sub>* tissues and short/normal T<sub>2</sub>* tissues. (<b>c</b>) T<sub>2</sub>* filter for divided echo subtraction (dES) sequence. The purple curve is the difference of the red and blue curves divided by their sum. The dES filter further increases the contrast between ultrashort T<sub>2</sub>* tissues and short/normal T<sub>2</sub>* tissues.</p>
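The ES and dES filters follow the same difference and difference-over-sum pattern, here applied to T<sub>2</sub>* decay. A short sketch assuming a mono-exponential decay model exp(−TE/T2*) with unit proton density and the TE values from panel (a):

```python
import numpy as np

def te_filter(t2s, te):
    # T2* tissue filter for a given echo time, unit proton density.
    return np.exp(-te / t2s)

def es_filter(t2s, te_ultra=0.05, te_short=2.2):
    # Echo subtraction: short-TE image subtracted from ultrashort-TE image.
    return te_filter(t2s, te_ultra) - te_filter(t2s, te_short)

def des_filter(t2s, te_ultra=0.05, te_short=2.2):
    # Divided echo subtraction: the difference divided by the sum.
    a = te_filter(t2s, te_ultra)
    b = te_filter(t2s, te_short)
    return (a - b) / (a + b)
```

For an ultrashort-T<sub>2</sub>* tissue (e.g., T<sub>2</sub>* ≈ 0.5 ms) the dES filter approaches 1 while long-T<sub>2</sub>* tissues stay near 0, and the division boosts contrast beyond plain subtraction, as panels (b,c) illustrate.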
Figure 10 Cont.">
Full article ">Figure 11
<p>Divided Subtracted Inversion Recovery (dSIR) in a patient with multiple sclerosis (MS). T<sub>2</sub>-FLAIR (<b>left</b>), inversion recovery (IR) T<sub>1</sub>-weighted (<b>middle</b>), and wide-domain dSIR with TI<sub>short</sub> = 450 ms and TI<sub>long</sub> = 850 ms (<b>right</b>) images through the pons in a patient with MS. A large plaque is clearly seen in the left hemipons on the dSIR image (red arrow in the image on the far right). The contrast in this image is due to changes in white matter T<sub>1</sub>. The change in T<sub>1</sub> is insufficient to cause noticeable contrast on the IR T<sub>1</sub> image (<b>middle</b>). The change in T<sub>2</sub> is insufficient to cause noticeable contrast on the T<sub>2</sub>-FLAIR image (<b>left</b>).</p>
Full article ">Figure 12
<p>Divided Subtracted Inversion Recovery (dSIR) in a patient with multiple sclerosis (MS). T<sub>2</sub>-FLAIR (<b>left</b>), inversion recovery (IR) T<sub>1</sub>-weighted (<b>middle</b>), and wide-domain dSIR with TI<sub>short</sub> = 450 ms and TI<sub>long</sub> = 850 ms (<b>right</b>) images through the upper corona radiata in a patient with MS. Three plaques are seen on the T<sub>2</sub>-FLAIR and IR T<sub>1</sub> images (red arrows). More plaques are seen on the dSIR image (black arrows). The plaque in the left frontal white matter is seen on the dSIR image (yellow arrow) but, due to the high signal etching along its margins, could easily be mistaken for cortex.</p>
Full article ">Figure 13
<p>Divided Subtracted Inversion Recovery (dSIR) in a patient with multiple sclerosis (MS). Three axial narrow middle domain images in a patient with an acute MS flare at the level of the centrum semiovale (<b>left</b>), corona radiata (<b>middle</b>), and basal ganglia (<b>right</b>). TI<sub>short</sub> = 350 ms. TI<sub>long</sub> = 500 ms. TE = 7 ms, TR = 5000 ms. The white matter is not black as in <a href="#bioengineering-11-00441-f009" class="html-fig">Figure 9</a>b. There is widespread increased signal, though not the “white out” sign described in <a href="#bioengineering-11-00441-f014" class="html-fig">Figure 14</a>. This is an “intermediate” appearance and is not considered normal.</p>
Full article ">Figure 14
<p>Normal and abnormal divided Subtracted Inversion Recovery (dSIR) images. Narrow middle domain images in three patients at the level of the centrum semiovale. TI<sub>short</sub> = 350 ms. TI<sub>long</sub> = 500 ms. TE = 7 ms, TR = 5000 ms. The left image shows an example of the “white out sign”, with a diffusely increased signal throughout the white matter. The center image shows a normal appearance. The white matter has a mildly increased signal that is normal because TI<sub>short</sub> = 350 ms nulls tissue with T<sub>1</sub> values less than that of white matter. The image on the right has an intermediate appearance, probably abnormal but not a “white out”.</p>
Full article ">Figure 15
<p>Divided Subtracted Inversion Recovery (dSIR) in a patient with Grinker’s myelinopathy. <b>Top row</b>: Narrow middle domain dSIR images at the level of the centrum semiovale (<b>left</b>), corona radiata (<b>middle</b>), and basal ganglia (<b>right</b>) in a patient with persistent symptoms following prolonged hypoxia due to a suicide attempt. TI<sub>short</sub> = 350 ms. TI<sub>long</sub> = 500 ms. TE = 7 ms, TR = 5000 ms. There is a diffuse “white out”. <b>Bottom row</b>: T<sub>2</sub>-FLAIR images at matching levels show normal-appearing white matter. Scans were obtained 9 months following injury.</p>
Full article ">Figure 16
<p>Divided Subtracted Inversion Recovery (dSIR) in a patient with Grinker’s myelinopathy. <b>Top row</b>: Narrow middle domain dSIR images at the level of the centrum semiovale (<b>left</b>), corona radiata (<b>middle</b>), and basal ganglia (<b>right</b>) in a patient with persistent symptoms following prolonged hypoxia due to drug overdose. TI<sub>short</sub> = 350 ms. TI<sub>long</sub> = 500 ms. TE = 7 ms, TR = 5000 ms. There is widespread “white out”, with some sparing in the deep frontal lobe white matter. <b>Bottom row</b>: T<sub>2</sub>-FLAIR images at matching levels show normal-appearing white matter. Scans were obtained 2 years following injury.</p>
Full article ">Figure 17
<p>Two boys with mild head trauma. <b>Top row</b>: Narrow middle domain divided Subtracted Inversion Recovery (dSIR) images at the level of the centrum semiovale (<b>left</b>), corona radiata (<b>middle</b>), and basal ganglia (<b>right</b>) in two young men who experienced mild head trauma in the same rugby match. TI<sub>short</sub> = 350 ms. TI<sub>long</sub> = 500 ms. TE = 7 ms, TR = 5000 ms. Images were obtained within 5 days of injury. The player shown in the <b>top row</b> had symptoms of concussion at the time of imaging, and a “white out” sign is present. The player shown in the <b>bottom row</b> was asymptomatic, and the images appear normal.</p>
Full article ">Figure 18
<p>Divided Subtracted Inversion Recovery (dSIR) in a methamphetamine user. Narrow middle domain dSIR images at the level of the centrum semiovale (<b>left</b>), corona radiata (<b>middle</b>), and basal ganglia (<b>right</b>) in a volunteer immediately after a methamphetamine binge (<b>top row</b>) and 4 months into abstinence (<b>bottom row</b>). TI<sub>short</sub> = 350 ms. TI<sub>long</sub> = 500 ms. TE = 7 ms, TR = 5000 ms. The <b>top row</b> images show the “white out” sign, indicating diffuse mild white matter T<sub>1</sub> elevation. The signal in the white matter partially normalizes on the bottom row. The appearance is closer to intermediate than normal, but there is definite improvement.</p>
Full article ">Figure 19
<p>Divided Subtracted Inversion Recovery (dSIR) in a patient with a mild traumatic brain injury. Narrow middle domain dSIR images at the level of the centrum semiovale (<b>left</b>), corona radiata (<b>middle</b>), and basal ganglia (<b>right</b>) in a volunteer within five days of an mTBI (<b>top row</b>) and two weeks later (<b>bottom row</b>). TI<sub>short</sub> = 350 ms. TI<sub>long</sub> = 500 ms. TE = 7 ms, TR = 5000 ms. The <b>top row</b> images show the “white out” sign, indicating diffuse mild white matter T<sub>1</sub> elevation. The signal in the white matter normalizes on the bottom row, where it appears normal.</p>
Full article ">Figure 20
<p>Divided Subtracted Inversion Recovery (dSIR) T<sub>1</sub> filter with the shorter inversion time chosen too high. The red and blue curves are the T<sub>1</sub> filters for IR sequences designed to null white (red curve) and gray (blue curve) matter. The TI for the red curve has been chosen too high. Small increases in T<sub>1</sub> from normal in the white matter will result in decreased rather than increased signal on the dSIR filter (purple curve). GM—T<sub>1</sub> value of gray matter. WM—T<sub>1</sub> value of white matter.</p>
Full article ">Figure 21
<p>Divided echo subtraction (dES). (<b>a</b>) Axial ultrashort TE image through the lower leg with TE = 0.03 ms. Cortical bone (red arrows) and aponeuroses (purple arrows) are dark. The fascial layers (yellow arrows) are thin and poorly seen. (<b>b</b>) The same axial ultrashort TE image through the lower leg as in (<b>a</b>) displayed with an inverted gray scale. Cortical bone (red arrows) and aponeuroses (purple arrows) are bright. The fascial layers (yellow arrows) are thin and poorly seen. (<b>c</b>) Axial ES image created by subtracting a TE = 2.2 ms image from the TE = 0.03 ms image. The contrast between aponeuroses (purple arrows), fascia (yellow arrows), and muscle is increased compared to (<b>b</b>). (<b>d</b>) Axial dES image created by dividing the difference of the TE = 2.2 ms and TE = 0.03 ms images by their sum. The contrast between aponeuroses (purple arrows), fascia (yellow arrows), and muscle is improved compared to (<b>b</b>,<b>c</b>).</p>
Figure 21 Cont.">
15 pages, 7147 KiB  
Article
A Novel Mis-Seg-Focus Loss Function Based on a Two-Stage nnU-Net Framework for Accurate Brain Tissue Segmentation
by Keyi He, Bo Peng, Weibo Yu, Yan Liu, Surui Liu, Jian Cheng and Yakang Dai
Bioengineering 2024, 11(5), 427; https://doi.org/10.3390/bioengineering11050427 - 26 Apr 2024
Viewed by 1582
Abstract
Brain tissue segmentation plays a critical role in the diagnosis, treatment, and study of brain diseases. Accurately identifying tissue boundaries is essential for improving segmentation accuracy. However, distinguishing boundaries between different brain tissues can be challenging, as they often overlap. Existing deep learning methods primarily calculate the overall segmentation results without adequately addressing local regions, leading to error propagation and mis-segmentation along boundaries. In this study, we propose a novel mis-segmentation-focused loss function based on a two-stage nnU-Net framework. Our approach aims to enhance the model’s ability to handle ambiguous boundaries and overlapping anatomical structures, thereby achieving more accurate brain tissue segmentation results. Specifically, the first stage targets the identification of mis-segmentation regions using a global loss function, while the second stage involves defining a mis-segmentation loss function to adaptively adjust the model, thus improving its capability to handle ambiguous boundaries and overlapping anatomical structures. Experimental evaluations on two datasets demonstrate that our proposed method outperforms existing approaches both quantitatively and qualitatively. Full article
(This article belongs to the Special Issue Novel MRI Techniques and Biomedical Image Processing)
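As a rough illustration of the second-stage idea (not the paper's exact formulation), a loss can up-weight voxels that the first stage mis-segmented; the function name, the plain cross-entropy form, and the weighting factor `w_mis` below are all illustrative assumptions:

```python
import numpy as np

def mis_seg_weighted_ce(probs, labels, stage1_pred, w_mis=2.0):
    # probs: (C, N) softmax outputs; labels, stage1_pred: (N,) class indices.
    mis = stage1_pred != labels                    # stage-1 mis-segmented voxels
    w = np.where(mis, w_mis, 1.0)                  # up-weight those voxels
    p_true = probs[labels, np.arange(labels.size)] # probability of the true class
    return float(np.mean(-w * np.log(p_true + 1e-8)))
```

When the stage-1 prediction is perfect the loss reduces to ordinary cross-entropy; boundary voxels that were mis-segmented contribute more gradient, which is the "focus" mechanism the abstract describes.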
Figure 1
<p><b>Overall architecture of the multi-label brain tissue segmentation method.</b> It includes the enhanced nnU-Net main network framework (comprising nnU-Net preprocessing, an enhanced nnU-Net network, and nnU-Net post-processing), a description of the network framework, and the mis-segmentation (MS) region extraction section.</p>
Full article ">Figure 2
<p><b>Visualization of the segmentation results on the dHCP dataset</b>. Advanced deep learning brain tissue segmentation methods were compared with the nnU-Net baseline network and our proposed method. Each plane demonstrates the segmentation results of different methods, with the mis-segmentation regions (shown in red) displayed beneath the segmentation outcomes.</p>
Full article ">Figure 3
<p><b>Visualization of the segmentation results on the OASIS dataset</b>. Advanced deep learning brain tissue segmentation methods were compared with the nnU-Net baseline network and our proposed method. Each plane demonstrates the segmentation results of different methods, with the mis-segmentation regions (shown in red) displayed beneath the segmentation outcomes.</p>
Full article ">Figure 4
<p><b>Experimental results with different weights.</b> The ordinate is the Dice evaluation metric; results are shown for nine brain tissue types and their average.</p>
Full article ">Figure 5
<p><b>Comparison of the mis-segmentation regions in different epochs of the two stages on the dHCP dataset.</b> Three types of brain tissues (hippocampus, ventricle, and white matter) and the whole brain are visualized. The last epoch is the final round result of this stage. The red region is the mis-segmentation region.</p>
">
16 pages, 4118 KiB  
Article
Brain Age Prediction Using Multi-Hop Graph Attention Combined with Convolutional Neural Network
by Heejoo Lim, Yoonji Joo, Eunji Ha, Yumi Song, Sujung Yoon and Taehoon Shin
Bioengineering 2024, 11(3), 265; https://doi.org/10.3390/bioengineering11030265 - 8 Mar 2024
Cited by 2 | Viewed by 1967
Abstract
Convolutional neural networks (CNNs) have been used widely to predict biological brain age based on brain magnetic resonance (MR) images. However, CNNs focus mainly on spatially local features and their aggregates and barely on the connective information between distant regions. To overcome this issue, we propose a novel multi-hop graph attention (MGA) module that exploits both the local and global connections of image features when combined with CNNs. After insertion between convolutional layers, MGA first converts the convolution-derived feature map into graph-structured data by using patch embedding and embedding-distance-based scoring. Multi-hop connections between the graph nodes are modeled by using the Markov chain process. After performing multi-hop graph attention, MGA re-converts the graph into an updated feature map and transfers it to the next convolutional layer. We combined the MGA module with sSE (spatial squeeze and excitation)-ResNet18 for our final prediction model (MGA-sSE-ResNet18) and performed various hyperparameter evaluations to identify the optimal parameter combinations. With 2788 three-dimensional T1-weighted MR images of healthy subjects, we verified the effectiveness of MGA-sSE-ResNet18 with comparisons to four established, general-purpose CNNs and two representative brain age prediction models. The proposed model yielded an optimal performance with a mean absolute error of 2.822 years and Pearson’s correlation coefficient (PCC) of 0.968, demonstrating the potential of the MGA module to improve the accuracy of brain age prediction. Full article
(This article belongs to the Special Issue Novel MRI Techniques and Biomedical Image Processing)
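The multi-hop idea can be sketched in a simplified form: one-hop attention is a row-stochastic transition matrix built from embedding similarity, and m-hop relations are its Markov-chain powers mixed with a decay factor. The dot-product similarity and the decay weighting `gamma` here are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_hop_attention(X, hops=3, gamma=0.5):
    # X: (N, d) node embeddings obtained from patch aggregation.
    scores = X @ X.T / np.sqrt(X.shape[1])  # embedding-similarity scores
    P = softmax(scores, axis=1)             # one-hop transitions (rows sum to 1)
    A = np.zeros_like(P)
    Pm = np.eye(len(X))
    for m in range(1, hops + 1):            # Markov-chain powers give m-hop transitions
        Pm = Pm @ P
        A += gamma ** m * Pm
    A /= A.sum(axis=1, keepdims=True)       # renormalize the mixture of hops
    return A @ X                            # updated node embeddings
```

Because the mixed attention matrix is row-stochastic, each updated node is a convex combination of the original nodes, so distant-but-connected regions can influence local features without leaving the feature range.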
Figure 1
<p>An overview of the proposed multi-hop graph attention (MGA) module. MGA contains <span class="html-italic">k</span> independent branches that handle patch embeddings of different sizes. Each branch constructs a node set by using patch embedding and aggregation, as well as an edge set based on similarity scores among the node embeddings. Graph attention is applied to update the node set with consideration of multi-hop inter-node relationships. The updated feature patches from different branches are ensembled to obtain a final MGA output.</p>
Full article ">Figure 2
<p>Illustration of the dynamics of multi-hop transitions across nodes in the graph structure.</p>
Full article ">Figure 3
<p>Overview of the proposed MGA-sSE-ResNet18 for brain age prediction. The dotted box, a key network component, consists of two convolutional layers followed by a parallel combination of the sSE and MGA modules and is repeated eight times. Residual connection applies across the two convolutional layers and across the combination of the sSE and MGA modules.</p>
Full article ">Figure 4
<p>Sample images of each age group of T1 brain MRI: (<b>A</b>) aged 20, (<b>B</b>) aged 30, (<b>C</b>) aged 40, (<b>D</b>) aged 50, and (<b>E</b>) aged 60.</p>
Full article ">Figure 5
<p>Performance of MGA-sSE-ResNet18 with different hop sizes (<span class="html-italic">m</span>) for brain age prediction. The test MAEs are represented in blue with the scale on the left vertical axis, and the training MAEs are represented in red with the scale on the right axis. The errors of MSA-sSE-ResNet18 are depicted as dotted lines, with blue and red representing the MAEs of test and training datasets, respectively.</p>
Full article ">Figure 6
<p>Effect of different <span class="html-italic">ϒ</span> and <span class="html-italic">k</span> combinations on brain age prediction of MGA-sSE-ResNet18. The prediction errors obtained using one branch (<span class="html-italic">k</span> = 1) or two branches (<span class="html-italic">k</span> = 2) are colored in gray and red, respectively.</p>
Full article ">Figure 7
<p>Effect of edge weight coefficient <span class="html-italic">β</span> on the performance of brain age prediction.</p>
Full article ">Figure 8
<p>Training and validation loss graph of MGA-sSE-ResNet18. From around 400 epochs onward, the validation loss converges at approximately 2.7 years, which is slightly smaller than the test loss (2.8 years reported in <a href="#bioengineering-11-00265-t003" class="html-table">Table 3</a>) and demonstrates effective generalization of our model for unseen data.</p>
Full article ">Figure 9
<p>Scatter plots of brain age prediction on test dataset from two different prediction models (backbone and the proposed model). The dotted blue lines represent the ideal prediction, where chronological ages equal predicted ages, and the red lines indicate the linear regressions fitted by model predictions.</p>
">
18 pages, 13118 KiB  
Article
Joint Image Reconstruction and Super-Resolution for Accelerated Magnetic Resonance Imaging
by Wei Xu, Sen Jia, Zhuo-Xu Cui, Qingyong Zhu, Xin Liu, Dong Liang and Jing Cheng
Bioengineering 2023, 10(9), 1107; https://doi.org/10.3390/bioengineering10091107 - 21 Sep 2023
Viewed by 2019
Abstract
Magnetic resonance (MR) image reconstruction and super-resolution are two prominent techniques to restore high-quality images from undersampled or low-resolution k-space data to accelerate MR imaging. Combining undersampled and low-resolution acquisition can further improve the acceleration factor. Existing methods often treat the techniques of image reconstruction and super-resolution separately or combine them sequentially for image recovery, which can result in error propagation and suboptimal results. In this work, we propose a novel framework for joint image reconstruction and super-resolution, aiming at efficient image recovery and fast imaging. Specifically, we designed a framework with a reconstruction module and a super-resolution module to formulate multi-task learning. The reconstruction module utilizes a model-based optimization approach, ensuring data fidelity with the acquired k-space data. Moreover, a deep spatial feature transform is employed to enhance the information transition between the two modules, facilitating better integration of image reconstruction and super-resolution. Experimental evaluations on two datasets demonstrate that our proposed method can provide superior performance both quantitatively and qualitatively. Full article
(This article belongs to the Special Issue Novel MRI Techniques and Biomedical Image Processing)
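The data-fidelity step in a model-based reconstruction module can be illustrated with a basic k-space data-consistency operation, a standard building block in accelerated MRI (sketched here in numpy with a single-coil, Cartesian-sampling assumption; the paper's DC layer may differ in detail):

```python
import numpy as np

def data_consistency(img_pred, k_acq, mask):
    # img_pred: current image estimate from the network.
    # k_acq: acquired (undersampled) k-space; mask: True where k-space was sampled.
    k_pred = np.fft.fft2(img_pred)
    k_dc = np.where(mask, k_acq, k_pred)  # keep acquired samples, fill in the rest
    return np.fft.ifft2(k_dc)
```

At sampled k-space locations the output always agrees with the measured data, so the network can only "invent" content at unsampled locations, which is what keeps the reconstruction faithful to the acquisition.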
Figure 1
<p>The overall architecture of our proposed network. RRDB: Residual in Residual Dense Block, SFT: Spatial Feature Transform, DC: Data Consistency.</p>
Full article ">Figure 2
<p>(<b>a</b>) The architecture of Residual in Residual Dense Block (RRDB). <math display="inline"><semantics> <mrow> <mi>β</mi> <mo>=</mo> <mn>0.2</mn> </mrow> </semantics></math> is a scaling parameter. (<b>b</b>) The architecture of Dense Block in RRDB. (<b>c</b>) The detailed structure of Spatial Feature Transform (SFT) layer.</p>
Full article ">Figure 3
<p>The cropping masks, the undersampling masks in LR k-space, and examples of an undersampled LR MR image and a fully sampled HR MR image from the brain dataset (upper) and the VWI dataset (lower).</p>
Full article ">Figure 4
<p>Zoomed-in view of the ablation experiment. The yellow arrows point to the fake structures in the images and the red arrows represent the fine details that can be recovered by our proposed method compared to other methods.</p>
Full article ">Figure 5
<p>Visual comparison of each method for a slice of axial view in the brain dataset.</p>
Full article ">Figure 6
<p>Visual comparison of each method for a slice of sagittal view in the brain dataset.</p>
Full article ">Figure 7
<p>Visual comparison of each method for a brain image in the VWI dataset.</p>
Full article ">Figure 8
<p>Visual comparison of each method for a neck image in the VWI dataset.</p>
Full article ">Figure 9
<p>Zoomed-in view of the brain dataset comparison experiment.</p>
Full article ">Figure 10
<p>Zoomed-in view of the VWI dataset comparison experiment.</p>
">
14 pages, 2538 KiB  
Article
MRI-Based Deep Learning Method for Classification of IDH Mutation Status
by Chandan Ganesh Bangalore Yogananda, Benjamin C. Wagner, Nghi C. D. Truong, James M. Holcomb, Divya D. Reddy, Niloufar Saadat, Kimmo J. Hatanpaa, Toral R. Patel, Baowei Fei, Matthew D. Lee, Rajan Jain, Richard J. Bruce, Marco C. Pinho, Ananth J. Madhuranthakam and Joseph A. Maldjian
Bioengineering 2023, 10(9), 1045; https://doi.org/10.3390/bioengineering10091045 - 5 Sep 2023
Cited by 5 | Viewed by 2780
Abstract
Isocitrate dehydrogenase (IDH) mutation status has emerged as an important prognostic marker in gliomas. This study sought to develop deep learning networks for non-invasive IDH classification using T2w MR images while comparing their performance to a multi-contrast network. Methods: Multi-contrast brain tumor MRI and genomic data were obtained from The Cancer Imaging Archive (TCIA) and The Erasmus Glioma Database (EGD). Two separate 2D networks were developed using nnU-Net, a T2w-image-only network (T2-net) and a multi-contrast network (MC-net). Each network was separately trained using TCIA (227 subjects) or TCIA + EGD data (683 subjects combined). The networks were trained to classify IDH mutation status and implement single-label tumor segmentation simultaneously. The trained networks were tested on over 1100 held-out datasets including 360 cases from UT Southwestern Medical Center, 136 cases from New York University, 175 cases from the University of Wisconsin–Madison, 456 cases from EGD (for the TCIA-trained network), and 495 cases from the University of California, San Francisco public database. Receiver operating characteristic (ROC) curves were used to calculate AUC values and assess classifier performance. Results: T2-net trained on TCIA and TCIA + EGD datasets achieved an overall accuracy of 85.4% and 87.6% with AUCs of 0.86 and 0.89, respectively. MC-net trained on TCIA and TCIA + EGD datasets achieved an overall accuracy of 91.0% and 92.8% with AUCs of 0.94 and 0.96, respectively. We developed reliable, high-performing deep learning algorithms for IDH classification using both a T2-image-only and a multi-contrast approach. The networks were tested on more than 1100 subjects from diverse databases, making this the largest study on image-based IDH classification to date. Full article
(This article belongs to the Special Issue Novel MRI Techniques and Biomedical Image Processing)
Figure 1
<p>Ground truth tumor masks. The green voxels represent IDH wildtype (values of 2). The red voxels represent IDH mutated (values of 1). The ground truth labels have the same mutation status for all voxels in each tumor.</p>
Full article ">Figure 2
<p>Overview of voxel-wise classification of IDH mutation status. Volumes were combined through dual-class fusion to remove false positives and create a tumor segmentation volume. Majority voting was applied across the voxels to predict the overall IDH mutation status.</p>
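Using the label convention from Figure 1 (1 = IDH-mutated, 2 = IDH-wildtype), the majority-voting step can be sketched as follows; the tie-breaking rule is an assumption, and the dual-class fusion step is omitted:

```python
import numpy as np

def majority_vote_idh(voxel_labels):
    # voxel_labels: per-voxel predictions over the segmented tumor volume,
    # 1 = IDH-mutated, 2 = IDH-wildtype, 0 = background.
    tumor = voxel_labels[voxel_labels > 0]
    n_mut = int(np.count_nonzero(tumor == 1))
    n_wt = int(np.count_nonzero(tumor == 2))
    return 1 if n_mut > n_wt else 2  # ties resolved to wildtype (an assumption)
```

The vote aggregates noisy voxel-wise predictions into a single subject-level mutation call, which is what the ROC analyses in Figure 3 evaluate.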
Full article ">Figure 3
<p>(<b>A</b>) ROC analysis for T2-net. (<b>B</b>) ROC analysis for MC-net.</p>
Full article ">Figure 4
<p>(<b>A</b>) Example voxel-wise segmentation for an IDH-mutated and IDH-wildtype tumor. T2 image (<b>a</b>). Ground truth segmentation (<b>b</b>). Voxel-wise predictions without DCF (<b>c</b>) and after DCF (<b>d</b>). Yellow arrows indicate false positives. Red voxels depict IDH-mutated class, and green voxels depict IDH wildtype (<b>B</b>).</p>
32 pages, 12976 KiB  
Article
A New Medical Analytical Framework for Automated Detection of MRI Brain Tumor Using Evolutionary Quantum Inspired Level Set Technique
by Saad M. Darwish, Lina J. Abu Shaheen and Adel A. Elzoghabi
Bioengineering 2023, 10(7), 819; https://doi.org/10.3390/bioengineering10070819 - 9 Jul 2023
Cited by 4 | Viewed by 2516
Abstract
Segmenting brain tumors in 3D magnetic resonance imaging (3D-MRI) accurately is critical for easing the diagnostic and treatment processes. In the field of energy functional theory-based methods for image segmentation and analysis, level set methods have emerged as a potent computational approach that has greatly aided in the advancement of the geometric active contour model. An important factor in reducing segmentation error and the number of required iterations when using the level set technique is the choice of the initial contour points, both of which are important when dealing with the wide range of sizes, shapes, and structures that brain tumors may take. To define the velocity function, conventional methods simply use the image gradient, edge strength, and region intensity. This article suggests a clustering method influenced by the Quantum Inspired Dragonfly Algorithm (QDA), a metaheuristic optimizer inspired by the swarming behaviors of dragonflies, to accurately extract initial contour points. The proposed model employs a quantum-inspired computing paradigm to stabilize the trade-off between exploitation and exploration, thereby compensating for any shortcomings of the conventional DA-based clustering method, such as slow convergence or falling into a local optimum. To begin, the quantum rotation gate concept can be used to relocate a colony of agents to a location where they can better achieve the optimum value. The main technique is then given a robust local search capacity by adopting a mutation procedure that enhances the swarm’s diversity. After a preliminary skull-stripping phase, in which the skull is removed from the brain images, tumor contours (edges) are determined with the help of QDA. An initial contour for the MRI series is then derived from these extracted edges. The final step is to use a level set segmentation technique to isolate the tumor area across all volume segments.
When applied to 3D-MRI images from the BraTS 2019 dataset, the proposed technique outperformed state-of-the-art approaches to brain tumor segmentation. Full article
(This article belongs to the Special Issue Novel MRI Techniques and Biomedical Image Processing)
Show Figures

Figure 1
<p>Voxel and slice in 3D MRI data. A slice is like a 2D image stored in a matrix of size M × N. The smallest unit of a slice is a voxel, i.e., a volumetric pixel with certain dimensions. MR data are a stack of 2D images acquired in 3D space, as if a camera were moved along one of the three spatial dimensions. If a person is lying on an MRI bed, the <span class="html-italic">z</span>-axis points upward. The axial plane corresponds to the XZ plane, the coronal plane to the XY plane, and the sagittal plane to the YZ plane.</p>
Figure 2
<p>Automatically segmenting brain tumors. The whole tumor (WT) class includes all visible labels (a union of the green, yellow, and red labels), the tumor core (TC) class is a union of red and yellow, and the enhancing tumor core (ET) class is shown in yellow (the contrast-enhancing part of the tumor). The predicted segmentation results match the ground truth well.</p>
Figure 3
<p>Demonstration of level set segmentation of white matter in a brain. An adaptive initial contouring method is performed to obtain an approximate circular contour of the tumor (red lines). Finally, the deformation-based level set segmentation automatically extracts the precise contours of tumors from each individual axial 2D MRI slice separately and independently. Temporal ordering is from left to right, top to bottom, to track the dynamic change of the contour of the tumor over different iterations (time).</p>
Figure 4
<p>The suggested QDA-based methodology for detecting brain tumors: (<b>Left</b>) flowchart, (<b>Right</b>) graphical representation.</p>
Figure 5
<p>A consequence of skull-stripping MRI on the brain. (<b>a</b>) Tissue from the initial MRI image of the brain, and (<b>b</b>) brain without the skull.</p>
Figure 6
<p>(<b>a</b>) Synthetic MR brain image, axial section, maximum intensity noise (5%); (<b>b</b>) image filtered with a fixed Gaussian window size; (<b>c</b>) image filtered with a decreasing window size at the same number of iterations. A Gaussian filter is a low-pass filter used to reduce noise (high-frequency components). The kernel only mildly blurs drastic intensity changes (edges), because pixels toward the center of the kernel carry more weight in the final value than those at the periphery.</p>
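As a minimal, self-contained illustration of the two points in the caption (low-pass noise suppression, and the effect of the window/sigma on edges), the sketch below filters a noisy synthetic image at two sigma values; the image and parameters are invented for demonstration and are unrelated to the authors' pipeline.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
clean = np.zeros((64, 64))
clean[16:48, 16:48] = 1.0                   # bright square with sharp edges
noisy = clean + 0.05 * rng.standard_normal(clean.shape)

mild = gaussian_filter(noisy, sigma=1.0)    # small window: edges well preserved
strong = gaussian_filter(noisy, sigma=4.0)  # large window: edges blurred too

# Noise shrinks inside the flat interior region after smoothing, while a
# larger sigma also flattens the intensity step at the square's border.
print(noisy[24:40, 24:40].std(), mild[24:40, 24:40].std())
```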
Figure 7
<p>Histogram equalization technique: (<b>a</b>) original image, (<b>b</b>) histogram of (<b>a</b>), (<b>c</b>) histogram-equalized version of (<b>a</b>), (<b>d</b>) histogram of (<b>c</b>).</p>
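The equalization step can be sketched with the textbook CDF-remapping form; the low-contrast test image below is synthetic, and this is not claimed to be the authors' exact procedure.

```python
import numpy as np

def equalize_histogram(img):
    """Classic histogram equalization for an 8-bit grayscale image: remap
    intensities so the cumulative distribution becomes approximately linear."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]

# A synthetic low-contrast image confined to [100, 140] spreads over [0, 255].
rng = np.random.default_rng(1)
img = rng.integers(100, 141, size=(32, 32), dtype=np.uint8)
eq = equalize_histogram(img)
print(img.min(), img.max(), eq.min(), eq.max())
```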
Figure 8
<p>Removal of noise in MRI images: (<b>a</b>) normal MRI, (<b>b</b>) noisy MRI, (<b>c</b>) denoised MRI, for both the T1 modality (upper row) and the T2 modality (lower row). The sigma of the Gaussian filter controls the variation around its mean value: the larger the sigma, the more variance is allowed around the mean; the smaller the sigma, the less.</p>
Figure 9
<p>QDA with various quantum rotation angles <span class="html-italic">θ</span>, which eventually guide the search toward the global optimum. During the search, a dragonfly individual can move toward several locally optimal solutions (local optima 1, 2, and 3) based on its inertia search direction or Lévy flight limitations; QDA replaces these two searching behaviors to escape from local solutions.</p>
Figure 10
<p>Brain axial section: gray matter and white matter. Micrograph showing normal white matter (left of image, lighter shade of pink) and normal grey matter (right of image, darker shade of pink). Grey matter is made up of neuronal cell bodies, while white matter primarily consists of myelinated axons. White matter is found closer to the center of the brain, whereas the outer cortex is mainly grey matter.</p>
Figure 11
<p>(<b>a</b>) Segmented cerebrospinal fluid (CSF), (<b>b</b>) segmented gray matter, and (<b>c</b>) segmented white matter. The three-dimensional T1 MRI brain image was considered with the following five layers: scalp, skull, cerebrospinal fluid (CSF), gray matter, and white matter.</p>
Figure 12
<p>K-means clustering flowchart. The k-means method aims to divide a set of <span class="html-italic">N</span> objects into <span class="html-italic">k</span> clusters, where each cluster is represented by the mean value of its objects.</p>
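The k-means loop in the caption (assign each object to the nearest centroid, then recompute each centroid as the mean of its objects) can be sketched as follows; the data and initialization are illustrative only, not the authors' clustering code.

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means: assign each point to its nearest centroid, then move
    each centroid to the mean of its assigned points, until convergence."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        new = np.array([points[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

# Two well-separated synthetic 2D clusters, 20 points each.
rng = np.random.default_rng(2)
pts = np.vstack([rng.normal(0.0, 0.3, (20, 2)), rng.normal(5.0, 0.3, (20, 2))])
labels, cents = kmeans(pts, k=2)
```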
Figure 13
<p>Flowchart of the Dragonfly algorithm. In a static swarm, dragonflies form sub-swarms and fly over different areas. This corresponds to exploration and helps the algorithm locate promising regions of the search space. In a dynamic swarm, on the other hand, dragonflies fly as one larger swarm in the same direction; this type of swarming corresponds to exploitation and helps the algorithm converge to the global best solution.</p>
Figure 14
<p>The updating of a quantum bit state vector, where <math display="inline"><semantics><mrow><msup><mrow><mfenced><mrow><msub><mi>α</mi><mi>i</mi></msub><mo>,</mo><msub><mi>β</mi><mi>i</mi></msub></mrow></mfenced></mrow><mi>T</mi></msup></mrow></semantics></math> and <math display="inline"><semantics><mrow><msup><mrow><mfenced><mrow><msub><mrow><mover><mi>α</mi><mo>´</mo></mover></mrow><mi>i</mi></msub><mo>,</mo><msub><mrow><mover><mi>β</mi><mo>´</mo></mover></mrow><mi>i</mi></msub></mrow></mfenced></mrow><mi>T</mi></msup></mrow></semantics></math> show the quantum bit state vector before and after the rotation gate updating of the <span class="html-italic">i</span>th quantum bit of chromosome; <math display="inline"><semantics><mrow><msub><mi>θ</mi><mi>i</mi></msub></mrow></semantics></math> shows the <span class="html-italic">i</span>th rotation angle to control the convergence rate. The update strategy of the quantum chromosome in the quantum rotation gate is to compare the fitness of the current individual with that of the optimal individual, select the better one, and then rotate to it.</p>
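The rotation gate update in the caption amounts to multiplying the quantum bit state vector by a 2D rotation matrix with angle θ. A minimal numeric sketch (illustrative only, not the paper's code):

```python
import numpy as np

def rotate_qubit(alpha, beta, theta):
    """Apply the quantum rotation gate to the state vector (alpha, beta):
    [alpha'; beta'] = [[cos t, -sin t], [sin t, cos t]] @ [alpha; beta].
    The rotation preserves the normalization alpha^2 + beta^2 = 1."""
    c, s = np.cos(theta), np.sin(theta)
    return c * alpha - s * beta, s * alpha + c * beta

# Start from the balanced superposition and rotate by a small angle,
# increasing the amplitude of the second basis state.
a, b = 1.0 / np.sqrt(2.0), 1.0 / np.sqrt(2.0)
a2, b2 = rotate_qubit(a, b, 0.1)
print(a2**2 + b2**2)  # normalization is preserved (up to floating point)
```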
Figure 15
<p>Level set function: an overview. The level set approach takes the original curve (the red one on the left) and builds it into a surface. That cone-shaped surface, shown in blue on the right, has a useful property: it intersects the XY plane exactly where the curve sits. The blue surface is called the level set function because it accepts as input any point in the plane and returns its height as output. The red front is called the zero level set because it is the collection of all points that are at height zero.</p>
Figure 16
<p>General image segmentation algorithm using the level set function. Given a region Ω with an edge Γ, the velocity <span class="html-italic">ν</span> of the edge between steps depends on the position, shape, time, and external conditions. The function <math display="inline"><semantics><mrow><mi>φ</mi><mfenced><mrow><mi>x</mi><mo>,</mo><mi>t</mi></mrow></mfenced></mrow></semantics></math>, where <span class="html-italic">x</span> is the position in Cartesian space and <span class="html-italic">t</span> is the time, describes the moving contour. <math display="inline"><semantics><mrow><mi>E</mi><mfenced><mi>φ</mi></mfenced></mrow></semantics></math> is the energy, <math display="inline"><semantics><mrow><msub><mi>R</mi><mi>p</mi></msub><mfenced><mi>φ</mi></mfenced></mrow></semantics></math> is the level set regularization term, <math display="inline"><semantics><mrow><msub><mi>L</mi><mi>p</mi></msub><mfenced><mi>φ</mi></mfenced></mrow></semantics></math> is minimized when the zero level contour is located at the object boundaries, and <math display="inline"><semantics><mrow><msub><mi>S</mi><mi>g</mi></msub><mfenced><mi>φ</mi></mfenced></mrow></semantics></math> is introduced to speed up the motion of the zero level contour during level set evolution.</p>
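A minimal sketch of level set evolution, using a signed-distance function whose zero level set is a circle; the regularization and edge-stopping terms described in the caption are omitted, and the constant-speed motion is illustrative only.

```python
import numpy as np

# Signed-distance level set of a circle of radius 20 on a 64 x 64 grid;
# the zero level set (phi = 0) is the contour.
n = 64
y, x = np.mgrid[:n, :n]
phi = np.sqrt((x - n / 2.0) ** 2 + (y - n / 2.0) ** 2) - 20.0

def evolve(phi, speed=1.0, dt=0.5, steps=10):
    """Evolve phi_t = speed * |grad(phi)|; with speed > 0 the zero level
    set moves inward, so the enclosed region {phi < 0} shrinks."""
    for _ in range(steps):
        gy, gx = np.gradient(phi)
        phi = phi + speed * dt * np.sqrt(gx ** 2 + gy ** 2)
    return phi

shrunk = evolve(phi)
print(np.sum(phi < 0), np.sum(shrunk < 0))  # the enclosed area shrinks
```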
Figure 17
<p>Example of level set brain tumor segmentation. (<b>a</b>) Original image with initial contours (red line) extracted by the QDA-influenced clustering method; (<b>b</b>) tumor segmented using the level set function.</p>
Figure 18
<p>Brain tumor segmentation: (first row) 2D slices; (second row) final segmentation using QDA (blue areas).</p>
Figure 18 Cont.
<p>Brain tumor segmentation: (first row) 2D slices; (second row) final segmentation using QDA (blue areas).</p>
Figure 19
<p>Brain tumor segmentation: (left column) 2D slices; (right column) final segmentation using QDA (blue areas).</p>
Figure 20
<p>MRI scans of different tumor types in different planes; the red circles highlight the tumor in each image. An example is shown for each tumor type in each plane. The first row shows meningioma, a tumor that arises from the meninges. The second row shows glioma, a tumor arising from glial cells. The third row shows pituitary tumors, unusual growths that develop in the pituitary gland. The columns from left to right show MRI scans in the axial, coronal, and sagittal planes.</p>
Figure 21
<p>(<b>A</b>) Original image, showing two tumors in a representative axial slice; (<b>B</b>) the detection result of our proposed method (green circles).</p>

Review

Jump to: Research, Other

27 pages, 15749 KiB  
Review
Emerging Trends in Magnetic Resonance Fingerprinting for Quantitative Biomedical Imaging Applications: A Review
by Anmol Monga, Dilbag Singh, Hector L. de Moura, Xiaoxia Zhang, Marcelo V. W. Zibetti and Ravinder R. Regatte
Bioengineering 2024, 11(3), 236; https://doi.org/10.3390/bioengineering11030236 - 28 Feb 2024
Cited by 1 | Viewed by 3146
Abstract
Magnetic resonance imaging (MRI) stands as a vital medical imaging technique, renowned for its ability to offer high-resolution images of the human body with remarkable soft-tissue contrast. This enables healthcare professionals to gain valuable insights into various aspects of the human body, including morphology, structural integrity, and physiological processes. Quantitative imaging provides compositional measurements of the human body, but, currently, either it takes a long scan time or is limited to low spatial resolutions. Undersampled k-space data acquisitions have significantly helped to reduce MRI scan time, while compressed sensing (CS) and deep learning (DL) reconstructions have mitigated the associated undersampling artifacts. Alternatively, magnetic resonance fingerprinting (MRF) provides an efficient and versatile framework to acquire and quantify multiple tissue properties simultaneously from a single fast MRI scan. The MRF framework involves four key aspects: (1) pulse sequence design; (2) rapid (undersampled) data acquisition; (3) encoding of tissue properties in MR signal evolutions or fingerprints; and (4) simultaneous recovery of multiple quantitative spatial maps. This paper provides an extensive literature review of the MRF framework, addressing the trends associated with these four key aspects. There are specific challenges in MRF for all ranges of magnetic field strengths and all body parts, which can present opportunities for further investigation. We aim to review the best practices in each key aspect of MRF, as well as for different applications, such as cardiac, brain, and musculoskeletal imaging, among others. A comprehensive review of these applications will enable us to assess future trends and their implications for the translation of MRF into these biomedical imaging applications. Full article
(This article belongs to the Special Issue Novel MRI Techniques and Biomedical Image Processing)
Show Figures

Figure 1
<p>Pipeline for MRF acquisition, reconstruction, and parametric maps. (<b>A</b>) The pseudo-random repetition time (TR) and flip angle (FA) trains that introduce incoherence into the acquisition. (<b>B</b>) The images reconstructed from the k-space data acquired in the acquisition step. Along the image-number dimension, the images are color coded according to their TR and FA values (annotated in A). (<b>C</b>) Dictionary simulation corresponding to the tissue properties in the region of interest. (<b>D</b>) Matching between the simulated dictionary (red line) and the acquired signal evolution (black line) of a voxel. (<b>E</b>) The parametric maps (<math display="inline"><semantics> <mrow> <msub> <mrow> <mi mathvariant="normal">T</mi> </mrow> <mrow> <mn>1</mn> </mrow> </msub> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mrow> <mi mathvariant="normal">T</mi> </mrow> <mrow> <mn>2</mn> </mrow> </msub> </mrow> </semantics></math>, and <math display="inline"><semantics> <mrow> <msub> <mrow> <mi mathvariant="normal">M</mi> </mrow> <mrow> <mn>0</mn> </mrow> </msub> </mrow> </semantics></math>) generated after all acquired voxels are matched against the simulated dictionary. The image is derived from [<a href="#B7-bioengineering-11-00236" class="html-bibr">7</a>].</p>
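Step (D), dictionary matching, is commonly implemented as a maximum normalized inner product between each measured signal evolution and the dictionary entries. The sketch below uses toy single-parameter T2-decay "fingerprints"; the values, time axis, and function name are invented for illustration, not taken from the reviewed work.

```python
import numpy as np

def match_fingerprints(signals, dictionary, params):
    """Match each measured signal evolution to the dictionary entry with the
    highest normalized inner product, and return that entry's parameters."""
    d_norm = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    s_norm = signals / np.linalg.norm(signals, axis=1, keepdims=True)
    best = np.abs(s_norm @ d_norm.T).argmax(axis=1)
    return params[best]

# Toy dictionary: pure T2-decay 'fingerprints' for three candidate T2 values.
t = np.linspace(0.0, 300.0, 200)                 # time axis in ms (invented)
t2_values = np.array([60.0, 80.0, 110.0])
dictionary = np.exp(-t[None, :] / t2_values[:, None])

# A noisy measurement generated from the T2 = 80 ms entry.
rng = np.random.default_rng(3)
noisy = dictionary[1] + 0.01 * rng.standard_normal(t.size)
print(match_fingerprints(noisy[None, :], dictionary, t2_values))
```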
Figure 2
<p>(<b>a</b>) The general configuration of different MRF reconstruction pipelines. (<b>b</b>) The conventional MRF reconstruction pipeline, with NUFFT used to reconstruct the image space and dictionary matching to recover the parametric maps. In (<b>c</b>), a model-based MRF reconstruction approach is shown, where the images are iteratively estimated from k-space, using image models, such as low-rank constraints. Dictionary matching is used to extract the parametric maps from the images. In (<b>d</b>), an unrolled network configuration for MRF reconstruction is shown, where an iterative-like structure is composed of a Bloch manifold projector module (BM), a learned decomposition module (CP), and a data-consistency module (DC). In (<b>e</b>), a mixed approach is shown, combining NUFFT to compute images and a deep learning network to produce parametric maps. The image was built from scratch but was inspired by [<a href="#B44-bioengineering-11-00236" class="html-bibr">44</a>,<a href="#B45-bioengineering-11-00236" class="html-bibr">45</a>].</p>
Figure 3
<p>Workflow for cardiac MRF (cMRF). The workflow comprises (<b>a</b>) ECG-triggered MRF acquisition with motion corrected MRF image reconstruction; (<b>b</b>) simulating dictionaries corresponding to specific tissue properties by varying the acquisition parameters; (<b>c</b>) dictionary matching; and (<b>d</b>) the cardiac parametric map reconstructed after dictionary matching. The figure was derived from [<a href="#B15-bioengineering-11-00236" class="html-bibr">15</a>].</p>
Figure 4
<p>Significant differences in <math display="inline"><semantics> <mrow> <msub> <mrow> <mi mathvariant="normal">T</mi> </mrow> <mrow> <mn>1</mn> </mrow> </msub> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mrow> <mi mathvariant="normal">T</mi> </mrow> <mrow> <mn>2</mn> </mrow> </msub> </mrow> </semantics></math> relaxation times between the Parkinson disease group and controls across different regions of the brain. NAWM: normal-appearing white matter. There are 25 subjects per group in this analysis. This figure is taken from [<a href="#B82-bioengineering-11-00236" class="html-bibr">82</a>]. <span class="html-italic">p</span>-values: * &lt; 0.05; ** &lt; 0.01; *** &lt; 0.001.</p>
Figure 5
<p>The 3D full-coverage brain acquisition with spiral trajectories. The k-space trajectories are interleaved and varied across the slice index to maximize k-space coverage. This figure is derived from [<a href="#B107-bioengineering-11-00236" class="html-bibr">107</a>].</p>
Figure 6
<p>(<b>a</b>) Representative maps for one, two, and four shots in the medial and lateral knee cartilages for PD images, <math display="inline"><semantics> <mrow> <msub> <mrow> <mi mathvariant="normal">T</mi> </mrow> <mrow> <mn>1</mn> </mrow> </msub> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mrow> <mi mathvariant="normal">T</mi> </mrow> <mrow> <mn>2</mn> </mrow> </msub> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mrow> <mi mathvariant="normal">T</mi> </mrow> <mrow> <mn>1</mn> <mi mathvariant="sans-serif">ρ</mi> </mrow> </msub> </mrow> </semantics></math>, and ΔB1+ maps. The ROIs are shown on the one-shot PD images. (<b>b</b>) The variation in parametric values for the knee between controls and OA subjects. The figures are derived from [<a href="#B86-bioengineering-11-00236" class="html-bibr">86</a>].</p>
Figure 7
<p>Region of interest (ROI) analysis. A cancer-suspicious lesion (white arrow) was identified via axial <math display="inline"><semantics> <mrow> <msub> <mrow> <mi mathvariant="normal">T</mi> </mrow> <mrow> <mn>2</mn> </mrow> </msub> </mrow> </semantics></math>-weighted (<math display="inline"><semantics> <mrow> <msub> <mrow> <mi mathvariant="normal">T</mi> </mrow> <mrow> <mn>2</mn> </mrow> </msub> <mi mathvariant="normal">w</mi> </mrow> </semantics></math>) acquisition, as shown in image (<b>A</b>). Image (<b>B</b>) is the apparent diffusion coefficient (ADC) map. (<b>C</b>,<b>D</b>) are the images corresponding to <math display="inline"><semantics> <mrow> <msub> <mrow> <mi mathvariant="normal">T</mi> </mrow> <mrow> <mn>2</mn> </mrow> </msub> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mrow> <mi mathvariant="normal">T</mi> </mrow> <mrow> <mn>1</mn> </mrow> </msub> </mrow> </semantics></math> parametric maps estimated from the MRF acquisition. (<b>A</b>–<b>D</b>) are coregistered. The solid circles in (<b>B</b>–<b>D</b>) correspond to the cancer-suspicious lesion. The dashed circles in (<b>B</b>–<b>D</b>) correspond to the visually Normal Transition Zone (NTZ). (<b>b</b>) Box-and-whisker plots for NTZ vs. non-cancerous lesions vs. cancerous lesions for ADC, <math display="inline"><semantics> <mrow> <msub> <mrow> <mi mathvariant="normal">T</mi> </mrow> <mrow> <mn>1</mn> </mrow> </msub> </mrow> </semantics></math>, and <math display="inline"><semantics> <mrow> <msub> <mrow> <mi mathvariant="normal">T</mi> </mrow> <mrow> <mn>2</mn> </mrow> </msub> </mrow> </semantics></math> parametric maps. This figure is adapted with permission from [<a href="#B110-bioengineering-11-00236" class="html-bibr">110</a>], Radiological Society of North America.</p>

Other

Jump to: Research, Review

10 pages, 2196 KiB  
Brief Report
Bi-Exponential 3D UTE-T1ρ Relaxation Mapping of Ex Vivo Human Knee Patellar Tendon at 3T
by Bhavsimran Singh Malhi, Dina Moazamian, Soo Hyun Shin, Jiyo S. Athertya, Livia Silva, Saeed Jerban, Hyungseok Jang, Eric Chang, Yajun Ma, Michael Carl and Jiang Du
Bioengineering 2024, 11(1), 66; https://doi.org/10.3390/bioengineering11010066 - 9 Jan 2024
Cited by 2 | Viewed by 1407
Abstract
Introduction: The objective of this study was to assess the bi-exponential relaxation times and fractions of the short and long components of the human patellar tendon ex vivo using three-dimensional ultrashort echo time T1ρ (3D UTE-T1ρ) imaging. Materials and Methods: Five cadaveric human knee specimens were scanned using a 3D UTE-T1ρ imaging sequence on a 3T MR scanner. A series of 3D UTE-T1ρ images were acquired and fitted using single-component and bi-component models. Single-component exponential fitting was performed to measure the UTE-T1ρ value of the patellar tendon. Bi-component analysis was performed to measure the short and long UTE-T1ρ values and fractions. Results: The single-component analysis showed a mean single-component UTE-T1ρ value of 8.4 ± 1.7 ms for the five knee patellar tendon samples. Improved fitting was achieved with bi-component analysis, which showed a mean short UTE-T1ρ value of 5.5 ± 0.8 ms with a fraction of 77.6 ± 4.8%, and a mean long UTE-T1ρ value of 27.4 ± 3.8 ms with a fraction of 22.4 ± 4.8%. Conclusion: The 3D UTE-T1ρ sequence can detect the single- and bi-exponential decay in the patellar tendon. Bi-component fitting was superior to single-component fitting. Full article
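The mono- versus bi-component fitting comparison can be sketched as follows, using synthetic noiseless data built from the reported means (short T1ρ = 5.5 ms at a 77.6% fraction, long T1ρ = 27.4 ms at 22.4%); the spin-lock times are hypothetical, and this is an illustration rather than the authors' fitting code.

```python
import numpy as np
from scipy.optimize import curve_fit

def mono(tsl, a, t1rho):
    """Single-component model: a * exp(-TSL / T1rho)."""
    return a * np.exp(-tsl / t1rho)

def bi(tsl, a_short, t_short, a_long, t_long):
    """Bi-component model: sum of a short and a long T1rho pool."""
    return a_short * np.exp(-tsl / t_short) + a_long * np.exp(-tsl / t_long)

# Hypothetical spin-lock times (ms) and a synthetic noiseless signal built
# from the reported means: 77.6% short pool (5.5 ms), 22.4% long pool (27.4 ms).
tsl = np.array([0.2, 2.0, 4.0, 6.0, 8.0, 12.0, 16.0, 24.0, 32.0, 48.0])
signal = 0.776 * np.exp(-tsl / 5.5) + 0.224 * np.exp(-tsl / 27.4)

p_mono, _ = curve_fit(mono, tsl, signal, p0=[1.0, 10.0])
p_bi, _ = curve_fit(bi, tsl, signal, p0=[0.5, 3.0, 0.5, 20.0], maxfev=10000)

rss_mono = np.sum((signal - mono(tsl, *p_mono)) ** 2)
rss_bi = np.sum((signal - bi(tsl, *p_bi)) ** 2)
print(rss_mono, rss_bi)  # the bi-exponential fit leaves far smaller residuals
```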
(This article belongs to the Special Issue Novel MRI Techniques and Biomedical Image Processing)
Show Figures

Figure 1
<p>Sequence diagram. The UTE-T1ρ sequence includes magnetization reset, fat saturation, T1ρ preparation, and UTE data acquisition. The T1ρ preparation module includes two different durations of spin-locking time (TSL) (<b>top</b> vs. <b>bottom</b>).</p>
Figure 2
<p>Localization of ROI (in blue) in the central region of the patellar tendon on a midsagittal slice.</p>
Figure 3
<p>Comparison of mono-exponential fitting (<b>A</b>,<b>D</b>) with the corresponding bi-exponential fitting (<b>B</b>,<b>E</b>) and their residuals (<b>C</b>,<b>F</b>) for two knee samples. The bi-exponential model provides much improved fitting, with greatly reduced residuals compared with the mono-exponential model.</p>
Figure 4
<p>Representative T1ρ color maps in the patellar tendon of three ex vivo knees. Mono-exponential T1ρ relaxation maps (<b>A</b>,<b>E</b>,<b>I</b>), and bi-exponential relaxation maps of the short T1ρ component (<b>B</b>,<b>F</b>,<b>J</b>), the long T1ρ component (<b>C</b>,<b>G</b>,<b>K</b>), and the short-component fraction (<b>D</b>,<b>H</b>,<b>L</b>).</p>