Search Results (31)

Search Parameters:
Keywords = binary CAM

18 pages, 4280 KiB  
Article
Language-Guided Semantic Clustering for Remote Sensing Change Detection
by Shenglong Hu, Yiting Bian, Bin Chen, Huihui Song and Kaihua Zhang
Sensors 2024, 24(24), 7887; https://doi.org/10.3390/s24247887 - 10 Dec 2024
Viewed by 500
Abstract
Existing learning-based remote sensing change detection (RSCD) methods commonly use semantic-agnostic binary masks as supervision, which hinders their ability to distinguish between different semantic types of changes and results in noisy change mask predictions. To address this issue, this paper presents a language-guided semantic clustering framework, dubbed LSC-CD, that effectively transfers the rich semantic information of the contrastive language-image pretraining (CLIP) model to RSCD. The LSC-CD exploits the strong zero-shot generalization of CLIP, which makes it easy to transfer semantic knowledge from CLIP into the CD model under semantic-agnostic binary mask supervision. Specifically, the LSC-CD first constructs a category text-prior memory bank based on dataset statistics and then leverages CLIP to transform the text in the memory bank into the corresponding semantic embeddings. Afterward, a CLIP adapter module (CAM) is designed to fine-tune the semantic embeddings to align with the change-region embeddings from the input bi-temporal images. Next, a semantic clustering module (SCM) clusters the change-region embeddings around the semantic embeddings, yielding compact change embeddings that are robust to noisy backgrounds. Finally, a lightweight decoder decodes the compact change embeddings, yielding an accurate change mask prediction. Experimental results on three public benchmarks, LEVIR-CD, WHU-CD, and SYSU-CD, demonstrate that the proposed LSC-CD achieves state-of-the-art performance on all evaluated metrics.
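The text-prior memory bank described in this abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' code: it assumes OpenAI's open-source clip package and a hypothetical category list standing in for the dataset statistics.

```python
# Minimal sketch (not the authors' code) of encoding category names into a
# semantic memory bank with CLIP. Assumes the open-source "clip" package
# (pip install git+https://github.com/openai/CLIP.git).
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

# Hypothetical change categories taken from dataset statistics.
categories = ["building", "road", "vegetation", "bare soil", "water"]
prompts = [f"a remote sensing image of {c}" for c in categories]

with torch.no_grad():
    tokens = clip.tokenize(prompts).to(device)
    text_emb = model.encode_text(tokens)                          # (num_categories, 512)
    memory_bank = text_emb / text_emb.norm(dim=-1, keepdim=True)  # L2-normalised rows

# Change-region embeddings from the bi-temporal encoder would then be aligned
# with and clustered around these rows (the paper's CAM and SCM steps, not shown).
```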
(This article belongs to the Special Issue Image Processing and Analysis for Object Detection: 2nd Edition)
Figure 1. Structural comparison between semantic-agnostic dominant methods [26,28,29,34,36,37] and the proposed LSC-CD. Compared to the disorder of the dominant methods in visual space, the LSC-CD obtains more orderly and compact semantic embeddings in visual-semantic space through clustering.
Figure 2. The pipeline of the proposed LSC-CD. The Transformer Block is the multi-scale transformer encoder from Segformer [44].
Figure 3. Architecture of the proposed SCM.
Figure 4. Architecture of the SEUM in the SCM.
Figure 5. Qualitative comparison results of different CD methods on LEVIR-CD datasets: the black represents true negative, the white represents true positive, the red represents false positive and the green represents false negative.
Figure 6. Qualitative comparison results of different CD methods on WHU-CD datasets: the black represents true negative, the white represents true positive, the red represents false positive and the green represents false negative.
Figure 7. Qualitative comparison results of different CD methods on SYSU-CD datasets: the black represents true negative, the white represents true positive, the red represents false positive, and the green represents false negative.
Figure 8. Heatmap comparison results. TBlock1–TBlock4 represent attention maps at four different scales from the encoder [44]. The 1st and 3rd rows show the pre-change and post-change heatmaps of the baseline (ChangeFormer), while the 2nd and 4th rows show the pre-change and post-change heatmaps of LSC-CD.
17 pages, 4832 KiB  
Article
Atrial Fibrillation Type Classification by a Convolutional Neural Network Using Contrast-Enhanced Computed Tomography Images
by Hina Kotani, Atsushi Teramoto, Tomoyuki Ohno, Yoshihiro Sobue, Eiichi Watanabe and Hiroshi Fujita
Computers 2024, 13(12), 309; https://doi.org/10.3390/computers13120309 - 24 Nov 2024
Viewed by 606
Abstract
Catheter ablation therapy, a treatment for atrial fibrillation (AF), has a higher recurrence rate as AF duration increases. Compared to paroxysmal AF (PAF), sustained AF is known to cause progressive anatomic remodeling of the left atrium, resulting in enlargement and shape changes. In this study, we used contrast-enhanced computed tomography (CT) to classify AF into PAF and long-term persistent atrial fibrillation (LSAF), which have markedly different recurrence rates after catheter ablation. Contrast-enhanced CT images of 30 patients with PAF and 30 patients with LSAF were input into six pretrained convolutional neural networks (CNNs) for the binary classification of PAF and LSAF. We propose a method that captures information along the body-axis direction of the left atrium by inputting five slices near the left atrium. The classification was visualized by obtaining a saliency map based on score-weighted class activation mapping (Score-CAM). Furthermore, we surveyed cardiologists regarding the classification of AF types, and the CNN classification results were compared with the physicians' clinical judgment. The proposed method achieved the highest correct classification rate (81.7%). In particular, models with shallower layers, such as VGGNet and ResNet, capture the overall characteristics of the image and are therefore likely to be suitable for focusing on the left atrium. In many cases, patients with an enlarged left atrium tended to have long-lasting AF, confirming the validity of the proposed method. The saliency maps and the physicians' reported basis for judgment showed that both the model and the physicians tended to focus on the shape of the left atrium, suggesting that this method classifies atrial fibrillation using criteria similar to the physicians' while achieving higher accuracy.
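As an illustration of the five-slice input idea, the sketch below adapts a pretrained 2-D CNN to accept five CT slices as channels and output two classes. The ResNet-18 backbone and layer names are torchvision defaults used for illustration, not necessarily the networks evaluated in the study.

```python
# Illustrative sketch only: a pretrained 2-D CNN modified to take five CT slices
# as a 5-channel input for binary PAF/LSAF classification.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Replace the first convolution so it accepts 5 slices instead of 3 RGB channels.
model.conv1 = nn.Conv2d(5, 64, kernel_size=7, stride=2, padding=3, bias=False)
# Replace the classifier head with a 2-class output (PAF vs. LSAF).
model.fc = nn.Linear(model.fc.in_features, 2)

x = torch.randn(4, 5, 224, 224)   # batch of 4 patients x 5 axial slices
logits = model(x)                 # shape: (4, 2)
```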
(This article belongs to the Special Issue Advanced Image Processing and Computer Vision)
Graphical abstract
Figure 1. Process of this study.
Figure 2. Examples of an original image and images created using data augmentation.
Figure 3. An example of visualization of decision basis in CNN (score-CAM). (a) Input image; (b) saliency map image.
Figure 4. Data assignment in the 10-part cross-validation method.
Figure 5. ROC curves of CNN models.
Figure 6. Comparison of proposed method and additional study.
Figure 7. Correctly classified cases. (a) PAF; (b) LSAF.
Figure 8. Incorrectly classified cases. (a) PAF; (b) LSAF.
Figure 9. Saliency maps of correctly classified cases. (a) PAF; (b) LSAF.
Figure 10. Saliency maps of incorrectly classified cases. (a) PAF; (b) LSAF.
Figure 11. Physicians' classification results and comparison between CNN models.
Figure 12. ROC curves of physicians.
Figure 13. LSAF cases with different results between physicians and the proposed method. (a) Correctly classified only by CNN model; (b) correctly classified only by physician.
16 pages, 356 KiB  
Article
Use of Complementary and Alternative Therapies in People with Inflammatory Bowel Disease
by Laura Frank and Kelly Lambert
Int. J. Environ. Res. Public Health 2024, 21(9), 1140; https://doi.org/10.3390/ijerph21091140 - 28 Aug 2024
Viewed by 1039
Abstract
Complementary and alternative medicines (CAMs) are frequently discussed by people with Inflammatory Bowel Disease (IBD). The aim of this study is to explore CAM use in Australians with IBD. This cross-sectional study was conducted via an anonymous online survey, predominantly distributed through IBD-specific social media accounts. Data collection occurred over a three-month period in 2021. Descriptive statistics, Chi-Square tests, and binary logistic regression were used to analyse quantitative data. A simple thematic analysis was conducted for qualitative free-text responses. Of the 123 responses, acupuncturists (12.2%) and chiropractors (8.9%) were the CAM practitioners most commonly accessed. CAM practitioners were perceived to be 'very helpful' compared to mainstream health practitioners. The most common CAM products reported were vitamins (51.2%), probiotics (43.9%), and herbal medicine (30.9%). Common reasons for use were perceived improvements to wellbeing and long-term management of IBD. Females were more likely to access CAM practitioners (OR 12.6, 95% CI 1.62–98.1, p = 0.02). Doctors were the participants' primary source of information (64.2%), although many expressed dissatisfaction with conventional therapy and the desire for a more holistic approach to care. The use of CAMs in this sample was high. Limited research into the efficacy and safety of these therapies may prevent health professionals from discussing their use with patients. Improved communication with health professionals will allow patients to be active partners in their healthcare plans and can heighten patient satisfaction with conventional therapy.
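The odds ratios quoted above come from binary logistic regression; a minimal sketch of that kind of analysis is shown below, with hypothetical column names in place of the survey variables.

```python
# Small sketch (made-up column names) of a binary logistic regression producing
# odds ratios and 95% confidence intervals, as reported in the abstract above.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ibd_cam_survey.csv")   # hypothetical survey export

# Outcome: accessed a CAM practitioner (1/0); predictors: sex, age group, IBD type.
fit = smf.logit("cam_practitioner ~ female + age_group + ibd_type", data=df).fit()

odds_ratios = np.exp(fit.params)          # point estimates as odds ratios
conf_int = np.exp(fit.conf_int())         # 95% confidence intervals
print(pd.concat([odds_ratios, conf_int], axis=1))
```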
(This article belongs to the Section Health Care Sciences)
20 pages, 4347 KiB  
Article
Automatic Classification of Nodules from 2D Ultrasound Images Using Deep Learning Networks
by Tewele W. Tareke, Sarah Leclerc, Catherine Vuillemin, Perrine Buffier, Elodie Crevisy, Amandine Nguyen, Marie-Paule Monnier Meteau, Pauline Legris, Serge Angiolini and Alain Lalande
J. Imaging 2024, 10(8), 203; https://doi.org/10.3390/jimaging10080203 - 22 Aug 2024
Viewed by 1396
Abstract
Objective: In clinical practice, thyroid nodules are typically visually evaluated by expert physicians using 2D ultrasound images. Based on their assessment, a fine needle aspiration (FNA) may be recommended. However, visually classifying thyroid nodules from ultrasound images may lead to unnecessary fine needle aspirations for patients. The aim of this study is to develop an automatic thyroid ultrasound image classification system to prevent unnecessary FNAs. Methods: An automatic computer-aided artificial intelligence system is proposed for classifying thyroid nodules using a fine-tuned deep learning model based on the DenseNet architecture, which incorporates an attention module. The dataset comprises 591 thyroid nodule images categorized based on the Bethesda score. Thyroid nodules are classified as either requiring FNA or not. The challenges encountered in this task include managing variability in image quality, addressing the presence of artifacts in ultrasound image datasets, tackling class imbalance, and ensuring model interpretability. We employed techniques such as data augmentation, class weighting, and gradient-weighted class activation maps (Grad-CAM) to enhance model performance and provide insights into decision making. Results: Our approach achieved excellent results with an average accuracy of 0.94, F1-score of 0.93, and sensitivity of 0.96. The use of Grad-CAM gives insight into the decision making and reinforces the reliability of the binary classification from the end-user's perspective. Conclusions: We propose a deep learning architecture that effectively classifies thyroid nodules as requiring FNA or not from ultrasound images. Despite challenges related to image variability, class imbalance, and interpretability, our method demonstrated a high classification accuracy with minimal false negatives, showing its potential to reduce unnecessary FNAs in clinical settings.
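The Grad-CAM visualization mentioned above follows a standard recipe; the sketch below is a generic PyTorch implementation (not the study's code), using a torchvision DenseNet-121 as a stand-in backbone and a random tensor in place of an ultrasound image.

```python
# Generic Grad-CAM sketch: hook the last convolutional block, weight its activations
# by the pooled gradients of the target class score, and ReLU the sum into a heatmap.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1).eval()
activations, gradients = {}, {}

def fwd_hook(_, __, output):
    activations["value"] = output

def bwd_hook(_, grad_input, grad_output):
    gradients["value"] = grad_output[0]

target_layer = model.features[-1]                 # last feature-block output
target_layer.register_forward_hook(fwd_hook)
target_layer.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224, requires_grad=True)   # stand-in ultrasound image
score = model(x)[0].max()                              # score of the predicted class
score.backward()

weights = gradients["value"].mean(dim=(2, 3), keepdim=True)         # GAP of gradients
cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)            # normalise to [0, 1]
```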
(This article belongs to the Section Medical Imaging)
Figure 1. Raw ultrasound image of the thyroid nodule, with noise and artifacts covering part of the textures.
Figure 2. Post-processed image after artefact removal and histogram equalization.
Figure 3. Overview of the proposed pipeline: Deep convolutional neural network (DCNN) model for thyroid nodule image classification. Data augmentation was only applied during training while confidence level analysis and explainability were only conducted during the testing phase.
Figure 4. Block diagram of DenseNet121 architecture with incorporated modules, in particular, attention modules.
Figure 5. Example of nodule classification: The AI model establishes a 92% probability from the image that the nodule does not require FNA, indicating that FNA is not necessary. The surrounding rectangle indicates the location of the nodule.
Figure 6. Example of nodule classification. The probability established by the model from the image that the nodule needs FNA is equal to 84%. The surrounding rectangle indicates the location of the nodule.
Figure 7. The confusion matrix depicts the proposed approach's classification of thyroid nodules on the test set. Only four images were misclassified by the model. "FNA required" denotes the necessity for fine needle aspiration for the respective nodule, while "FNA not required" indicates that fine needle aspiration is not necessary.
Figure 8. Examples of predicted images with Grad-CAM overlaid on the initial US image. The heatmap highlights regions in the original images that were important for the model's prediction.
Figure 9. Matching of the Grad-CAM maps and the localization of the nodule. The areas surrounded by white rectangles indicate the location of nodules identified by experts. Grad-CAM localizes the areas considered by the automatic system as regions of interest.
12 pages, 3665 KiB  
Article
Periods of Outbursts and Standstills and Variations in Parameters of Two Z Cam Stars: Z Cam and AT Cnc
by Daniela Boneva, Krasimira Yankova and Denislav Rusev
Astronomy 2024, 3(3), 208-219; https://doi.org/10.3390/astronomy3030013 - 1 Aug 2024
Viewed by 887
Abstract
We present our results on two Z Cam stars: Z Cam and AT Cnc. We apply observational data for the periods that cover the states of outbursts and standstills, which are typical for this type of object. We report an appearance of periodic oscillations in brightness during the standstill in AT Cnc, with small-amplitude variations of 0.03–0.04 mag and periodicity of ≈20–30 min. Based on the estimated dereddened color index (B − V)0, we calculate the color temperature for both states of the two objects. During the transition from the outburst to the standstill state, Z Cam varies from bluer to redder, while AT Cnc stays redder in both states. We calculate some of the stars' parameters, such as the radii of the primary and secondary components and the orbital separation, for both objects. We construct the profiles of the effective temperature in the discs of the two objects. Comparing the parameters of both systems, we see that Z Cam is definitely the hotter object, and we conclude that it has a more active accretion disc.
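The color temperature step can be illustrated with one commonly used empirical calibration (Ballesteros 2012); the paper does not state which relation it applies, so the snippet below is only an example of converting a dereddened (B − V)0 index into a temperature.

```python
# Hedged illustration: an empirical colour-temperature relation (Ballesteros 2012).
# Treat this purely as an example, not the calibration used by the authors.
def color_temperature(b_minus_v0: float) -> float:
    """Colour temperature in kelvin from the dereddened (B - V)0 index."""
    return 4600.0 * (1.0 / (0.92 * b_minus_v0 + 1.7)
                     + 1.0 / (0.92 * b_minus_v0 + 0.62))

# A bluer state (smaller (B - V)0) yields a higher temperature.
for bv in (0.1, 0.4, 0.7):
    print(f"(B-V)0 = {bv:.1f}  ->  T ~ {color_temperature(bv):.0f} K")
```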
Figure 1. Light curves of Z Cam. The long-period observations in BVRI bands (2458100–2459900 JD) (a) show the transitions between the large-amplitude variations in brightness (outbursts, 2458100–2458400) and the periods of quiescence in the standstills (marked with dark blue dashed arrows). The two states are separated: (b) for the outbursts period; (c) for the standstill period. The data are taken from AAVSO (Z Cam's most contributed observers' codes: HKEB, SGEA, NOT, RFDA, LPAC, DGSA).
Figure 2. Light curves of AT Cnc. The long-period observations in BVRI bands (2458080–2458240 JD) show the transitions between the large-amplitude variations in brightness (outbursts) and the period of quiescence in the high state (standstill) (a). The standstill state is shown in (b), and the zoomed oscillations in brightness during this period in (c). The data are taken from AAVSO (AT Cnc's most contributed observers' codes: CMJA, MZK, SGEA, PSD).
Figure 3. The effective temperature profiles of Z Cam and AT Cnc against the accretion disc's radius. Two profiles are presented separately in (a,b). A comparison of the two objects' profiles is seen in (c) for the thin model and (d) for the advective model.
23 pages, 8470 KiB  
Article
Leveraging Deep Learning for Fine-Grained Categorization of Parkinson’s Disease Progression Levels through Analysis of Vocal Acoustic Patterns
by Hadi Sedigh Malekroodi, Nuwan Madusanka, Byeong-il Lee and Myunggi Yi
Bioengineering 2024, 11(3), 295; https://doi.org/10.3390/bioengineering11030295 - 21 Mar 2024
Cited by 4 | Viewed by 2253
Abstract
Speech impairments often emerge as one of the primary indicators of Parkinson's disease (PD), albeit not readily apparent in its early stages. While previous studies focused predominantly on binary PD detection, this research explored the use of deep learning models to automatically classify sustained vowel recordings into healthy controls, mild PD, or severe PD based on motor symptom severity scores. Popular convolutional neural network (CNN) architectures (VGG and ResNet), as well as a vision transformer (Swin), were fine-tuned on log-mel spectrogram image representations of the segmented voice data. Furthermore, the research investigated the effects of audio segment lengths and specific vowel sounds on the performance of these models. The findings indicated that longer segments yielded better performance. The models showed strong capability in distinguishing PD from healthy subjects, achieving over 95% precision. However, reliably discriminating between mild and severe PD cases remained challenging. The VGG16 achieved the best overall classification performance with 91.8% accuracy and the largest area under the ROC curve. Furthermore, focusing analysis on the vowel /u/ could further improve accuracy to 96%. Applying visualization techniques like Grad-CAM also highlighted how CNN models focused on localized spectrogram regions while transformers attended to more widespread patterns. Overall, this work showed the potential of deep learning for non-invasive screening and monitoring of PD progression from voice recordings, but larger multi-class labeled datasets are needed to further improve severity classification.
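A minimal front-end sketch of the log-mel spectrogram and the time/frequency masking augmentations discussed above is given below, using librosa; the file name, sampling rate, and mask widths are placeholders rather than the study's settings.

```python
# Sketch of a 128-band log-mel spectrogram plus simple SpecAugment-style masking.
import numpy as np
import librosa

y, sr = librosa.load("sustained_vowel_u.wav", sr=16000)      # placeholder recording
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
log_mel = librosa.power_to_db(mel, ref=np.max)                # log-mel spectrogram (128, T)

def mask(spec, axis, width, rng=np.random.default_rng(0)):
    """Zero out a random block of `width` bins along the given axis."""
    out = spec.copy()
    start = rng.integers(0, spec.shape[axis] - width)
    if axis == 0:
        out[start:start + width, :] = spec.min()              # frequency masking
    else:
        out[:, start:start + width] = spec.min()              # time masking
    return out

augmented = mask(mask(log_mel, axis=0, width=12), axis=1, width=20)
```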
Graphical abstract
Figure 1. The workflow diagram of our classification system.
Figure 2. The histogram illustrates the distribution of audio lengths across three groups: HC, PD_Mild, and PD_Severe. Most audio samples are around 5 s in length, with a count exceeding 150.
Figure 3. Overview of the process used to construct distinct datasets from the original dataset.
Figure 4. Speech sound examples. The upper panel in each example shows the acoustic waveform. The lower panel shows the corresponding log mel spectrogram representation (128 mel-bands).
Figure 5. The effects of data augmentations on LMSs: (a) displays the original LMS without any augmentations; (b) shows the LMS with time masking applied, which masks blocks of time steps. This forces the model to rely more on context; image (c) shows the LMS with frequency masking applied, which masks blocks of frequencies; and (d) demonstrates the combination of these augmentations.
Figure 6. Overview of the architecture of models used in this research.
Figure 7. Bar chart showcasing the average accuracy of studied models across modified datasets, with error bars representing the standard deviation (SD). For a clear comparison, the accuracy scale begins at 70%.
Figure 8. The cumulative confusion matrices and ROC curves show the performance of each model across three folds of cross-validation on the dataset limited to only the FS-5 dataset.
Figure 9. The cumulative confusion matrices and ROC curves show the performance of each model across three folds of cross-validation on the dataset limited to the AS-5 dataset.
Figure 10. The cumulative confusion matrix for each sustained vowel recording for the VGG16 model. Color bars display the proportion of observations within each class that were correctly or incorrectly classified, with values ranging from 0 to 1.
Figure 11. Cumulative confusion matrix for each model after applying majority voting to predictions on the AS-5 dataset. Color bars display the proportion of observations within each class that were correctly or incorrectly classified, with values ranging from 0 to 1.
Figure 12. Grad-CAM visualization features different models across various classes for specific vowel /o/.
Figure 13. Visualization of feature space in 2D using t-SNE for each model.
21 pages, 4117 KiB  
Article
COVID-19 Detection and Diagnosis Model on CT Scans Based on AI Techniques
by Maria-Alexandra Zolya, Cosmin Baltag, Dragoș-Vasile Bratu, Simona Coman and Sorin-Aurel Moraru
Bioengineering 2024, 11(1), 79; https://doi.org/10.3390/bioengineering11010079 - 14 Jan 2024
Cited by 1 | Viewed by 1758
Abstract
The end of 2019 marked the emergence of a new global medical problem: a fulminant coronavirus outbreak that spread COVID-19 worldwide and left long-lasting repercussions. The problem addressed here comes from the field of medical imaging, where a pulmonary CT-based standardized reporting system can serve as a solution. The work targets practical impediments such as the overworking of doctors and aims to solve a classification problem using deep learning techniques, namely, whether a patient suffers from COVID-19 or viral pneumonia, or is healthy from a pulmonary point of view. The methodology was meticulous and empirical: the initial data-processing stage extracts the lung cavity from the CT scans, which is a less explored approach, and is followed by data augmentation. The next step was developing a CNN for two scenarios, one with a binary classification (COVID and non-COVID patients) and the other with a three-class classification in which viral pneumonia is also addressed. To obtain an efficient version, architectural changes were made gradually, involving four databases during this process. Furthermore, given the availability of pre-trained models, the transfer learning technique was employed by incorporating the linear classifier from our own convolutional network into an existing model, with much more promising results. The experimentation encompassed several models, including MobileNetV1, ResNet50, DenseNet201, VGG16, and VGG19. Through a more in-depth analysis using the CAM technique, MobileNetV1 differentiated itself via the detection accuracy of possible pulmonary anomalies. Interestingly, this model stood out as not being among the most used in the literature. As a result, the following evaluation metric values were reached: loss (0.0751), accuracy (0.9744), precision (0.9758), recall (0.9742), AUC (0.9902), and F1 score (0.9750), from 1161 samples allocated to each of the three individual classes.
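The transfer-learning step (attaching one's own linear classifier to a pretrained backbone) can be sketched as below. torchvision does not ship MobileNetV1, so MobileNetV2 stands in; this substitution is an assumption made for illustration, not the paper's exact setup.

```python
# Rough sketch: freeze a pretrained backbone and attach a small linear classifier
# for the three classes (COVID-19, viral pneumonia, healthy).
import torch.nn as nn
from torchvision import models

backbone = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)

for p in backbone.features.parameters():       # keep pretrained features fixed
    p.requires_grad = False

backbone.classifier = nn.Sequential(           # our own linear classifier head
    nn.Dropout(0.2),
    nn.Linear(backbone.last_channel, 3),
)
```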
(This article belongs to the Special Issue Artificial Intelligence in Biomedical Diagnosis and Prognosis)
Graphical abstract
Figure 1. Chest radiographs of a 46-year-old female patient with fever and dry cough [9]. Axial chest computed tomography shows bilateral multifocal ground glass opacities (arrows; (A,B)), peribronchial interstitial thickening (arrowhead; (B)) and reticular opacities (curved arrows; (B)), consistent with coronavirus disease 2019 pneumonia.
Figure 2. A batch of 20 samples from the resized and augmented SARS-CoV-2 Ct Scan Dataset.
Figure 3. Applying the Gaussian filter to a sample from the COVID-19-CT dataset. (a) Original CT. (b) Dataset after applying the filter. (c) Probability distribution with zero standard deviation.
Figure 4. A batch of 20 samples from the resized and augmented COVID-19-CT dataset.
Figure 5. The final neural architecture of the binary CNN model.
Figure 6. The neural architecture of the three-class classification CNN model.
Figure 7. CAM applied to preprocessed COVID-19 CT using VGG16 model.
Figure 8. The results of the binary model proposed on the COVID-19-CT dataset. (a) ROC plot. (b) Confusion matrix.
Figure 9. Best performing sequential multi-class model.
Figure 10. The results of the multi-class model conceived on the COVID-19-CT dataset. (a) Training curve. (b) Loss curve. (c) Confusion matrix.
Figure 11. Pre-processed data using VGG19 pre-trained model.
Figure 12. Applying CAM over samples. (a) MobileNetv1, (b) ResNet50, and (c) VGG19.
Figure 13. The results of the final proposed CNN model for detecting and diagnosing patients. (a) Training curve, (b) loss curve, and (c) confusion matrix.
42 pages, 13997 KiB  
Article
Multi-Scale CNN: An Explainable AI-Integrated Unique Deep Learning Framework for Lung-Affected Disease Classification
by Ovi Sarkar, Md. Robiul Islam, Md. Khalid Syfullah, Md. Tohidul Islam, Md. Faysal Ahamed, Mominul Ahsan and Julfikar Haider
Technologies 2023, 11(5), 134; https://doi.org/10.3390/technologies11050134 - 30 Sep 2023
Cited by 8 | Viewed by 4012
Abstract
Lung-related diseases continue to be a leading cause of global mortality. Timely and precise diagnosis is crucial to save lives, but the availability of testing equipment remains a challenge, often coupled with issues of reliability. Recent research has highlighted the potential of Chest X-ray (CXR) images in identifying various lung diseases, including COVID-19, fibrosis, pneumonia, and more. In this comprehensive study, four publicly accessible datasets have been combined to create a robust dataset comprising 6650 CXR images, categorized into seven distinct disease groups. To effectively distinguish between normal and six different lung-related diseases (namely, bacterial pneumonia, COVID-19, fibrosis, lung opacity, tuberculosis, and viral pneumonia), a Deep Learning (DL) architecture called a Multi-Scale Convolutional Neural Network (MS-CNN) is introduced. The model is adapted to classify a larger number of lung disease classes, which is considered a persistent challenge in the field. While prior studies have demonstrated high accuracy in binary and limited-class scenarios, the proposed framework maintains this accuracy across a diverse range of lung conditions. The innovative model harnesses the power of combining predictions from multiple feature maps at different resolution scales, significantly enhancing disease classification accuracy. The approach aims to shorten testing duration compared to state-of-the-art models, offering a potential solution toward expediting medical interventions for patients with lung-related diseases, and integrates explainable AI (XAI) to enhance prediction capability. The results demonstrated an impressive accuracy of 96.05%, with average values for precision, recall, F1-score, and AUC at 0.97, 0.95, 0.95, and 0.94, respectively, for the seven-class classification. The model exhibited exceptional performance across multi-class classifications, achieving accuracy rates of 100%, 99.65%, 99.21%, 98.67%, and 97.47% for two, three, four, five, and six-class scenarios, respectively. The novel approach not only surpasses many pre-existing state-of-the-art (SOTA) methodologies but also sets a new standard for the diagnosis of lung-affected diseases using multi-class CXR data. Furthermore, the integration of XAI techniques such as SHAP and Grad-CAM enhanced the transparency and interpretability of the model's predictions. The findings hold immense promise for accelerating and improving the accuracy and confidence of diagnostic decisions in the field of lung disease identification.
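As a generic illustration of combining predictions from feature maps at different resolution scales, the sketch below pools and fuses multi-scale features before a shared classifier; it is not the published MS-CNN, and the channel sizes are invented.

```python
# Generic multi-scale fusion sketch: feature maps at several resolutions are each
# globally pooled, concatenated, and classified so coarse and fine evidence combine.
import torch
import torch.nn as nn

class MultiScaleHead(nn.Module):
    def __init__(self, channels=(64, 128, 256), num_classes=7):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(sum(channels), num_classes)

    def forward(self, feature_maps):
        # feature_maps: list of tensors (B, C_i, H_i, W_i) at different scales
        pooled = [self.pool(f).flatten(1) for f in feature_maps]
        return self.fc(torch.cat(pooled, dim=1))

head = MultiScaleHead()
feats = [torch.randn(2, c, s, s) for c, s in zip((64, 128, 256), (56, 28, 14))]
logits = head(feats)    # (2, 7): one score per lung-disease class
```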
(This article belongs to the Special Issue Medical Imaging & Image Processing III)
Figure 1. A schematic of the overall Multi-scale CNN system architecture.
Figure 2. Sample Images of the dataset: (a) Bacterial Pneumonia, (b) COVID-19, (c) Fibrosis, (d) Lung Opacity, (e) Normal, (f) Tuberculosis, and (g) Viral Pneumonia.
Figure 3. Block diagram of Multi-scale CNN architecture.
Figure 4. (a) Accuracy curves, (b) Loss curves, (c) Confusion Matrix, and (d) ROC curves of the proposed Multi-Scale CNN model with two individual classes (COVID and Normal).
Figure 5. Average Precision (%), Average Recall (%), Average F1-Score (%), Average AUC (%), and Accuracy of the different models with two individual classes (COVID and Normal) for Dataset 1.
Figure 6. (a) Accuracy curves, (b) Loss curves, (c) Confusion Matrix, and (d) ROC curves of the proposed Multi-Scale CNN model with three individual classes (COVID, Normal, and Fibrosis).
Figure 7. Average Precision (%), Average Recall (%), Average F1-Score (%), Average AUC (%), and Accuracy of the different models with three individual classes (COVID, Normal, and Fibrosis) for Dataset 2.
Figure 8. (a) Accuracy curves, (b) Loss curves, (c) Confusion Matrix, and (d) ROC curves of the proposed Multi-Scale CNN model with three individual classes (COVID, Normal, and Tuberculosis).
Figure 9. Average Precision (%), Average Recall (%), Average F1-Score (%), Average AUC (%), and Accuracy of the different models with three individual classes (COVID, Normal, and Tuberculosis) for Dataset 3.
Figure 10. (a) Accuracy curves, (b) Loss curves, (c) Confusion Matrix, and (d) ROC curves of the proposed Multi-Scale CNN model with three individual classes (COVID, Bacterial Pneumonia, and Normal).
Figure 11. Average Precision (%), Average Recall (%), Average F1-Score (%), Average AUC (%), and Accuracy of the different models with three individual classes (COVID, Bacterial Pneumonia, and Normal) for Dataset 4.
Figure 12. (a) Accuracy curves, (b) Loss curves, (c) Confusion Matrix, and (d) ROC curves of the proposed Multi-Scale CNN model with four individual classes (COVID, Fibrosis, Normal, and Tuberculosis).
Figure 13. Average Precision (%), Average Recall (%), Average F1-Score (%), Average AUC (%), and Accuracy of the different models with four individual classes (COVID, Fibrosis, Normal, and Tuberculosis) for Dataset 5.
Figure 14. (a) Accuracy curves, (b) Loss curves, (c) Confusion Matrix, and (d) ROC curves of the proposed Multi-Scale CNN model with four individual classes (Bacterial Pneumonia, COVID, Fibrosis, and Normal).
Figure 15. Average Precision (%), Average Recall (%), Average F1-Score (%), Average AUC (%), and Accuracy of the different models with four individual classes (Bacterial Pneumonia, COVID, Fibrosis, and Normal) for Dataset 6.
Figure 16. (a) Accuracy curves, (b) Loss curves, (c) Confusion Matrix, and (d) ROC curves of the proposed Multi-Scale CNN model with four individual classes (Bacterial Pneumonia, COVID, Normal, and Tuberculosis).
Figure 17. Average Precision (%), Average Recall (%), Average F1-Score (%), Average AUC (%), and Accuracy of the different models with four individual classes (Bacterial Pneumonia, COVID, Normal, and Tuberculosis) for Dataset 7.
Figure 18. (a) Accuracy curves, (b) Loss curves, (c) Confusion Matrix, and (d) ROC curves of the proposed Multi-Scale CNN model with five individual classes (Bacterial Pneumonia, COVID, Fibrosis, Normal, and Tuberculosis).
Figure 19. Average Precision (%), Average Recall (%), Average F1-Score (%), Average AUC (%), and Accuracy of the different models with five individual classes (Bacterial Pneumonia, COVID, Fibrosis, Normal, and Tuberculosis) for Dataset 8.
Figure 20. (a) Accuracy curves, (b) Loss curves, (c) Confusion Matrix, and (d) ROC curves of the proposed Multi-Scale CNN model with six individual classes (Bacterial Pneumonia, COVID, Fibrosis, Normal, Tuberculosis, and Viral Pneumonia).
Figure 21. Average Precision (%), Average Recall (%), Average F1-Score (%), Average AUC (%), and Accuracy of the different models with six individual classes (Bacterial Pneumonia, COVID, Fibrosis, Normal, Tuberculosis, and Viral Pneumonia) for Dataset 9.
Figure 22. (a) Accuracy curves, (b) Loss curves, (c) Confusion Matrix, (d) ROC curves, and (e) Precision Recall curves of the proposed Multi-Scale CNN model with seven individual classes (Bacterial Pneumonia, COVID, Fibrosis, Lung Opacity, Normal, Tuberculosis, and Viral Pneumonia).
Figure 23. Average Precision (%), Average Recall (%), Average F1-Score (%), Average AUC (%), and Accuracy of the different models with seven individual classes (Bacterial Pneumonia, COVID, Fibrosis, Lung Opacity, Normal, Tuberculosis, and Viral Pneumonia) for Dataset 10.
Figure 24. SHAP Partition Explainer with image plot on a lung opacity sample; top two categories that the model thinks the sample belongs to are (a) Lung opacity and (b) COVID.
Figure 25. SHAP Partition Explainer with image plot on a Lung Opacity sample; predictions on all seven categories where the model thinks the sample is (a) Bacterial Pneumonia, (b) COVID, (c) Fibrosis, (d) Lung Opacity, (e) Normal, (f) Tuberculosis, and (g) Viral Pneumonia.
Figure 26. SHAP Partition Explainer with image plot on a Fibrosis sample; predictions on all seven categories where the model thinks the sample is (a) Bacterial Pneumonia, (b) COVID, (c) Fibrosis, (d) Lung Opacity, (e) Normal, (f) Tuberculosis, and (g) Viral Pneumonia.
Figure 27. Original CXR, heatmap, and super-imposed Grad-CAM image of the Multi-scale CNN model with two individual classes for one sample: (a) Fibrosis (on top) and (b) Tuberculosis (on bottom).
Figure 28. Comparison of performance metrics for all the ten datasets from Class-2 to Class-7 obtained by Multi-Scale CNN for identifying lung-affected diseases.
Figure 29. Comparison of computational time for all state-of-the-art (SOTA) models of Dataset 10.
24 pages, 3641 KiB  
Article
Deep-Learning-Based Visualization and Volumetric Analysis of Fluid Regions in Optical Coherence Tomography Scans
by Harishwar Reddy Kasireddy, Udaykanth Reddy Kallam, Sowmitri Karthikeya Siddhartha Mantrala, Hemanth Kongara, Anshul Shivhare, Jayesh Saita, Sharanya Vijay, Raghu Prasad, Rajiv Raman and Chandra Sekhar Seelamantula
Diagnostics 2023, 13(16), 2659; https://doi.org/10.3390/diagnostics13162659 - 12 Aug 2023
Cited by 1 | Viewed by 1701
Abstract
Retinal volume computation is one of the critical steps in grading pathologies and evaluating the response to a treatment. We propose a deep-learning-based visualization tool to calculate the fluid volume in retinal optical coherence tomography (OCT) images. The pathologies under consideration are Intraretinal Fluid (IRF), Subretinal Fluid (SRF), and Pigmented Epithelial Detachment (PED). We develop a binary classification model for each of these pathologies using the Inception-ResNet-v2 and the small Inception-ResNet-v2 models. For visualization, we use several standard Class Activation Mapping (CAM) techniques, namely Grad-CAM, Grad-CAM++, Score-CAM, Ablation-CAM, and Self-Matching CAM, to visualize the pathology-specific regions in the image and develop a novel Ensemble-CAM visualization technique for robust visualization of OCT images. In addition, we demonstrate a Graphical User Interface that takes the visualization heat maps as the input and calculates the fluid volume in the OCT C-scans. The volume is computed using both the region-growing algorithm and selective thresholding technique and compared with the ground-truth volume based on expert annotation. We compare the results obtained using the standard Inception-ResNet-v2 model with a small Inception-ResNet-v2 model, which has half the number of trainable parameters compared with the original model. This study shows the relevance and usefulness of deep-learning-based visualization techniques for reliable volumetric analysis.
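Once per-B-scan binary maps are available, the volume computation itself reduces to voxel counting times voxel size; a minimal sketch with placeholder voxel spacing (not the scanner's actual parameters) is shown below.

```python
# Minimal illustration of fluid-volume estimation from a stack of binary maps.
import numpy as np

c_scan_masks = np.random.rand(128, 496, 512) > 0.98   # stand-in stack of per-B-scan binary maps
voxel_mm3 = 0.011 * 0.0039 * 0.0116                    # dx * dy * dz in mm, hypothetical spacing

fluid_volume_mm3 = c_scan_masks.sum() * voxel_mm3
print(f"estimated fluid volume: {fluid_volume_mm3:.3f} mm^3")
```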
(This article belongs to the Section Pathology and Molecular Diagnostics)
Figure 1. Various pathologies of interest that are observed in OCT images: IRF: Intraretinal Fluid; SRF: Subretinal Fluid; and PED: Pigmented Epithelial Detachment.
Figure 2. Block diagram of (a) Inception-ResNet-v2 architecture; (b) Small Inception-ResNet-v2 architecture; and (c) Stem.
Figure 3. (a) Noisy OCT image and (b) Denoised OCT image using GCDS.
Figure 4. The pipeline for the training classification model: Path-1 is the pipeline for the Inception-ResNet-v2 GCDS model and small Inception-ResNet-v2 GCDS model, whereas Path-2 is the pipeline for the noisy model.
Figure 5. (a) GCDS denoised OCT image and (b) GradCAM output.
Figure 6. Outputs from the Inception-ResNet-v2 GCDS model: (a) OCT image; (b) ground truth; (c) Grad-CAM binary map; (d) Grad-CAM++ binary map; (e) Score-CAM binary map; (f) Ablation-CAM binary map; (g) Self-Matching-CAM binary map; and (h) Ensemble-CAM binary map.
Figure 7. The pipeline for volume computation: Path-1 is the pipeline for the Inception-ResNet-v2 GCDS model and small Inception-ResNet-v2 GCDS model, while Path-2 is the pipeline for the noisy model.
Figure 8. (a) Visualization output and (b) Graphical User Interface displaying the computed volume.
Figure 9. (a) OCT image; (b) binary mask obtained from visualization; (c) region of interest; (d) histogram of the region of interest; (e) predicted region of interest from the selective thresholding technique; and (f) ground-truth of the region of interest.
Figure 10. Outputs from the Inception-ResNet-v2 GCDS model: (a) OCT image; (b) Grad-CAM heat map of OCT image for IRF; (c) binary map of Grad-CAM heat map; and (d) corresponding ground truth.
Figure 11. Outputs from the Inception-ResNet-v2 GCDS model: (a) OCT image; (b) Grad-CAM heat map of OCT image for SRF; (c) binary map of Grad-CAM heat map; and (d) corresponding ground truth.
Figure 12. Outputs from the Inception-ResNet-v2 GCDS model: (a) OCT image; (b) Grad-CAM heat map of OCT image for PED; (c) binary map of Grad-CAM heat map; and (d) corresponding ground truth.
19 pages, 7996 KiB  
Article
An Explainable Vision Transformer Model Based White Blood Cells Classification and Localization
by Oguzhan Katar and Ozal Yildirim
Diagnostics 2023, 13(14), 2459; https://doi.org/10.3390/diagnostics13142459 - 24 Jul 2023
Cited by 8 | Viewed by 6118
Abstract
White blood cells (WBCs) are crucial components of the immune system that play a vital role in defending the body against infections and diseases. The identification of WBC subtypes is useful in the detection of various diseases, such as infections, leukemia, and other hematological malignancies. The manual screening of blood films is time-consuming and subjective, leading to inconsistencies and errors. Convolutional neural network (CNN)-based models can automate such classification processes, but are incapable of capturing long-range dependencies and global context. This paper proposes an explainable Vision Transformer (ViT) model for automatic WBC detection from blood films. The proposed model uses a self-attention mechanism to extract features from input images. Our proposed model was trained and validated on a public dataset of 16,633 samples containing five different types of WBCs. As a result of experiments on the classification of five different types of WBCs, our model achieved an accuracy of 99.40%. Moreover, examination of misclassified test samples revealed a correlation between incorrect predictions and the presence or absence of granules in the cell samples. To validate this observation, we divided the dataset into two classes, Granulocytes and Agranulocytes, and conducted a secondary training process. The resulting ViT model, trained for binary classification, achieved impressive performance metrics during the test phase, including an accuracy of 99.70%, recall of 99.54%, precision of 99.32%, and F-1 score of 99.43%. To ensure the reliability of the ViT model, we employed the Score-CAM algorithm to visualize the pixel areas on which the model focuses during its predictions. Our proposed method is suitable for clinical use due to its explainable structure as well as its superior performance compared to similar studies in the literature. The classification and localization of WBCs with this model can facilitate the detection and reporting process for the pathologist.
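The self-attention mechanism referred to above can be sketched with PyTorch's built-in multi-head attention; the token and layer sizes below are generic ViT-style values, not the trained model's configuration.

```python
# Tiny sketch of self-attention over patch tokens, as used inside a ViT block.
import torch
import torch.nn as nn

embed_dim, num_heads = 768, 12
tokens = torch.randn(1, 197, embed_dim)        # [CLS] token + 14x14 image patches

attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
norm = nn.LayerNorm(embed_dim)

x = norm(tokens)
out, weights = attn(x, x, x)                   # self-attention over all patch tokens
tokens = tokens + out                          # residual connection, as in a ViT block
print(weights.shape)                           # (1, 197, 197) attention map
```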
(This article belongs to the Special Issue Classifications of Diseases Using Machine Learning Algorithms)
Figure 1. Block representation of the proposed WBC classification and localization method.
Figure 2. An illustration of the dataset with number of classes and some class images.
Figure 3. The block diagram depicts the multi-head self-attention (MHA) mechanism.
Figure 4. Confusion matrices for (a) binary-class and (b) multi-class scenarios.
Figure 5. The ViT model performance curves during training. (a) Accuracy values and (b) Loss values.
Figure 6. The confusion matrix and some graphs of metrics for the multi-class test dataset.
Figure 7. A schematic representation for types of white blood cells.
Figure 8. The ViT model performance on Granulocytes and Agranulocytes classification.
Figure 9. The operational framework of the explainable ViT model.
Figure 10. The areas upon which the model correctly focuses its predictions on the test images.
Figure 11. Class softmax distribution on true predicted samples: (a) 5-class softmax distribution and (b) 2-class softmax distribution.
Figure 12. Some examples of misclassified samples and class probabilities during the test phase.
17 pages, 3617 KiB  
Article
Saliency Map and Deep Learning in Binary Classification of Brain Tumours
by Wojciech Chmiel, Joanna Kwiecień and Kacper Motyka
Sensors 2023, 23(9), 4543; https://doi.org/10.3390/s23094543 - 7 May 2023
Cited by 2 | Viewed by 3174
Abstract
The paper was devoted to the application of saliency analysis methods in the performance analysis of deep neural networks used for the binary classification of brain tumours. We have presented the basic issues related to deep learning techniques. A significant challenge in using deep learning methods is the ability to explain the decision-making process of the network. To ensure accurate results, the deep network being used must undergo extensive training to produce high-quality predictions. There are various network architectures that differ in their properties and number of parameters. Consequently, an intriguing question is how these different networks arrive at similar or distinct decisions based on the same set of prerequisites. Therefore, three widely used deep convolutional networks have been discussed, such as VGG16, ResNet50 and EfficientNetB7, which were used as backbone models. We have customized the output layer of these pre-trained models with a softmax layer. In addition, an additional network has been described that was used to assess the saliency areas obtained. For each of the above networks, many tests have been performed using key metrics, including statistical evaluation of the impact of class activation mapping (CAM) and gradient-weighted class activation mapping (Grad-CAM) on network performance on a publicly available dataset of brain tumour X-ray images.
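A typical way to compare the saliency regions of two networks, as done here with IoU, is to binarize each map and measure their overlap; the short sketch below uses random arrays as stand-ins for CAM heatmaps.

```python
# Small sketch: intersection-over-union between two binarized saliency maps.
import numpy as np

def saliency_iou(map_a, map_b, threshold=0.5):
    a = map_a >= threshold
    b = map_b >= threshold
    intersection = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return intersection / union if union else 0.0

cam_vgg = np.random.rand(224, 224)       # stand-in CAM heatmaps in [0, 1]
cam_resnet = np.random.rand(224, 224)
print(f"IoU = {saliency_iou(cam_vgg, cam_resnet):.3f}")
```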
(This article belongs to the Special Issue Machine and Deep Learning in Sensing and Imaging)
Figure 1. Evaluation framework of saliency methods.
Figure 2. Brain image examples.
Figure 3. Scheme of the architectures used in the research.
Figure 4. Confusion matrices for considered networks.
Figure 5. Training and validation accuracy of the proposed models.
Figure 6. Binary comparison of IoU for all combinations of trained networks for the CAM method.
Figure 7. Matrix showing the average difference of Cartesian distance for four neural network architectures (the CAM method); (a) case: all test images, (b) case: all test images containing the brain tumour.
Figure 8. Binary comparison of IoU for all combinations of trained networks for the Grad-CAM method.
Figure 9. Matrix showing the average difference of Cartesian distance for four neural network architectures (Grad-CAM method); (a) case: all test images, regardless of whether they contain brain tumour, (b) case: only test images that contain brain tumour.
Figure 10. Comparison of CAM and Grad-CAM methods.
Figure 11. Examples of saliency maps obtained by CAM and Grad-CAM.
13 pages, 279 KiB  
Article
Factors Associated with the Use of Complementary and Alternative Medicine/Therapy among United States Adults with Asthma
by Chukwuemeka E. Ogbu, Chisa O. Oparanma and Russell S. Kirby
Healthcare 2023, 11(7), 983; https://doi.org/10.3390/healthcare11070983 - 30 Mar 2023
Cited by 3 | Viewed by 2951
Abstract
This article examined the sociodemographic and health-related factors associated with the use of complementary and alternative medicine/therapy (CAM) among adults with current asthma in the United States. We used data from 76,802 adults aged 18 years and above from the 2012–2019 Behavioral Risk Factor Surveillance System (BRFSS) Asthma Call-back Survey (ACBS) cycles. Weighted binary and multinomial logistic regression were used to examine the association of these factors with ever using CAM and with the number of CAMs used. We found that approximately 45.2% of US adults with asthma had ever used CAM. Among adults with asthma, 25.3% and 19.9% endorsed using one CAM and ≥2 CAMs, respectively. CAM use was significantly associated with adults ≤ 35 years, female gender, multiple/other race/ethnicity, higher cost barriers, adults with two or more disease comorbidities, and those with poorly controlled asthma in both binary and multinomial models. CAM use was not associated with insurance and income status. Understanding factors associated with CAM use can provide asthma care professionals valuable insights into the underlying drivers of CAM use behavior in this population, enabling them to offer more informed and effective medical advice and guidance.
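The multinomial model for the number of CAMs used can be sketched with statsmodels; variable names below are hypothetical, and survey weighting is omitted for brevity.

```python
# Sketch (hypothetical variable names) of a multinomial logistic regression for the
# number of CAMs used (0, 1, >=2), analogous to the analysis described above.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("acbs_asthma.csv")      # hypothetical analysis file
# n_cam_group coded 0, 1, 2 for "none", "one CAM", "two or more CAMs"
fit = smf.mnlogit("n_cam_group ~ age_group + female + cost_barrier + asthma_control",
                  data=df).fit()
print(fit.summary())
```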
14 pages, 2594 KiB  
Article
Solid–Liquid Equilibrium in Co-Amorphous Systems: Experiment and Prediction
by Alžběta Zemánková, Fatima Hassouna, Martin Klajmon and Michal Fulem
Molecules 2023, 28(6), 2492; https://doi.org/10.3390/molecules28062492 - 8 Mar 2023
Cited by 3 | Viewed by 2281
Abstract
In this work, the solid–liquid equilibrium (SLE) of four binary systems combining two active pharmaceutical ingredients (APIs) capable of forming co-amorphous systems (CAMs) was investigated. The binary systems studied were naproxen-indomethacin, naproxen-ibuprofen, naproxen-probucol, and indomethacin-paracetamol. The SLE was experimentally determined by differential scanning calorimetry. The thermograms obtained revealed that all binary mixtures investigated form eutectic systems. Melting of the initial binary crystalline mixtures and subsequent quenching lead to the formation of CAM for all binary systems and most of the compositions studied. The experimentally obtained liquidus and eutectic temperatures were compared to theoretical predictions using the perturbed-chain statistical associating fluid theory (PC-SAFT) equation of state and conductor-like screening model for real solvents (COSMO-RS), as implemented in the Amsterdam Modeling Suite (COSMO-RS-AMS). On the basis of the obtained results, the ability of these models to predict the phase diagrams for the investigated API–API binary systems was evaluated. Furthermore, the glass transition temperature (Tg) of naproxen (NAP), a compound with a high tendency to recrystallize, whose literature values are considerably scattered, was newly determined by measuring and modeling the Tg values of binary mixtures in which amorphous NAP was stabilized. Based on this analysis, erroneous literature values were identified.
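The "ideal solubility" liquidus used as a reference line in such phase diagrams follows the Schröder–van Laar equation; the sketch below evaluates it for hypothetical melting data (not the measured values for these APIs).

```python
# Hedged example: the ideal-solubility liquidus follows from the Schroeder-van Laar
# equation, ln x_i = -(dH_fus,i / R) * (1/T - 1/T_m,i), solved here for T.
import numpy as np

R = 8.314  # J mol^-1 K^-1

def ideal_liquidus(x, t_melt, dh_fus):
    """Liquidus temperature (K) of the component present at mole fraction x."""
    return 1.0 / (1.0 / t_melt - R * np.log(x) / dh_fus)

x = np.linspace(0.3, 1.0, 8)
t_liq = ideal_liquidus(x, t_melt=428.0, dh_fus=31000.0)   # hypothetical API melting data
for xi, ti in zip(x, t_liq):
    print(f"x = {xi:.2f}  ->  T_L = {ti:.1f} K")
```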
(This article belongs to the Special Issue Exclusive Feature Papers in Physical Chemistry)
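For context, the liquidus predictions compared in this study start from the standard solid–liquid equilibrium relation below (a textbook form quoted here for orientation, with the usual heat-capacity correction omitted); the "ideal solubility" curve corresponds to setting the activity coefficient to one, while PC-SAFT and COSMO-RS-AMS supply non-ideal activity coefficients.

\[
\ln\left(x_i \gamma_i\right) = -\frac{\Delta_{\mathrm{fus}} H_i}{R}\left(\frac{1}{T}-\frac{1}{T_{\mathrm{fus},i}}\right),
\qquad \gamma_i = 1 \ \text{(ideal solubility)},
\]

where \(x_i\) and \(\gamma_i\) are the liquid-phase mole fraction and activity coefficient of API \(i\), and \(\Delta_{\mathrm{fus}} H_i\) and \(T_{\mathrm{fus},i}\) are its enthalpy and temperature of fusion.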
Show Figures
Figure 1. Examples of DSC thermograms. (a) Thermograms recorded for the initial crystalline mixtures at a heating rate of 2 °C min⁻¹; arrows indicate the eutectic and liquidus peaks. (b) Thermograms obtained after melting the crystalline mixtures, quenching them, and reheating at 10 °C min⁻¹; arrows indicate the glass transition temperatures.
Figure 2. Phase diagrams for (a) NAP-IND, (b) NAP-IBU, (c) NAP-PRO, and (d) IND-PAR. Black squares: experimental liquidus temperatures TL; black circles: experimental eutectic temperatures TE; solid blue line: TL predicted by the PC-SAFT EOS (kij = 0); solid green line: TL predicted by COSMO-RS-AMS; dashed red line: ideal solubility; black solid line: mean value of TE. The PC-SAFT calculations involving PRO are based on an approximative parametrization (see Section 3.3.2 of the article).
Figure 3. Tg values for the NAP-IND, NAP-IBU, and NAP-PRO mixtures measured by DSC (squares) and fitted with the Kwei equation, Equation (8) (lines), as a function of the molar fraction of NAP, xNAP. The fit by the Gordon–Taylor equation, Equation (7), is not shown, as it cannot be distinguished from the Kwei fit. (The generic forms of both equations are sketched after this figure list.)
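The mixture-Tg models named in the Figure 3 caption have the following generic forms (standard expressions quoted for orientation; they may differ in notation from the article's Equations (7) and (8)):

\[
T_g^{\mathrm{GT}} = \frac{w_1 T_{g,1} + K\, w_2 T_{g,2}}{w_1 + K\, w_2},
\qquad
T_g^{\mathrm{Kwei}} = \frac{w_1 T_{g,1} + K\, w_2 T_{g,2}}{w_1 + K\, w_2} + q\, w_1 w_2,
\]

where \(w_1, w_2\) are the weight fractions of the components, \(T_{g,1}, T_{g,2}\) their glass transition temperatures, and \(K\) and \(q\) adjustable parameters (the \(q\,w_1 w_2\) term of the Kwei equation accounting for specific interactions such as hydrogen bonding).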
7 pages, 970 KiB  
Brief Report
Artificial Intelligence Based Analysis of Corneal Confocal Microscopy Images for Diagnosing Peripheral Neuropathy: A Binary Classification Model
by Yanda Meng, Frank George Preston, Maryam Ferdousi, Shazli Azmi, Ioannis Nikolaos Petropoulos, Stephen Kaye, Rayaz Ahmed Malik, Uazman Alam and Yalin Zheng
J. Clin. Med. 2023, 12(4), 1284; https://doi.org/10.3390/jcm12041284 - 6 Feb 2023
Cited by 15 | Viewed by 2176
Abstract
Diabetic peripheral neuropathy (DPN) is the leading cause of neuropathy worldwide, resulting in excess morbidity and mortality. We aimed to develop an artificial intelligence deep learning algorithm (DLA) to classify the presence or absence of peripheral neuropathy (PN) in participants with diabetes or pre-diabetes using corneal confocal microscopy (CCM) images of the sub-basal nerve plexus. A modified ResNet-50 model was trained to perform the binary classification of PN (PN+) versus no PN (PN−) based on the Toronto consensus criteria. A dataset of 279 participants (149 PN−, 130 PN+) was used to train (n = 200), validate (n = 18), and test (n = 61) the algorithm, utilizing one image per participant. The dataset consisted of participants with type 1 diabetes (n = 88), type 2 diabetes (n = 141), and pre-diabetes (n = 50). The algorithm was evaluated using diagnostic performance metrics and attribution-based methods (gradient-weighted class activation mapping (Grad-CAM) and Guided Grad-CAM). In detecting PN+, the DLA achieved a sensitivity of 0.91 (95% CI: 0.79–1.0), a specificity of 0.93 (95% CI: 0.83–1.0), and an area under the curve (AUC) of 0.95 (95% CI: 0.83–0.99). Our deep learning algorithm demonstrates excellent performance for the diagnosis of PN using CCM. A large-scale prospective real-world study is required to validate its diagnostic efficacy prior to implementation in screening and diagnostic programmes. Full article
(This article belongs to the Section Clinical Neurology)
Show Figures
Figure 1. ROC (receiver operating characteristic) curve for PN+. The black line corresponds to the ROC curve and the blue area to the 95% confidence interval. Abbreviations: PN+, peripheral neuropathy.
Figure 2. Example saliency map images from correctly detected PN− (columns 1–3) and PN+ (columns 4–6). Highlighted areas within the Grad-CAM and Guided Grad-CAM images demonstrate the areas in the image which impacted the classification decision most; for the Grad-CAM images, areas highlighted in red had the most impact, followed by orange, yellow, green, light blue, and dark blue. Top row: original images; middle row: Grad-CAM images; bottom row: Guided Grad-CAM images. Abbreviations: PN+, peripheral neuropathy; PN−, no peripheral neuropathy.
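To make the Grad-CAM attribution used above concrete, here is a minimal, generic sketch in Python/PyTorch for a two-class ResNet-50. It is illustrative only and is not the authors' implementation: their preprocessing, layer choice, and the Guided Grad-CAM variant are omitted, and a recent torchvision is assumed.

    import torch
    import torch.nn.functional as F
    from torchvision import models

    # Two-class ResNet-50 (PN- vs PN+); weights here are untrained placeholders.
    model = models.resnet50(weights=None)
    model.fc = torch.nn.Linear(model.fc.in_features, 2)
    model.eval()

    # Capture the activations and gradients of the last convolutional stage.
    feats, grads = {}, {}
    model.layer4.register_forward_hook(lambda m, i, o: feats.update(a=o))
    model.layer4.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

    img = torch.randn(1, 3, 224, 224)          # placeholder for a CCM image tensor
    score = model(img)[0, 1]                   # logit of the PN+ class
    model.zero_grad()
    score.backward()

    # Grad-CAM: weight each feature map by its spatially averaged gradient,
    # sum, apply ReLU, upsample to the image size, and normalise to [0, 1].
    w = grads["a"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((w * feats["a"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=img.shape[2:], mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

The resulting map can be overlaid on the original CCM image as a heatmap, as in the saliency maps of Figure 2.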
18 pages, 3929 KiB  
Review
Ferroelectric Devices for Content-Addressable Memory
by Mikhail Tarkov, Fedor Tikhonenko, Vladimir Popov, Valentin Antonov, Andrey Miakonkikh and Konstantin Rudenko
Nanomaterials 2022, 12(24), 4488; https://doi.org/10.3390/nano12244488 - 19 Dec 2022
Cited by 6 | Viewed by 3925
Abstract
In-memory computing is an attractive solution for reducing power consumption and memory access latency by performing certain computations directly in memory, without reading operands and sending them to arithmetic logic units. Content-addressable memory (CAM) is an ideal way to blur the distinction between storage and processing, since each memory cell is a processing unit. A CAM compares the search input with a table of stored data and returns the address of the matched data. The issues involved in constructing binary and ternary content-addressable memory (CAM and TCAM) based on ferroelectric devices are considered. A review of ferroelectric materials and devices is carried out, covering ferroelectric field-effect transistors (FeFETs), ferroelectric tunnel junctions (FTJs), and ferroelectric memristors. Full article
(This article belongs to the Special Issue Redox-Based Resistive Nanomemristor for Neuromorphic Computing)
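To make the lookup semantics concrete, the following small Python sketch mimics the behaviour described above: a binary CAM returns the addresses of entries that equal the search key exactly, while a TCAM additionally supports per-bit "don't care" states via a mask. In hardware all entries are compared in parallel; the loops here are purely illustrative.

    # Binary CAM: return the addresses whose stored word equals the search key.
    def cam_search(table, key):
        return [addr for addr, word in enumerate(table) if word == key]

    # Ternary CAM: each entry is (value, mask); mask bits set to 1 must match,
    # mask bits set to 0 are "don't care" (the X state of a TCAM cell).
    def tcam_search(table, key):
        return [addr for addr, (value, mask) in enumerate(table)
                if (value ^ key) & mask == 0]

    if __name__ == "__main__":
        bcam = [0b1010, 0b0110, 0b1010]
        print(cam_search(bcam, 0b1010))     # -> [0, 2]

        tcam = [(0b1010, 0b1111),           # exact match required
                (0b1000, 0b1100)]           # low two bits are don't-care
        print(tcam_search(tcam, 0b1011))    # -> [1]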
Show Figures
Graphical abstract
Figure 1. CAM block diagram [18].
Figure 2. TCAM cell [24].
Figure 3. (a) FeRAM, (b) FeFET, and (c) FTJ [34].
Figure 4. Classic ferroelectric materials: Pb(Zr,Ti)O3, SrBi2Ta2O9, and BiFeO3 [34].
Figure 5. FeFET design change to improve the field uniformity in the ferroelectric along the channel. Copyright © 2020, IEEE [99].
Figure 6. Various ferroelectric devices with a three-terminal structure (upper row) and the corresponding synaptic plasticity characteristics (lower row): (A) MFIS; (B) MFMIS (left) and MFMIS with the MFM capacitor integrated in the BEOL and the underlying MOSFET in the FEOL (right); (C) FeTFT. Schematics of FeFETs with different channel geometries and synaptic potentiation (a) or depression (b) [73].
Figure 7. Energy band landscapes for structures with different ferroelectric polarization states: (A) metal/ferroelectric/metal (M/F/M), (B) metal/ferroelectric/interlayer/metal (M/F/IL/M), and (C) metal/ferroelectric/interlayer/semiconductor (M/F/IL/S). The band structures are drawn with a band-diagram program using schematic Thomas–Fermi screening lengths for the metals [113,114]; the interlayers of M/F/IL/M and M/F/IL/S are Al2O3 and SiO2, respectively. (D) Conductance ratio of FTJs with TiN/Hf0.5Zr0.5O2 (HZO)/ZrO2/TiN (M/F/IL/M) and TiN/HZO/ZrO2/poly-Si (M/F/IL/S) structures as a function of pulse amplitude. (E) Retention characteristics of the M/F/IL/M and M/F/IL/S FTJs; both devices were measured after wake-up field cycling of 10^6 cycles with ±6 V/500 ns and 10 V/500 ns, respectively. (F) Relative permittivity of an M/F/M capacitor as a function of the number of field cycles; the inset shows the device schematic (right) and a hysteresis loop of the relative permittivity (left). The relative permittivity was extracted from the small-signal capacitance measured at 0 V with a bias amplitude of 30 mV; the program/erase pulse amplitudes were 3 V/−3 V. (D,E) Reproduced with permission [104], Copyright 2021, IEEE. (F) Reproduced with permission [115], Copyright © 2021, IEEE [73].
Figure 8. (a–c) Examples of logarithmic ROFF/RON and polarization, respectively, as a function of switching voltage. Copyright © 2012, 2019 American Chemical Society [122,123,124].
Figure 9. TCAM cell designs based on: (a) CMOS static random-access memory, requiring 16 transistors (BL: bit line; ML: pre-charged matchline; SL: search line; WL: write line); (b) resistive storage elements; (c) four CMOS FETs and two FeFETs; (d) two FeFETs [https://arxiv.org/abs/2101.06375, accessed 10 December 2022].