Article

DeMambaNet: Deformable Convolution and Mamba Integration Network for High-Precision Segmentation of Ambiguously Defined Dental Radicular Boundaries

1 School of Automation, Hangzhou Dianzi University, Hangzhou 310018, China
2 The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou 310027, China
3 Lishui Institute, Hangzhou Dianzi University, Lishui 323000, China
* Authors to whom correspondence should be addressed.
These authors contributed equally to this work.
Sensors 2024, 24(14), 4748; https://doi.org/10.3390/s24144748
Submission received: 30 May 2024 / Revised: 17 July 2024 / Accepted: 19 July 2024 / Published: 22 July 2024
(This article belongs to the Special Issue Biomedical Imaging, Sensing and Signal Processing)
Figure 1. Schematic representation of the Deformable Convolution and Mamba Integration Network (DeMambaNet), integrating a Coalescent Structural Deformable Encoder, a Cognitively-Optimized Semantic Enhance Module, and a Hierarchical Convergence Decoder.
Figure 2. Schematic representation of the CSDE, which integrates a State Space Pathway, based on SSM, in the upper section, and an Adaptive Deformable Pathway, based on DCN, in the lower section.
Figure 3. The schematic depiction of each hierarchical stage, composed of DCNv3, LN, and MLP, utilizes DCNv3 as its core operator for efficient feature extraction.
Figure 4. The schematic depiction of the TSMamba block involves GSC, ToM, LN, and MLP, collectively enhancing input feature processing and representation.
Figure 5. The schematic depiction of the SEM, which combines encoder outputs through concatenation, applies Conv, BN, and ReLU and then enhances features with the MLP and LVC. The MLP captures global dependencies, while the LVC focuses on local details.
Figure 6. The schematic illustrates the HCD, incorporating a multi-layered decoder structure. Each tier combines convolutional and deconvolutional layers for feature enhancement and upsampling, and it is equipped with the TAFI designed specifically for feature fusion.
Figure 7. Schematic representation of the TAFI, which combines features from the encoder’s two pathways and uses local and global attention modules to emphasize important information.
Figure 8. Box plot showcasing the evaluation metrics from training results. On the x-axis, the models are labeled as follows: (a) ENet; (b) ICNet; (c) LEDNet; (d) OCNet; (e) PSPNet; (f) SegNet; (g) VM-UNet; (h) Attention U-Net; (i) R2U-Net; (j) UNet; (k) UNet++; (l) TransUNet; (m) Dense-UNet; (n) Mamba-UNet; (o) DeMambaNet (ours).
Figure 9. A few segmentation results comparing our proposed method with existing state-of-the-art models. The segmentation result of the teeth is shown in green. The red dashed line represents the ground truth.
Figure 10. Box plot showcasing the evaluation metrics of ablation experiments. On the x-axis, the models are labeled as follows: (a) w/o SSP; (b) w/o ADP; (c) w/o TAFI; (d) w/o SEM; (e) DeMambaNet (ours).

Abstract

The incorporation of automatic segmentation methodologies into dental X-ray image analysis has refined the paradigms of clinical diagnostics and therapeutic planning by facilitating meticulous, pixel-level articulation of both dental structures and proximate tissues. This underpins the pillars of early pathological detection and meticulous disease progression monitoring. Nonetheless, conventional segmentation frameworks often encounter significant setbacks attributable to the intrinsic limitations of X-ray imaging, including compromised image fidelity, obscured delineation of structural boundaries, and the intricate anatomical structures of dental constituents such as pulp, enamel, and dentin. To surmount these impediments, we propose the Deformable Convolution and Mamba Integration Network, an innovative 2D dental X-ray image segmentation architecture, which amalgamates a Coalescent Structural Deformable Encoder, a Cognitively-Optimized Semantic Enhance Module, and a Hierarchical Convergence Decoder. Collectively, these components bolster the management of multi-scale global features, fortify the stability of feature representation, and refine the amalgamation of feature vectors. A comparative assessment against 14 baselines underscores its efficacy, registering a 0.95% enhancement in the Dice Coefficient and a diminution of the 95th percentile Hausdorff Distance to 7.494.

1. Introduction

Dental diseases, encompassing periodontal afflictions and caries, are not only confined to oral complications but also implicate broader systemic health ramifications. Numerous studies have corroborated the significant correlation between such dental conditions and elevated risks of cardiovascular diseases, including coronary artery disease, myocardial infarction, and cerebrovascular incidents like strokes [1,2]. Moreover, these conditions are associated with an increased likelihood of ischemic and hemorrhagic strokes as well as cerebral ischemia. The precision of tooth segmentation is crucial in guiding clinical diagnostics and surgical planning. During orthodontic treatments, dentists must monitor tooth movement and root resorption to assess the health status of teeth and correct malocclusions, thus reducing treatment durations [3]. Accurate segmentation from panoramic dental X-ray images is fundamental in this process. The clinical significance of tooth segmentation extends to early diagnostics, enabling the monitoring of progressive dental conditions and assisting in treatment planning. It facilitates the detection of caries, periodontal disease, and developmental anomalies [4]. High-precision, quantitative segmentation methods are thus an essential clinical requirement for preventing and diagnosing dental conditions.
Advancements in automated medical image diagnosis leveraging neural networks for extensive medical image data analysis continue to evolve [5,6]. However, applying these methods to panoramic dental X-ray image segmentation presents numerous challenges, primarily due to the fundamental principles of X-ray imaging, the anatomical and biophysical characteristics of human teeth, and the imaging process itself [7]. X-ray imaging captures images by exploiting the absorption differences when X-rays penetrate materials of varying densities and atomic numbers, creating a two-dimensional representation of three-dimensional tooth structures. The similarity in density and composition of teeth (including dentin and enamel) and adjacent structures (such as gums and alveolar bones), especially when tooth roots meet the jawbone, results in ambiguous boundaries in these regions. Anatomically, the high variability in individual tooth anatomy and the complexity of the root canal system further complicate image processing. From a biophysical perspective, the similarities in X-ray density between teeth and alveolar bones lead to blurred boundaries due to X-rays’ high penetrative and scattering effects. In addition, the projective nature of X-ray images projects three-dimensional structures into a two-dimensional plane, leading to overlaps and intersections that are particularly challenging to discern, especially in multi-rooted, curved teeth. As X-rays penetrate dense tooth and bone regions, scattering effects can also increase background noise and decrease image contrast, further complicating boundary definition. Various artifacts such as scattering, halos, and obstructions may occur during the imaging process, along with imaging equipment limitations and patient movement, which can introduce image noise, thereby diminishing the overall image quality and clarity of dental and surrounding structures.
To augment the fidelity of boundary delineation in panoramic dental X-ray imagery, it is crucial to meticulously account for the inherent constraints of X-ray imaging alongside the anatomical and physiological idiosyncrasies of dental structures and their distinct biophysical attributes. Enhancement of image processing algorithms, especially in regions proximal to tooth roots, necessitates the deployment of models that can a priori emulate dental configurations, concentrating on pivotal areas while conforming to the complex geometries of teeth and their roots. In response to these exigencies, we propose an innovative architecture, termed Deformable Convolution and Mamba Integration Network (DeMambaNet), engineered to address the challenges of segmenting ambiguously defined dental radicular boundaries in panoramic dental X-ray images. This architecture is fortified with three novel components: the Coalescent Structural Deformable Encoder (CSDE), the Cognitively Optimized Semantic Enhancement Module (SEM), and the Hierarchical Convergence Decoder (HCD). The CSDE amalgamates the Deformable Convolution Network (DCN) with the State Space Model (SSM) to harvest multi-scale features and manage spatial dependencies over extended distances. Concurrently, the SEM refines feature representation through sophisticated fusion and encoding strategies, whereas the HCD seamlessly amalgamates features across multiple dimensions, facilitating meticulous detail enhancement from macroscopic to microscopic scales.
The principal contributions of this study are outlined as follows:
  • Proposed DeMambaNet for panoramic dental X-ray segmentation, incorporating a dual-pathway encoder capable of multilevel feature extraction to address challenges such as the density concordance between dental and osseous tissues, intricate root geometries, and evident dental overlaps. The source code is available on GitHub (https://github.com/IMOP-lab/DeMambaNet) to catalyze expansive research and clinical adoption.
  • Proposed the HCD for stratified feature fusion, orchestrating and equilibrating local and global information while maintaining diversity in feature representation through the Triplet Attentional Feature Integration (TAFI) module across the various decoding phases.
  • Implementation of Deformable Convolution and State Space Models to enhance proficiency in managing the overlaps and intersections that arise when three-dimensional dental structures are compressed into two-dimensional representations, through the dynamic adaptability of the DCN and the spatial resolution capabilities of the SSM.
Section 2 discusses the related research, Section 3 elaborates on the methods used, and Section 4 analyzes the results of the proposed approaches. Section 5 presents the discussion, and Section 6 presents the conclusions and future work.

2. Related Works

2.1. Traditional Computational Approaches in Dental X-ray Segmentation

Segmentation methodologies for dental X-ray imagery have traditionally employed diverse computational strategies to augment diagnostic accuracy. Initially, systems such as fuzzy inference, Bayesian classifiers, and Support Vector Machines were predominant in dental imaging segmentation tasks [8]. These methodologies often necessitated manual expert input to generate precise rule sets, which presented substantial barriers to scalability and utility in routine clinical applications.

2.2. Advancements in CNN-Based Dental Image Segmentation

Recent advancements have seen a paradigm shift towards deep learning technologies, particularly by integrating Convolutional Neural Networks (CNNs) for the semantic segmentation of panoramic dental X-ray images [9,10,11]. This transition facilitates enhanced feature extraction capabilities and superior classification accuracy of dental anomalies. Additionally, Buhari et al. [12] have amalgamated fuzzy C-means clustering and level set methods with sophisticated frameworks such as Faster R-CNN and YOLO V5 to tackle the complexities of dental image segmentation and caries detection, thereby illustrating the potent capabilities of deep learning in managing intricate image structures.
A notable advancement within this realm is the adoption of U-Net-based architectures, significantly improving medical image segmentation through refined skip connections and deep supervision techniques. These architectures efficiently harness multi-scale features and integrate sophisticated attention mechanisms [13,14], thus demonstrating substantial promise in dental image segmentation tasks [15]. However, these methods are inherently susceptible to variations in image quality, which may influence diagnostic results, particularly in scenarios involving overlapping dental structures.

2.3. Exploration of State Space Models in Image Segmentation

Concurrently with CNN advancements, innovative methodologies like the SegMamba [16] approach have adopted state space models for spatial feature analysis, incorporating tri-oriented spatial Mamba blocks for comprehensive and multi-scale feature representation. This strategy has demonstrated considerable efficacy in precise segmentation prediction by amalgamating multi-scale global insights through a convolutional 3D decoder and by employing skip connections for enhanced feature preservation.

2.4. Advancements in Image Segmentation with Deformable Convolutions

The recent InternImage [17] backbone has revolutionized the integration of deformable convolution networks, offering substantial improvements in handling image segmentation and object detection tasks. This model sets a new standard by achieving remarkable performance metrics on challenging datasets like ImageNet and COCO, thus bridging the gap between traditional CNNs and Vision Transformers.

2.5. Feature Enhancement and Fusion Techniques for Improved Segmentation

Efficient feature enhancement and fusion methods such as Efficient Vision Center (EVC) [18] and Attentional Feature Fusion (AFF) [19] are widely applied in the computer vision field. The EVC utilizes a dual approach: a lightweight Multi-Layer Perceptron (MLP) captures global long-range dependencies while a vision center mechanism preserves local details. These are concatenated to form an enriched feature map that effectively balances global and local information. In feature fusion, the AFF method furthers the integration process by employing a multi-scale channel attention mechanism. This mechanism refines the feature maps by selectively weighting features through learned attention weights, thus optimizing the feature integration across different scales.
These methodologies collectively underscore a transition from traditional rule-based systems to more sophisticated, data-driven approaches in dental image segmentation, highlighting the importance of advanced machine learning techniques in improving diagnostic accuracy and operational efficiency in dental care.

3. Methods

The primary challenges arise from the intricacies of radiographic imaging techniques, the complex anatomical structure of teeth and their biophysical properties. Firstly, X-ray imaging relies on the differential absorption rates of X-rays by various materials, but the density and atomic numbers of dentin, enamel, surrounding alveolar bone, and gums are closely matched. This similarity is particularly pronounced at the interfaces between tooth roots and the jawbone, leading to reduced image contrast and complicating the boundary detection. From an image processing perspective, the projective nature of X-ray images means that the three-dimensional structure of teeth appears overlapped and intertwined in two dimensions, posing significant challenges, especially with curved multi-rooted teeth. Additionally, the scattering of X-rays in areas of high density introduces noise, degrading image quality. These factors collectively hinder traditional image processing techniques from accurately delineating different dental structures.
To address these challenges, as depicted in Figure 1, we propose the “Deformable Convolution and Mamba Integration Network” model, a novel 2D dental X-ray image segmentation framework that leverages a dual-pathway encoder structure incorporating DCN and SSM. This model features a Coalescent Structural Deformable Encoder that exploits the distinct characteristics of each pathway, combining DCN and SSM to extract multi-scale features and manage global long-range dependencies. The cognitively-optimized Semantic Enhance Module also integrates and enhances feature outputs, balancing local detail and global information representation through an efficient coding strategy and architectural optimizations. Finally, the Hierarchical Convergence Decoder dynamically fuses features across multiple scales to ensure detailed processing from coarse to fine resolutions. Each of these modules, CSDE, SEM, and HCD, is designed to address the specific complexities encountered in dental X-ray image segmentation, with subsequent sections discussing their design principles and functionalities in detail.

3.1. Coalescent Structural Deformable Encoder (CSDE)

We employ a dual-pathway encoding strategy to address the significant challenges presented by dental X-ray imaging, particularly the density similarity between dental tissues and bone, complex root geometries, and overlapping of adjacent tooth structures. This strategy manifests in the CSDE, which coalesces a pathway based on DCN and another grounded in SSM, as illustrated in Figure 2.
The design philosophy governing the parallel coalescence of the encoder’s dual pathways is rooted in their synergistic interplay. It effectively leverages Deformable Convolution Networks’ adaptive feature extraction capabilities alongside the efficient management of long-range dependencies facilitated by the Spatial Semantic Module. This integration strategy enables sophisticated modulations of the convolutional kernels within DCN, thereby dynamically refining feature maps. Simultaneously, the SSM contributes to the generation of precise spatial feature representations. This design paradigm establishes a robust framework for the coherent amalgamation of local and global information streams, adeptly addressing the complex challenges of dental configurations and morphologies, especially in areas of homogeneous density. The architecture is instrumental in achieving comprehensive macroscopic localization and microscopic delineation of dental boundaries and internal structures, enhancing diagnostic accuracy, and clinical utility.
The lower part of Figure 2 depicts the Adaptive Deformable Pathway, utilizing DCNv3. This advanced convolution technique adjusts the shape and position of the convolutional kernels dynamically through learnable offsets and modulation factors, allowing the model to respond adaptively to local structural deformations and irregularities within dental X-ray images. The pathway initiates with a Stem module that performs successive convolutions, transforming the tri-channel input image into a feature map, subsequently processed through GELU activation functions and normalization layers. This treated feature map then progresses through four hierarchical stages of feature extraction, each stage comprising several base blocks focused on further refining the feature detail. As shown in Figure 3, each base block harnesses DCNv3 as its core operator for effective feature extraction, augmented by grouped convolution techniques to enhance the model’s expressive capacity. Incorporating DropPath [20] technology within these blocks randomly drops connections during training, mitigating overfitting. Each convolutional operation is accompanied by LayerNorm normalization and GELU activation, ensuring data stability and introducing essential non-linearity. Selected blocks include feature scaling to accentuate critical features and enhance recognition capabilities. Following the comprehensive feature extraction across stages, the output feature maps of each stage are downsampled to expand the receptive field and reduce computational demands progressively, culminating in an integrated output sequence that encapsulates multi-scale features from granular to macroscopic levels. The computational process for each stage level can be defined as:
$$G(f) = \mathrm{LN}\big(\mathrm{MLP}(\mathrm{LN}(\mathrm{DCN}(f)) + f) + \mathrm{LN}(\mathrm{DCN}(f))\big)$$
$$\mathrm{Stage}^{(i+1)} = \mathrm{Iterate}\big(G, \mathrm{Stage}^{(i)}, L_i\big)$$
where $G$ represents each base block, $L_i$ denotes the number of base blocks in the $i$-th stage layer, and $\mathrm{Iterate}(G, \mathrm{Stage}^{(i)}, L_i)$ indicates the iteration of $G$ over $\mathrm{Stage}^{(i)}$ for $L_i$ times. DCN stands for Deformable Convolution, and LN for Layer Normalization.
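To make the stage computation above concrete, the following PyTorch sketch mirrors the structure of $G(f)$ and the stage iteration. It is a minimal illustration, not the authors' exact implementation: a plain 3 × 3 convolution stands in for the DCNv3 operator, LayerNorm is approximated with GroupNorm over channels, and the channel count and MLP expansion ratio are assumed values.

```python
import torch
import torch.nn as nn

class BaseBlock(nn.Module):
    """Sketch of one base block G(f) = LN(MLP(LN(DCN(f)) + f) + LN(DCN(f)))."""

    def __init__(self, channels: int, mlp_ratio: int = 4):
        super().__init__()
        # Placeholder for the DCNv3 operator: any channel-preserving spatial op fits here.
        self.dcn = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.ln1 = nn.GroupNorm(1, channels)   # channel-wise LayerNorm surrogate for 2D maps
        self.ln2 = nn.GroupNorm(1, channels)
        self.mlp = nn.Sequential(              # position-wise MLP realized with 1x1 convolutions
            nn.Conv2d(channels, channels * mlp_ratio, 1),
            nn.GELU(),
            nn.Conv2d(channels * mlp_ratio, channels, 1),
        )

    def forward(self, f: torch.Tensor) -> torch.Tensor:
        d = self.ln1(self.dcn(f))              # LN(DCN(f))
        return self.ln2(self.mlp(d + f) + d)   # LN(MLP(LN(DCN(f)) + f) + LN(DCN(f)))


# Iterate(G, Stage(i), L_i): a stage applies its L_i base blocks in sequence.
stage = nn.Sequential(*[BaseBlock(channels=64) for _ in range(4)])  # L_i = 4 is illustrative
features = stage(torch.randn(1, 64, 80, 160))
```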
The DCNv3, acting as a dynamic sparse convolution operator within this architecture, processes the input feature map $x$ of dimension $C \times H \times W$. Each pixel $p_i$ in the output $\mathrm{DCN}(p_i)$ is determined by the weighted sum of multiple sampled points. These sampled points incorporate both position offsets $\Delta p_k$ and modulation factors $m_k$. Multiple aggregation groups are introduced, each handling a subset of the input feature map and possessing its own sampling offsets and modulation factors. The position offsets allow the convolution kernel to flexibly adjust its sampling locations to fit specific areas of the input features better, while the modulation factors are normalized across all sampled points via a softmax function, regulating the contribution of each sample point. The output for each pixel $p_i$ in the DCNv3 is defined as:
$$\mathrm{DCN}(p_i) = \sum_{g=1}^{G}\sum_{k=1}^{K} w_g\, m_{gk}\, x_g\big(p_i + p_k + \Delta p_{gk}\big)$$
where $G$ represents the number of aggregation groups. $K$ is the number of sampled points per group. $w_g$ denotes the position-independent projection weights for group $g$. $m_{gk}$ is the modulation factor for the $k$-th sampled point in group $g$, normalized via a softmax function. $x_g$ is a slice of the input feature map corresponding to group $g$. $\Delta p_{gk}$ represents the offset for the $k$-th sampled point in group $g$.
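As a rough illustration of this sampling scheme, the sketch below predicts per-group offsets $\Delta p_{gk}$ and softmax-normalized modulation factors $m_{gk}$ from the input and applies them with torchvision's DCNv2-style deformable convolution, which serves only as a stand-in for DCNv3; the offset/modulation heads, group count, and kernel size are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.ops import deform_conv2d

class DeformableSampling(nn.Module):
    """Illustrative deformable sampling: offsets and modulation factors are learned
    per aggregation group and per sampled point, with the modulation factors
    normalized over the K points via softmax, as in the DCN equation above."""

    def __init__(self, channels: int = 64, kernel_size: int = 3, offset_groups: int = 4):
        super().__init__()
        self.k, self.g = kernel_size, offset_groups
        pts = kernel_size * kernel_size                      # K sampled points per group
        self.weight = nn.Parameter(                          # position-independent projection weights
            torch.randn(channels, channels, kernel_size, kernel_size) * 0.01)
        self.offset_head = nn.Conv2d(channels, 2 * self.g * pts, 3, padding=1)  # Δp_gk (x and y)
        self.mod_head = nn.Conv2d(channels, self.g * pts, 3, padding=1)         # m_gk logits

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, _, h, w = x.shape
        offsets = self.offset_head(x)
        logits = self.mod_head(x).view(n, self.g, self.k * self.k, h, w)
        mask = F.softmax(logits, dim=2).view(n, -1, h, w)    # normalize m_gk over the K points
        return deform_conv2d(x, offsets, self.weight, padding=self.k // 2, mask=mask)
```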
The upper part of Figure 2 showcases the parallel State Space Pathway, which handles spatial features of the image by iteratively updating the state to adapt to and recognize specific dental areas effectively within the dental X-ray images. This application of SSM, typically designed for capturing and modeling dynamic changes in time-series data, is innovatively applied in the spatial dimension for dental image segmentation. It focuses on global features and multi-scale modeling to pre-model dental structural dynamics, thereby improving the model’s focus on critical locations and ameliorating the challenges posed by the compression of three-dimensional structures into a two-dimensional representation typical of X-ray imaging. Within this encoder pathway, layers sequentially process the input through subsampling layers, GSC modules, Mamba layers, and MLP layers, ultimately outputting a processed feature sequence for use by the subsequent decoder. This pathway begins with a combination of a stem layer and multiple TSMamba blocks, aiming for efficient modeling of both multi-scale and global features. In the stem layer, deep convolution with a kernel size of 7 × 7, padding of 3, and stride of 2 extracts the initial scale feature from the input volume $I \in \mathbb{R}^{C \times H \times W}$ to $z_0 \in \mathbb{R}^{48 \times \frac{H}{2} \times \frac{W}{2}}$. Subsequently, $z_0$ is passed through each TSMamba block and its corresponding subsampling layer. As illustrated in Figure 4, the computational process for each TSMamba block can be defined as:
$$f_m^{(l+1)} = \mathrm{MLP}\Big(\mathrm{LN}\big(\mathrm{ToM}(\mathrm{LN}(\mathrm{GSC}(f_m^{(l)}))) + \mathrm{GSC}(f_m^{(l)})\big) + \mathrm{ToM}\big(\mathrm{LN}(\mathrm{GSC}(f_m^{(l)}) + \mathrm{GSC}(f_m^{(l)}))\big)\Big)$$
where GSC represents the Gate-Space Convolution module, ToM denotes the Tri-directional Mamba module, LN is Layer Normalization, and MLP stands for Multi-Layer Perceptron, utilized for enhanced feature representation.
The modified GSC module initially processes 2D features through two convolution blocks, one with a kernel size of 3 × 3 and the other with 1 × 1. Then, these two features undergo element-wise multiplication controlled by a gating mechanism. Subsequently, another convolution block further integrates the features while employing a residual connection to reuse the input feature. Its mathematical expression is:
$$\mathrm{GSC}(f) = f + C_{3\times3}\big(C_{3\times3}(f) \cdot C_{1\times1}(f)\big)$$
where $C_{3\times3}$ and $C_{1\times1}$ denote the convolution operations with the corresponding kernel sizes. After processing by the GSC module, features are modeled for global information through the ToM module.
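A minimal sketch of this gated convolution is given below, assuming the normalization and activation placed inside each convolution block (the equation itself only fixes the kernel sizes, the element-wise gating, and the residual connection):

```python
import torch
import torch.nn as nn

class GSC(nn.Module):
    """Sketch of GSC(f) = f + C3x3(C3x3(f) · C1x1(f))."""

    def __init__(self, channels: int):
        super().__init__()
        def conv_block(kernel: int) -> nn.Sequential:
            return nn.Sequential(
                nn.Conv2d(channels, channels, kernel, padding=kernel // 2),
                nn.InstanceNorm2d(channels),     # normalization choice is an assumption
                nn.ReLU(inplace=True),
            )
        self.branch3 = conv_block(3)   # C3x3(f)
        self.branch1 = conv_block(1)   # C1x1(f), acting as the gate
        self.fuse = conv_block(3)      # outer C3x3 that integrates the gated features

    def forward(self, f: torch.Tensor) -> torch.Tensor:
        gated = self.branch3(f) * self.branch1(f)   # element-wise gating
        return f + self.fuse(gated)                 # residual connection reuses the input
```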
The ToM module within the TSMamba block effectively models high-dimensional global features by flattening 2D input features into three sequences for respective feature interaction, processed through Mamba layers, and then adding the processed sequences to form an integrated output feature. Its formula is:
$$\mathrm{ToM}(f_F, f_R, f_S) = \delta(f_F) + \delta(f_R) + \delta(f_S)$$
where $f_F$, $f_R$, and $f_S$ represent the forward, reverse, and slice-direction sequences, and $\delta$ denotes the Mamba layer.
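The flatten-scan-sum structure of ToM can be sketched as follows; a shared nn.GRU is used here purely as a placeholder for the Mamba layer $\delta$, and the choice of a transposed (column-wise) scan as the "slice-direction" sequence for 2D inputs is an assumption.

```python
import torch
import torch.nn as nn

class ToM(nn.Module):
    """Sketch of ToM(f_F, f_R, f_S) = δ(f_F) + δ(f_R) + δ(f_S) for 2D feature maps."""

    def __init__(self, channels: int):
        super().__init__()
        # Placeholder for the Mamba layer; swap in an SSM block where available.
        self.delta = nn.GRU(channels, channels, batch_first=True)

    def _scan(self, seq: torch.Tensor) -> torch.Tensor:
        out, _ = self.delta(seq)
        return out

    def forward(self, f: torch.Tensor) -> torch.Tensor:
        n, c, h, w = f.shape
        fwd = f.flatten(2).transpose(1, 2)                        # forward sequence (N, H*W, C)
        out_f = self._scan(fwd).transpose(1, 2).view(n, c, h, w)
        rev = torch.flip(fwd, dims=[1])                           # reverse sequence
        out_r = torch.flip(self._scan(rev), dims=[1]).transpose(1, 2).view(n, c, h, w)
        sli = f.transpose(2, 3).flatten(2).transpose(1, 2)        # slice-direction (column-wise) scan
        out_s = self._scan(sli).transpose(1, 2).view(n, c, w, h).transpose(2, 3)
        return out_f + out_r + out_s                              # sum of the three scans
```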

3.2. Cognitively Optimized Semantic Enhance Module (SEM)

In the present study, we propose a novel network module, the Cognitively Optimized Semantic Enhance Module, engineered to augment feature representation efficacy by synthesizing and enhancing outputs derived from two distinct pathways within the encoder, as illustrated in Figure 5. This module mitigates the complexities of amalgamating high-dimensional feature outputs from disparate sources. Concurrently, it ensures the refinement of features on the initial feature maps by implementing advanced smoothing techniques.
Initially, SEM fuses the high-dimensional outputs of the two encoder pathways by channel-wise concatenation, followed by a comprehensive feature processing initialized by a Stem block. This block incorporates a large convolutional kernel (7 × 7) and subsequent batch normalization and ReLU activation layers, setting the stage for advanced feature manipulation.
The concatenated feature output, represented as $F_{\mathrm{in}}$, is described by the equation:
$$F_{\mathrm{in}} = \mathrm{CBR}\big(\mathrm{Cat}(X, Y)\big)$$
where CBR denotes the sequence of Convolution (Conv), Batch Normalization (BN), and ReLU activation. $X$ and $Y$ represent the high-dimensional feature outputs from the preceding encoder stages. $\mathrm{Cat}$ represents the channel-wise concatenation of the feature maps.
To further enhance the integrated features, SEM employs a Multi-Layer Perceptron (MLP) and a Learnable Vision Center mechanism (LVC) [18], which process the features before another concatenation phase to enrich global and local information processing. The MLP module focuses on capturing global dependencies across the entire image, while the LVC encodes local features, preserving and enhancing local details. The combination of these processes can be formulated as:
$$\mathrm{SEM}(X, Y) = \mathrm{Cat}\big(\mathrm{MLP}(F_{\mathrm{in}}),\ \mathrm{LVC}(F_{\mathrm{in}})\big)$$
where $\mathrm{SEM}(X, Y)$ denotes the final output from the SEM. The terms $\mathrm{MLP}(F_{\mathrm{in}})$ and $\mathrm{LVC}(F_{\mathrm{in}})$ refer to the outputs from the MLP and vision center mechanism, respectively.
The Local Visual Codeword approach achieves cognitive optimization by preserving and enhancing local features. This is done by encoding them using a built-in visual dictionary, where each codeword represents a specific visual concept or pattern. The encoded features F are processed by combining the inherent codebook and scaling factors. The specific enhancement process of the LVC can be expressed with the following equation:
$$\mathrm{LVC}(F_{\mathrm{in}}) = F_{\mathrm{in}} + F_{\mathrm{in}} \cdot \sigma\!\left(\mathrm{Conv}_{1\times1}\!\left(\sum_{k=1}^{K}\sum_{i=1}^{N} \frac{e^{-\alpha_k (x_i - \mu_k)^2}}{\sum_{j=1}^{K} e^{-\alpha_j (x_i - \mu_j)^2}}\right)\right)$$
where $F_{\mathrm{in}}$ represents the input features. $\sigma$ is the Sigmoid activation function. $x_i$ is the $i$-th pixel point in the image, representing the feature vector at that location. $\mu_k$ is the $k$-th learnable visual codeword. $\alpha_k$ is the scaling factor for the $k$-th codeword. The term $(x_i - \mu_k)^2$ denotes the squared Euclidean distance between the $i$-th pixel point and the $k$-th visual codeword. $K$ denotes the total number of visual centers or codewords involved in the process. $\mathrm{Conv}_{1\times1}$ is a convolutional layer utilizing a $1\times1$ kernel to integrate the feature responses.
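Putting the pieces of this section together, a simplified SEM forward pass might look like the sketch below. The Conv-BN-ReLU stem, the position-wise MLP, and the final concatenation follow the equations above, while the local branch is only a convolutional stand-in for the learnable visual codeword mechanism; all channel sizes are assumptions.

```python
import torch
import torch.nn as nn

class SEM(nn.Module):
    """Sketch of SEM(X, Y) = Cat(MLP(F_in), LVC(F_in)) with F_in = CBR(Cat(X, Y))."""

    def __init__(self, in_channels: int, channels: int):
        super().__init__()
        self.cbr = nn.Sequential(                        # stem: 7x7 Conv + BN + ReLU
            nn.Conv2d(2 * in_channels, channels, 7, padding=3),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        self.mlp = nn.Sequential(                        # global branch (position-wise MLP)
            nn.Conv2d(channels, channels, 1),
            nn.GELU(),
            nn.Conv2d(channels, channels, 1),
        )
        self.local_gate = nn.Sequential(                 # stand-in for the LVC local encoding
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        f_in = self.cbr(torch.cat([x, y], dim=1))        # F_in = CBR(Cat(X, Y))
        local = f_in + f_in * self.local_gate(f_in)      # residual, sigmoid-gated local enhancement
        return torch.cat([self.mlp(f_in), local], dim=1)
```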

3.3. Hierarchical Convergence Decoder (HCD)

Traditional segmentation models often falter when segmenting 2D dental X-ray images due to the tooth’s complex structures and similarity in density and texture to adjacent tissues. These challenges hinder the effective integration of multi-scale information, which is crucial for accurate segmentation. To address this, we introduce a novel neural network module termed the Hierarchical Convergence Decoder (HCD), specifically designed to optimize hierarchical feature fusion, as illustrated in Figure 6.
The HCD module combines multi-dimensional hierarchical features extracted from two pathways of the CSDE and the high-dimensional features enhanced by the SEM. This strategy can preserve various types of feature information while balancing local and global information. The feature fusion strategy within HCD employs a multi-layered decoder architecture, executing through four specially designed resolution levels. Each level integrates convolutional and deconvolutional layers to refine features progressively and to upsample them, equipped with the Triplet Attentional Feature Integration (TAFI) module designed specifically for feature fusion. This module enables the dynamic integration of multi-level features from the encoder and the current processing stage at different decoding phases.
As depicted in Figure 7, TAFI initially performs a dynamic fusion of skip connection features from two pathways of the encoder. This step employs an attention mechanism that emphasizes important information within the input features and suppresses less relevant parts, generating a highly representative intermediate feature. Subsequently, this intermediate feature is concatenated with pre-processed features to retain unique information from each feature type. The process is mathematically described as follows:
$$\mathrm{output} = \mathrm{Cat}\Big(X_n \cdot \sigma\big(\mathrm{LoAtt}(X_n + Y_n) + \mathrm{GlAtt}(X_n + Y_n)\big) + Y_n \cdot \big(1 - \sigma(\mathrm{LoAtt}(X_n + Y_n) + \mathrm{GlAtt}(X_n + Y_n))\big),\ f\Big)$$
where $\sigma$ is the Sigmoid activation function, $X_n$ and $Y_n$ represent the features from the $n$-th layer of the encoder output, $f$ represents the features from the upper tier of the HCD, and LoAtt and GlAtt are the local and global attention modules, respectively, with their operations defined as:
$$\mathrm{LoAtt}(x) = \mathrm{BN}\big(\mathrm{Conv}(\mathrm{ReLU}(\mathrm{BN}(\mathrm{Conv}(x))))\big)$$
$$\mathrm{GlAtt}(x) = \mathrm{BN}\big(\mathrm{Conv}(\mathrm{ReLU}(\mathrm{BN}(\mathrm{Conv}(\mathrm{AdaptiveAvgPool}(x)))))\big)$$
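The gating and concatenation described by these equations can be sketched as follows; the 1 × 1 convolutions and the channel reduction ratio are assumptions, since the equations fix only the LoAtt/GlAtt compositions and the sigmoid blend.

```python
import torch
import torch.nn as nn

class TAFI(nn.Module):
    """Sketch of the TAFI fusion: gate = σ(LoAtt(X_n + Y_n) + GlAtt(X_n + Y_n)),
    output = Cat(X_n · gate + Y_n · (1 - gate), f)."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        mid = channels // reduction

        def conv_bn(cin: int, cout: int) -> nn.Sequential:
            return nn.Sequential(nn.Conv2d(cin, cout, 1), nn.BatchNorm2d(cout))

        # LoAtt(x) = BN(Conv(ReLU(BN(Conv(x)))))
        self.local_att = nn.Sequential(conv_bn(channels, mid), nn.ReLU(inplace=True),
                                       conv_bn(mid, channels))
        # GlAtt(x) = BN(Conv(ReLU(BN(Conv(AdaptiveAvgPool(x))))))
        self.global_att = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                        conv_bn(channels, mid), nn.ReLU(inplace=True),
                                        conv_bn(mid, channels))

    def forward(self, x_n: torch.Tensor, y_n: torch.Tensor, f: torch.Tensor) -> torch.Tensor:
        s = x_n + y_n
        gate = torch.sigmoid(self.local_att(s) + self.global_att(s))  # global map broadcasts over H, W
        fused = x_n * gate + y_n * (1.0 - gate)                        # attentional blend of the two pathways
        return torch.cat([fused, f], dim=1)                            # concatenate with decoder feature f
```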
Through innovative structural design and hierarchical feature fusion strategies, HCD dynamically adjusts feature fusion strategies across different decoding stages using the TAFI module, effectively utilizing multi-scale information. Hierarchical feature fusion ensures detailed feature processing from coarse to fine, gradually refining global information and capturing details, making the model particularly suited for handling complex, multi-scale information.

4. Experiments and Results

4.1. Dataset

Our dataset for this study originates from the 2023 MICCAI Teeth Segmentation Challenge (https://tianchi.aliyun.com/competition/entrance/532086/information, accessed on 1 May 2024), which was collected by Zhang Y et al. [21]. It comprises two-dimensional panoramic dental X-ray images, each with a resolution of 320 × 640 pixels. These X-ray images are three-channel images, and the labels are binary maps. All images are stored in PNG format. We utilized 2000 labelled images from the preliminary round as our training set. The labelled images in the training set were divided into a training subset and a validation subset in a 9:1 ratio, supporting the model’s training and validation processes. Moreover, 900 labelled images from the semifinals were employed as the test set to evaluate the model’s generalization capabilities and performance. This data partitioning aims to ensure the full utilization of data and provide a balanced and representative environment to support model development and evaluation.
In addition to the MICCAI dataset, we further evaluated the generalizability of our model using the Tufts Dental Database. The Tufts Dental Database [22], a new X-ray panoramic radiography image dataset, consists of 1000 panoramic dental radiography images with expert labelling of abnormalities and teeth. The classification of radiography images was performed based on five different levels: anatomical location, peripheral characteristics, radiodensity, effects on the surrounding structure, and the abnormality category. This dataset provided an independent test set for our study, allowing us to assess the model’s performance on a diverse and clinically relevant set of images.

4.2. Evaluation Metrics

In the realm of image segmentation, a multifaceted evaluation strategy is crucial to comprehensively assess the performance of segmentation models. We used a variety of metrics: Dice Similarity Coefficient (DSC), 95% Hausdorff Distance (HD95), Intersection over Union (IoU), Accuracy, Kappa Coefficient, and Matthews Correlation Coefficient (MCC) [23]. Each metric offers distinct insights into the efficacy of the segmentation model under evaluation. By utilizing these metrics in conjunction, it is possible to evaluate the performance of segmentation models from multiple perspectives thoroughly. This approach not only considers the precision in the core and boundary areas of the model but also encompasses the overall predictive accuracy.
The Dice Similarity Coefficient is pivotal for gauging the model’s precision in identifying and delineating the target regions. It calculates the ratio of twice the area of overlap between the predicted and actual segments to the total number of pixels in both the predicted and actual segments:
$$\mathrm{DSC} = \frac{1}{N}\sum_{i=1}^{N} \frac{2 \times |X_{\mathrm{pred},i} \cap Y_{\mathrm{true},i}|}{|X_{\mathrm{pred},i}| + |Y_{\mathrm{true},i}|} \times 100\%$$
where $N$ is the total number of categories, $X_{\mathrm{pred},i}$ is the region of the $i$-th predicted category, and $Y_{\mathrm{true},i}$ is the region of the $i$-th true category.
The 95% Hausdorff Distance measures the maximum distance of the 95th percentile of the closest points between the model’s predicted boundaries and the actual boundaries, providing insight into the extremities of prediction error:
$$\mathrm{HD95} = \max\left\{\max_{x \in X}\min_{y \in Y} d(x, y),\ \max_{y \in Y}\min_{x \in X} d(x, y)\right\},$$
where $d(x, y)$ is the Euclidean distance between points $x$ and $y$.
Intersection over Union evaluates the overall accuracy of the segmentation model by measuring the overlap between the predicted and actual segments:
$$\mathrm{IoU} = \frac{1}{N}\sum_{i=1}^{N} \frac{|X_{\mathrm{pred},i} \cap Y_{\mathrm{true},i}|}{|X_{\mathrm{pred},i} \cup Y_{\mathrm{true},i}|} \times 100\%$$
Accuracy offers a broad measure of the model’s performance across the entire image dataset, calculated by the ratio of correctly predicted pixels to the total pixels:
$$\mathrm{Accuracy} = \frac{\mathrm{TP} + \mathrm{TN}}{\mathrm{TP} + \mathrm{TN} + \mathrm{FP} + \mathrm{FN}}.$$
The Kappa Coefficient assesses the degree of accuracy in prediction, accounting for the chance agreement. This metric is especially useful in imbalanced datasets:
$$\kappa = \frac{p_o - p_e}{1 - p_e},$$
where $p_o$ is the observed agreement, and $p_e$ is the expected agreement by chance.
Matthews Correlation Coefficient is a balanced measure that considers all four quadrants of the confusion matrix, ideal for evaluating models with imbalanced data classes:
$$\mathrm{MCC} = \frac{\mathrm{TP} \times \mathrm{TN} - \mathrm{FP} \times \mathrm{FN}}{\sqrt{(\mathrm{TP} + \mathrm{FP})(\mathrm{TP} + \mathrm{FN})(\mathrm{TN} + \mathrm{FP})(\mathrm{TN} + \mathrm{FN})}}.$$
By integrating these metrics, the evaluation framework not only highlights the segmentation model’s precision and accuracy in core and boundary areas but also ensures robust validation across various challenging scenarios. This comprehensive metric ensemble facilitates a deeper understanding of the model’s strengths and potential areas for improvement.
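For reference, the overlap- and agreement-based metrics above can all be derived from a single confusion matrix, as in the NumPy sketch below for binary tooth/background masks; HD95 is omitted because it additionally requires a boundary distance transform (e.g., from SciPy or MedPy), and the helper name is ours.

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, truth: np.ndarray) -> dict:
    """Compute DSC, IoU, Accuracy, Kappa, and MCC for binary masks (values in {0, 1})."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    tn = np.logical_and(~pred, ~truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    n = tp + tn + fp + fn

    dice = 2 * tp / max(2 * tp + fp + fn, 1)
    iou = tp / max(tp + fp + fn, 1)
    accuracy = (tp + tn) / n

    # Cohen's kappa: observed agreement corrected for chance agreement.
    p_o = accuracy
    p_e = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / (n * n)
    kappa = (p_o - p_e) / (1 - p_e) if p_e < 1 else 0.0

    # Matthews correlation coefficient (computed in float to avoid integer overflow).
    denom = np.sqrt(float(tp + fp) * float(tp + fn) * float(tn + fp) * float(tn + fn))
    mcc = (float(tp) * tn - float(fp) * fn) / denom if denom > 0 else 0.0

    return {"DSC": dice, "IoU": iou, "Accuracy": accuracy, "Kappa": kappa, "MCC": mcc}
```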

4.3. Implementation Details

Our framework is implemented based on PyTorch 2.0 and CUDA 11.7 and trained using an NVIDIA GeForce RTX 4090 GPU with 24 GB memory. We employ the Adam optimizer with an initial learning rate of $1 \times 10^{-5}$ and train for 50 epochs. We utilize the ReduceLROnPlateau learning rate scheduling strategy to accelerate convergence and enhance model generalization. This strategy reduces the learning rate when the validation set loss does not decrease for several consecutive epochs, helping the model escape from local optima. Training is conducted with a batch size of 2. Data augmentation operations are applied to images and labels, including brightness adjustment, gamma transformation, Gaussian noise, scaling, mirror flipping, and spatial transformations [24]. These augmentations randomly alter the image attributes, thus improving the model’s generalization and robustness.
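A minimal sketch of this training configuration follows; the scheduler's patience and decay factor are assumptions, since their exact values are not stated.

```python
import torch

def build_optimization(model: torch.nn.Module):
    """Adam with an initial learning rate of 1e-5 plus ReduceLROnPlateau on the validation loss."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
        optimizer, mode="min", factor=0.5, patience=5)   # call scheduler.step(val_loss) each epoch
    return optimizer, scheduler
```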

4.4. Loss Function Formulation

In a bid to optimize segmentation accuracy, our model utilized a composite loss function, combining Dice Loss and Cross-Entropy Loss. This hybrid loss function exploits the benefits of both loss types, enhancing training efficacy [25]. Specifically, Dice Loss is formulated as follows:
$$\text{Dice Loss} = 1 - \frac{2 \times |A \cap B|}{|A| + |B|}$$
where $A$ and $B$ represent the predicted and ground truth segmentation maps, respectively. Dice Loss prioritizes the segmentation’s overlap accuracy, which is crucial in handling class imbalances. On the other hand, Cross-Entropy Loss focuses on pixel-wise classification accuracy, calculated by:
$$\text{CE Loss} = -\frac{1}{N}\sum_{i=1}^{N} y_i \log(\hat{y}_i)$$
where $N$ is the total number of pixels, $y_i$ is the actual label, and $\hat{y}_i$ is the predicted probability. Integrating these two loss functions ensures a balanced focus on detailed pixel-level accuracy and overall segmentation quality, which is critical for the nuanced segmentation tasks required in dental X-ray imaging.
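A possible PyTorch realization of this composite objective is sketched below for the binary tooth/background case; the equal weighting of the two terms and the smoothing constant are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiceCELoss(nn.Module):
    """Composite loss: soft Dice loss plus pixel-wise cross-entropy."""

    def __init__(self, smooth: float = 1e-5):
        super().__init__()
        self.smooth = smooth

    def forward(self, logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # logits: (N, 2, H, W); target: (N, H, W) with class indices {0, 1}.
        ce = F.cross_entropy(logits, target.long())            # pixel-wise classification term
        prob = torch.softmax(logits, dim=1)[:, 1]              # foreground (tooth) probability
        tgt = (target == 1).float()
        inter = (prob * tgt).sum(dim=(1, 2))
        dice = (2 * inter + self.smooth) / (prob.sum(dim=(1, 2)) + tgt.sum(dim=(1, 2)) + self.smooth)
        return ce + (1 - dice).mean()                          # Dice Loss + CE Loss
```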

4.5. Comparison with State-of-the-Art Methods

Our network architecture, Deformable Convolution and Mamba Integration Network, integrates three innovative modules: Coalescent Structural Deformable Encoder, Cognitively-Optimized Semantic Enhance Module, and Hierarchical Convergence Decoder, specifically designed for the segmentation of dental X-ray images. To demonstrate our proposed method’s efficacy, we benchmarked it against 14 fully supervised mainstream medical segmentation methods, including OCNet, ICNet, and PSPNet, on the 2023 MICCAI Teeth Segmentation dataset.
The comparative analysis demonstrates that our model surpasses OCNet, PSPNet, and UNet and their variants across multiple key performance indicators. As shown in Table 1, Table 2 and Figure 8, under uniform experimental conditions, our model achieved Dice and IoU scores of 93.38% and 87.81%, respectively, outperforming PSPNet’s scores of 91.97% and 85.27%. These results underscore our model’s heightened sensitivity in identifying dental regions, maintaining high segmentation accuracy even in images with high noise or low contrast. Additionally, our model excelled in accuracy and Kappa index, metrics that reflect correct pixel classification and consistency of classification performance, with scores of 97.45% and 91.78%, respectively, surpassing PSPNet’s scores of 96.86% and 89.99%. Regarding boundary precision, our model registered a 95% Hausdorff distance of 7.494, significantly lower than ICNet’s 8.700 and OCNet’s 9.838. This metric highlights our model’s superior edge localization, which is crucial for maintaining stable and accurate segmentation of dental and surrounding tissues in blurred boundary conditions, especially within complex overlapping areas.
The performance of our network model is attributed to its three structural components. The CSDE module leverages the synergy of DCN and SSM, effectively adapting to local structural deformations and spatial features in dental images, thereby enhancing the model’s adaptability and precision over OCNet and PSPNet when dealing with overlapping and fine structures. The SEM facilitates efficient integration and enhancement of features from different encoders, providing superior detail resolution capabilities compared to PSPNet, which is crucial for accurately distinguishing dental from non-dental areas and aiding dental practitioners in interpreting X-ray images. The HCD module, through its multi-level feature fusion and detailed decoding process, further enhances the model’s resolution and detail expression. This meticulous hierarchical decoding strategy is particularly suited for high-resolution and structurally complex dental X-ray image segmentation. These results affirm our method’s capability in addressing the challenges of 2D dental X-ray image segmentation tasks. The robustness and adaptability of our model were also demonstrated through its ability to accurately segment normal dental structures and anomalies (e.g., missing or malformed teeth), as visually illustrated in Figure 9. This underlines the potential applicability and usefulness of our model in clinical settings.

5. Discussion

5.1. Ablation Experiment

To evaluate the effectiveness of our proposed deep learning model in segmenting dental X-ray images, we conducted a series of ablation experiments, systematically removing key components of the model. The results, presented in Table 3 and Table 4 and Figure 10, not only confirm the importance of each model component but also reveal their individual contributions to the overall performance, particularly in addressing challenges, including compromised image fidelity, obscured delineation of structural boundaries, and the intricate anatomical structures of dental constituents such as pulp, enamel, and dentin.
The proposed architecture incorporates a Coalescent Structural Deformable Encoder, Semantic Enhance Module, and Hierarchical Convergence Decoder with the Triplet Attentional Feature Integration module. In the ablation studies, we evaluated the impact of these modules by individually removing each and observing the resultant performance variations.
The CSDE module integrates two specialized encoders: the Adaptive Deformable Pathway, based on DCN, and the State Space Pathway, based on SSM. This dual-pathway structure leverages the benefits of both approaches. DCN enables the encoder to adaptively modify the shape and position of the convolution kernels in response to local structural deformations in dental X-ray images, thereby enhancing the segmentation accuracy of complex dental structures such as tooth roots and overlapping areas. The State Space Pathway employs state space models to recognize specific dental features effectively and handle nonlinear characteristics and spatial variations within the image properties. Removal of the Adaptive Deformable Pathway from CSDE significantly degraded the model’s performance on detail, such as tooth edges and minor structures, reflected by an increased 95% Hausdorff distance. Additionally, when the State Space Pathway was removed, there was a slight decline in the model’s ability to process global information and maintain image quality, evidenced by minor drops in IOU and Dice coefficients.
The SEM module is designed to enhance and balance feature representation through feature fusion and enhancement. It utilizes an EVC mechanism to optimize the concatenation and enhancement of feature representations from the two output paths of the encoder, facilitating the effective integration of global and local features. The adoption of SEM resulted in improvements in several performance metrics, particularly noticeable in the 95% Hausdorff and IoU scores. The SEM’s use of multilayer perceptrons and a learnable visual center allows it to focus on the overall dental region while enhancing local details such as corner areas, which are critical in handling the complex edges where tooth roots overlap with adjacent tissues.
The HCD incorporates a multi-level feature fusion strategy and the TAFI module, which dynamically integrates multidimensional features from different layers (CSDE and SEM). The TAFI module enhances the representation power of the intermediate features by emphasizing important information and suppressing less relevant details. This coordinated integration of features from the dual-pathway encoders allows the model to handle complex multi-scale information more effectively. Comparative experiments show that our model, including the TAFI module, outperforms the baseline model without TAFI on all three key metrics: Dice Coefficient, IoU, and 95% Hausdorff distance. The Dice Coefficient and IoU improvements can be attributed to the innovative TAFI module’s ability to dynamically integrate multidimensional features, effectively balancing local and global information. Furthermore, a reduction in the 95% Hausdorff distance indicates a decrease in the maximum error in predicting tooth boundary positions, particularly beneficial in scenarios involving blurred boundaries of tooth roots.
These ablation experiments suggest the potential efficacy of each module within our model, indicating their roles in potentially improving the model’s segmentation accuracy and robustness when handling dental X-ray images.

5.2. Clinical Application

In dentistry, integrating deep learning techniques for segmenting 2D dental X-ray images holds promise for enhancing diagnostic accuracy and efficiency in treatment planning. These advanced technologies facilitate the segmentation of dental images, a crucial step in accurately diagnosing conditions such as caries and periodontal disease, and in devising personalized treatment plans. Moreover, by automating the segmentation process, these methods could significantly reduce the manual labor required, potentially improving workflow efficiency and increasing patient throughput in dental practices. Preliminary evidence supporting the efficacy of these technologies includes studies like those by Sheng et al. [44], which focus on optimizing segmentation in panoramic radiographs, and developments such as the STSN-Net architecture that proficiently segments and enumerates teeth, thereby offering potential improvements in clinical operations.

5.3. Clinical Implementation Challenges and Potential Limitations

The realization of high-precision tooth segmentation technology carries substantial significance in clinical settings, particularly in early diagnosis and treatment planning. However, translating this technology into a practical clinical tool entails several challenges. Although our DeMambaNet model excels in managing ambiguous boundaries in dental X-ray images, the complexity of real-world clinical environments may pose limitations. A key challenge is the dependency on high-quality data; discrepancies in data quality can adversely affect model performance, making consistent and high-standard image acquisition crucial for optimal clinical outcomes. When deciding how to implement this technology, medical professionals must consider various factors, from image capture to processing, including variations in imaging equipment and operational techniques.
Moreover, despite the superiority of our approach in handling images with high noise or low contrast boundaries compared to current clinical practices, the practical implementation of new technologies necessitates considerations of cost-effectiveness and personnel training. Additionally, the acceptance of such technologies is critical and requires proper introduction and demonstration within medical teams to ensure seamless integration into everyday workflows.
Furthermore, although our method demonstrates theoretical and experimental advantages, extensive validation in clinical settings is required before widespread adoption. This includes multicenter clinical trials and long-term effectiveness assessments to ensure the technology’s reliability and stability.

6. Conclusions

In the segmentation of dental X-ray images, our study identifies several potential enhancements aimed at addressing a range of technical and medical challenges. These challenges include the diminished quality of images, the imprecise demarcation of structural boundaries, and the intricate anatomical features of dental components such as pulp, enamel, and dentin. Although these proposed improvements are designed to boost the performance of current methodologies, they may not completely overcome all the difficulties. Nonetheless, we believe that our initial investigations may pave the way for future research to further refine and adapt these techniques to more effectively tackle the issues outlined above.
Our novel 2D dental X-ray image segmentation network, Deformable Convolution and Mamba Integration Network, incorporates three groundbreaking modules: the Coalescent Structural Deformable Encoder, the cognitively-optimized Semantic Enhance Module, and the Hierarchical Convergence Decoder. The CSDE combines Deformable Convolution’s adaptive dynamic feature extraction capabilities with Mamba’s spatial feature handling prowess to dynamically simulate and capture the spatial structure characteristics of teeth. The SEM module integrates high-dimensional feature outputs through an efficient encoding strategy, enhancing and balancing feature representation of local details and global information. The HCD employs a layered feature fusion strategy to dynamically integrate and utilize multi-scale information, ensuring detailed processing from coarse to fine scales. Experimental comparisons with 14 baseline models demonstrated our model’s superior performance, achieving a DSC improvement of 0.95% over the best baseline and an HD95 of 7.494, lower than the best baseline score of 7.622. These results illustrate that our approach has advanced the state of the art in handling ambiguous boundaries in 2D dental X-ray image segmentation, especially in images with highly curved and overlapping intersections or high-noise, low-contrast boundaries, potentially transforming diagnostic and treatment workflows in dentistry.
Looking ahead, we plan to refine and optimize our methodology by implementing more flexible skip connection strategies and adjusting the decoder structure to effectively handle the integration of extensive feature information. We also intend to train our model on various pathological states of teeth, such as caries, dental calculus, and pulp disease, which requires the model to accurately segment and identify multiple pathological states to cope with complex clinical scenarios. This ongoing development and adaptation will ensure that our segmentation approach remains at the forefront of dental imaging technology, offering significant clinical benefits and enhancing the precision of dental disease diagnosis and treatment.

Author Contributions

Conceptualization, B.Z. and X.H.; methodology, B.Z. and X.H.; validation, B.Z.; formal analysis, B.Z.; investigation, B.Z. and X.H.; resources, K.J.; data curation, Y.S.; writing—original draft preparation, B.Z.; writing—review and editing, B.Z. and Y.S.; visualization, B.Z. and Y.J.; supervision, Y.S.; project administration, K.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Ethical review and approval were waived for this study due to the reason that the Teeth Segmentation images are obtained from a publicly available dataset.

Informed Consent Statement

Patient consent was waived because the Teeth Segmentation images are obtained from a publicly available dataset.

Data Availability Statement

The original data presented in the study are openly available at https://tianchi.aliyun.com/competition/entrance/532086/information (accessed on 14 May 2024).

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
DeMambaNet: Deformable Convolution and Mamba Integration Network
CSDE: Coalescent Structural Deformable Encoder
SEM: Cognitively Optimized Semantic Enhancement Module
HCD: Hierarchical Convergence Decoder
TAFI: Triplet Attentional Feature Integration
SSP: State Space Pathway
ADP: Adaptive Deformable Pathway
DCN: Deformable Convolutional Networks
SSM: State Space Model
AFF: Attentional Feature Fusion
EVC: Efficient Vision Center
MLP: Multi-Layer Perceptron
LVC: Learnable Vision Center
DSC: Dice Similarity Coefficient
HD95: 95% Hausdorff Distance
IoU: Intersection over Union
MCC: Matthews Correlation Coefficient

References

  1. Seitz, M.W.; Listl, S.; Bartols, A.; Schubert, I.; Blaschke, K.; Haux, C.; van der Zande, M.M. Current knowledge on correlations between highly prevalent dental conditions and chronic diseases: An umbrella review [dataset]. Prev. Chronic Dis. 2019, 16, 180641. [Google Scholar] [CrossRef] [PubMed]
  2. Chen, Y.C.; Chen, M.Y.; Chen, T.Y.; Chan, M.L.; Huang, Y.Y.; Liu, Y.L.; Lee, P.T.; Lin, G.J.; Li, T.F.; Chen, C.A.; et al. Improving dental implant outcomes: CNN-based system accurately measures degree of peri-implantitis damage on periapical film. Bioengineering 2023, 10, 640. [Google Scholar] [CrossRef] [PubMed]
  3. Mao, Y.C.; Chen, T.Y.; Chou, H.S.; Lin, S.Y.; Liu, S.Y.; Chen, Y.A.; Liu, Y.L.; Chen, C.A.; Huang, Y.C.; Chen, S.L.; et al. Caries and restoration detection using bitewing film based on transfer learning with CNNs. Sensors 2021, 21, 4613. [Google Scholar] [CrossRef] [PubMed]
  4. Sivari, E.; Senirkentli, G.B.; Bostanci, E.; Guzel, M.S.; Acici, K.; Asuroglu, T. Deep learning in diagnosis of dental anomalies and diseases: A systematic review. Diagnostics 2023, 13, 2512. [Google Scholar] [CrossRef] [PubMed]
  5. Huang, X.; He, S.; Wang, J.; Yang, S.; Wang, Y.; Ye, X. Lesion detection with fine-grained image categorization for myopic traction maculopathy (MTM) using optical coherence tomography. Med. Phys. 2023, 50, 5398–5409. [Google Scholar] [CrossRef] [PubMed]
  6. Huang, X.; Huang, J.; Zhao, K.; Zhang, T.; Li, Z.; Yue, C.; Chen, W.; Wang, R.; Chen, X.; Zhang, Q.; et al. SASAN: Spectrum-Axial Spatial Approach Networks for Medical Image Segmentation. IEEE Trans. Med. Imaging 2024. [Google Scholar] [CrossRef] [PubMed]
  7. Huang, C.; Wang, J.; Wang, S.; Zhang, Y. A review of deep learning in dentistry. Neurocomputing 2023, 554, 126629. [Google Scholar] [CrossRef]
  8. Majanga, V.; Viriri, S. Dental Images’ Segmentation Using Threshold Connected Component Analysis. Comput. Intell. Neurosci. 2021, 2021, 2921508. [Google Scholar] [CrossRef]
  9. Muresan, M.P.; Barbura, A.R.; Nedevschi, S. Teeth detection and dental problem classification in panoramic X-ray images using deep learning and image processing techniques. In Proceedings of the 2020 IEEE 16th International Conference on Intelligent Computer Communication and Processing (ICCP), Cluj-Napoca, Romania, 3–5 September 2020; pp. 457–463. [Google Scholar]
  10. Li, C.W.; Lin, S.Y.; Chou, H.S.; Chen, T.Y.; Chen, Y.A.; Liu, S.Y.; Liu, Y.L.; Chen, C.A.; Huang, Y.C.; Chen, S.L.; et al. Detection of dental apical lesions using CNNs on periapical radiograph. Sensors 2021, 21, 7049. [Google Scholar] [CrossRef]
  11. Moran, M.; Faria, M.; Giraldi, G.; Bastos, L.; Oliveira, L.; Conci, A. Classification of approximal caries in bitewing radiographs using convolutional neural networks. Sensors 2021, 21, 5192. [Google Scholar] [CrossRef]
  12. Buhari, P.A.M.; Mohideen, K. Deep Learning Approach for Partitioning of Teeth in Panoramic Dental X-ray Images. Int. J. Emerg. Technol. 2020, 11, 154–160. [Google Scholar]
  13. Lin, J.; Huang, X.; Zhou, H.; Wang, Y.; Zhang, Q. Stimulus-guided adaptive transformer network for retinal blood vessel segmentation in fundus images. Med. Image Anal. 2023, 89, 102929. [Google Scholar] [CrossRef] [PubMed]
  14. Huang, X.; Yao, C.; Xu, F.; Chen, L.; Wang, H.; Chen, X.; Ye, J.; Wang, Y. MAC-ResNet: Knowledge distillation based lightweight multiscale-attention-crop-ResNet for eyelid tumors detection and classification. J. Pers. Med. 2022, 13, 89. [Google Scholar] [CrossRef]
  15. Alharbi, S.S.; AlRugaibah, A.A.; Alhasson, H.F.; Khan, R.U. Detection of Cavities from Dental Panoramic X-ray Images Using Nested U-Net Models. Appl. Sci. 2023, 13, 12771. [Google Scholar] [CrossRef]
  16. Xing, Z.; Ye, T.; Yang, Y.; Liu, G.; Zhu, L. SegMamba: Long-range Sequential Modeling Mamba For 3D Medical Image Segmentation. arXiv 2024, arXiv:2401.13560. [Google Scholar]
  17. Wang, W.; Dai, J.; Chen, Z.; Huang, Z.; Li, Z.; Zhu, X.; Hu, X.; Lu, T.; Lu, L.; Li, H.; et al. Internimage: Exploring large-scale vision foundation models with deformable convolutions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 14408–14419. [Google Scholar]
  18. Quan, Y.; Zhang, D.; Zhang, L.; Tang, J. Centralized Feature Pyramid for Object Detection. IEEE Trans. Image Process. 2023, 32, 4341–4354. [Google Scholar] [CrossRef] [PubMed]
  19. Dai, Y.; Gieseke, F.; Oehmcke, S.; Wu, Y.; Barnard, K. Attentional feature fusion. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Virtual, 5–9 January 2021; pp. 3560–3569. [Google Scholar]
  20. Larsson, G.; Maire, M.; Shakhnarovich, G. Fractalnet: Ultra-deep neural networks without residuals. arXiv 2016, arXiv:1605.07648. [Google Scholar]
  21. Zhang, Y.; Ye, F.; Chen, L.; Xu, F.; Chen, X.; Wu, H.; Cao, M.; Li, Y.; Wang, Y.; Huang, X. Children’s dental panoramic radiographs dataset for caries segmentation and dental disease detection. Sci. Data 2023, 10, 380. [Google Scholar] [CrossRef] [PubMed]
  22. Panetta, K.; Rajendran, R.; Ramesh, A.; Rao, S.P.; Agaian, S. Tufts dental database: A multimodal panoramic X-ray dataset for benchmarking diagnostic systems. IEEE J. Biomed. Health Inform. 2021, 26, 1650–1659. [Google Scholar] [CrossRef]
  23. Huang, X.; Bajaj, R.; Li, Y.; Ye, X.; Lin, J.; Pugliese, F.; Ramasamy, A.; Gu, Y.; Wang, Y.; Torii, R.; et al. POST-IVUS: A perceptual organisation-aware selective transformer framework for intravascular ultrasound segmentation. Med. Image Anal. 2023, 89, 102922. [Google Scholar] [CrossRef]
  24. Sun, Y.; Huang, X.; Zhou, H.; Zhang, Q. SRPN: Similarity-based region proposal networks for nuclei and cells detection in histology images. Med. Image Anal. 2021, 72, 102142. [Google Scholar] [CrossRef] [PubMed]
  25. Huang, X.; Li, Z.; Lou, L.; Dan, R.; Chen, L.; Zeng, G.; Jia, G.; Chen, X.; Jin, Q.; Ye, J.; et al. GOMPS: Global Attention-Based Ophthalmic Image Measurement and Postoperative Appearance Prediction System. Expert Syst. Appl. 2023, 232, 120812. [Google Scholar] [CrossRef]
  26. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015; Proceedings, Part III 18. Springer: Berlin/Heidelberg, Germany, 2015; pp. 234–241. [Google Scholar]
  27. Cai, S.; Tian, Y.; Lui, H.; Zeng, H.; Wu, Y.; Chen, G. Dense-UNet: A novel multiphoton in vivo cellular image segmentation model based on a convolutional neural network. Quant. Imaging Med. Surg. 2020, 10, 1275. [Google Scholar] [CrossRef] [PubMed]
  28. Zhou, Z.; Rahman Siddiquee, M.M.; Tajbakhsh, N.; Liang, J. Unet++: A nested u-net architecture for medical image segmentation. In Proceedings of the Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support: 4th International Workshop, DLMIA 2018, and 8th International Workshop, ML-CDS 2018, Held in Conjunction with MICCAI 2018, Granada, Spain, 20 September 2018; Proceedings 4. Springer: Berlin/Heidelberg, Germany, 2018; pp. 3–11. [Google Scholar]
  29. Wang, Z.; Zheng, J.Q.; Zhang, Y.; Cui, G.; Li, L. Mamba-unet: Unet-like pure visual mamba for medical image segmentation. arXiv 2024, arXiv:2402.05079. [Google Scholar]
  30. Chen, J.; Lu, Y.; Yu, Q.; Luo, X.; Adeli, E.; Wang, Y.; Lu, L.; Yuille, A.L.; Zhou, Y. Transunet: Transformers make strong encoders for medical image segmentation. arXiv 2021, arXiv:2102.04306. [Google Scholar]
  31. Oktay, O.; Schlemper, J.; Folgoc, L.L.; Lee, M.; Heinrich, M.; Misawa, K.; Mori, K.; McDonagh, S.; Hammerla, N.Y.; Kainz, B.; et al. Attention u-net: Learning where to look for the pancreas. arXiv 2018, arXiv:1804.03999. [Google Scholar]
  32. Alom, M.Z.; Hasan, M.; Yakopcic, C.; Taha, T.M.; Asari, V.K. Recurrent residual convolutional neural network based on u-net (r2u-net) for medical image segmentation. arXiv 2018, arXiv:1802.06955. [Google Scholar]
  33. Paszke, A.; Chaurasia, A.; Kim, S.; Culurciello, E. Enet: A deep neural network architecture for real-time semantic segmentation. arXiv 2016, arXiv:1606.02147. [Google Scholar]
  34. Zhao, H.; Qi, X.; Shen, X.; Shi, J.; Jia, J. Icnet for real-time semantic segmentation on high-resolution images. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 405–420. [Google Scholar]
  35. Wang, Y.; Zhou, Q.; Liu, J.; Xiong, J.; Gao, G.; Wu, X.; Latecki, L.J. Lednet: A lightweight encoder-decoder network for real-time semantic segmentation. In Proceedings of the 2019 IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan, 22–25 September 2019; pp. 1860–1864. [Google Scholar]
  36. Yuan, Y.; Huang, L.; Guo, J.; Zhang, C.; Chen, X.; Wang, J. Ocnet: Object context network for scene parsing. arXiv 2018, arXiv:1809.00916. [Google Scholar]
  37. Zhao, H.; Shi, J.; Qi, X.; Wang, X.; Jia, J. Pyramid scene parsing network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–27 July 2017; pp. 2881–2890. [Google Scholar]
  38. Badrinarayanan, V.; Kendall, A.; Cipolla, R. Segnet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495. [Google Scholar] [CrossRef]
  39. Ruan, J.; Xiang, S. Vm-unet: Vision mamba unet for medical image segmentation. arXiv 2024, arXiv:2402.02491. [Google Scholar]
  40. Chen, L.C.; Papandreou, G.; Schroff, F.; Adam, H. Rethinking atrous convolution for semantic image segmentation. arXiv 2017, arXiv:1706.05587. [Google Scholar]
  41. Chen, L.C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 801–818. [Google Scholar]
  42. Isensee, F.; Jaeger, P.F.; Kohl, S.A.; Petersen, J.; Maier-Hein, K.H. nnU-Net: A self-configuring method for deep learning-based biomedical image segmentation. Nat. Methods 2021, 18, 203–211. [Google Scholar] [CrossRef] [PubMed]
  43. Gu, Z.; Cheng, J.; Fu, H.; Zhou, K.; Hao, H.; Zhao, Y.; Zhang, T.; Gao, S.; Liu, J. Ce-net: Context encoder network for 2d medical image segmentation. IEEE Trans. Med. Imaging 2019, 38, 2281–2292. [Google Scholar] [CrossRef] [PubMed]
  44. Wang, S.; Liang, S.; Chang, Q.; Zhang, L.; Gong, B.; Bai, Y.; Zuo, F.; Wang, Y.; Xie, X.; Gu, Y. STSN-Net: Simultaneous Tooth Segmentation and Numbering Method in Crowded Environments with Deep Learning. Diagnostics 2024, 14, 497. [Google Scholar] [CrossRef]
Figure 1. Schematic representation of the Deformable Convolution and Mamba Integration Network (DeMambaNet), integrating a Coalescent Structural Deformable Encoder, a Cognitively-Optimized Semantic Enhance Module, and a Hierarchical Convergence Decoder.
Figure 2. Schematic representation of the CSDE, which integrates a State Space Pathway, based on SSM, in the upper section, and an Adaptive Deformable Pathway, based on DCN, in the lower section.
Figure 3. Schematic depiction of each hierarchical stage, composed of DCNv3, LN, and MLP layers, with DCNv3 serving as the core operator for efficient feature extraction.
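To make the stage structure in Figure 3 concrete, the sketch below shows one plausible layout of a deformable stage (normalization, a deformable convolution, and a channel MLP, each with a residual connection). It uses torchvision's DeformConv2d purely as a stand-in for DCNv3, whose grouped, modulated formulation is defined in InternImage [17]; the class and parameter names are illustrative assumptions, not taken from the released implementation.

```python
# Minimal sketch of a deformable-convolution stage (LN + deformable conv + MLP).
# NOTE: torchvision's DeformConv2d is only a stand-in for DCNv3, which uses a
# different grouped, softmax-modulated formulation.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class DeformableStageBlock(nn.Module):          # illustrative name, not from the paper
    def __init__(self, channels: int, mlp_ratio: int = 4):
        super().__init__()
        self.norm1 = nn.LayerNorm(channels)
        # Offsets: 2 values (dx, dy) per position of a 3x3 kernel.
        self.offset = nn.Conv2d(channels, 2 * 3 * 3, kernel_size=3, padding=1)
        self.dcn = DeformConv2d(channels, channels, kernel_size=3, padding=1)
        self.norm2 = nn.LayerNorm(channels)
        self.mlp = nn.Sequential(
            nn.Linear(channels, mlp_ratio * channels),
            nn.GELU(),
            nn.Linear(mlp_ratio * channels, channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (B, C, H, W)
        # Deformable-convolution branch with a residual connection.
        y = self.norm1(x.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)
        x = x + self.dcn(y, self.offset(y))
        # Channel MLP applied per spatial location, also residual.
        y = self.norm2(x.permute(0, 2, 3, 1))
        x = x + self.mlp(y).permute(0, 3, 1, 2)
        return x


if __name__ == "__main__":
    block = DeformableStageBlock(64)
    print(block(torch.randn(1, 64, 32, 32)).shape)   # torch.Size([1, 64, 32, 32])
```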
Figure 4. Schematic depiction of the TSMamba block, comprising GSC, ToM, LN, and MLP components that together enhance input feature processing and representation.
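Structurally, the block in Figure 4 can be read as a gated spatial convolution (GSC), a sequence-modelling step applied to the flattened feature map (where the tri-orientated Mamba operator, ToM, sits in SegMamba [16]), and an MLP, each wrapped with normalization and residual connections. The sketch below is only a structural illustration under these assumptions: the `seq_mixer` argument is a placeholder into which an actual Mamba layer (e.g., from the mamba_ssm package) could be plugged, and by default it falls back to a simple linear mixer so the example stays self-contained.

```python
# Structural sketch of a TSMamba-style block; GSC and ToM here are simplified
# stand-ins for the operators described in SegMamba [16].
from typing import Optional

import torch
import torch.nn as nn


class GatedSpatialConv(nn.Module):
    """Simplified GSC-style gating: a conv feature branch modulated by a learned gate."""
    def __init__(self, channels: int):
        super().__init__()
        self.feat = nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1),
                                  nn.BatchNorm2d(channels), nn.GELU())
        self.gate = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.feat(x) * self.gate(x)


class TSMambaStyleBlock(nn.Module):
    """Illustrative stand-in for a TSMamba-style block, not the paper's exact code."""
    def __init__(self, channels: int, seq_mixer: Optional[nn.Module] = None):
        super().__init__()
        self.gsc = GatedSpatialConv(channels)
        self.norm1 = nn.LayerNorm(channels)
        # Placeholder for ToM: any module mapping (B, L, C) -> (B, L, C).
        self.mixer = seq_mixer if seq_mixer is not None else nn.Linear(channels, channels)
        self.norm2 = nn.LayerNorm(channels)
        self.mlp = nn.Sequential(nn.Linear(channels, 4 * channels), nn.GELU(),
                                 nn.Linear(4 * channels, channels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:    # x: (B, C, H, W)
        x = self.gsc(x)
        b, c, h, w = x.shape
        seq = x.flatten(2).transpose(1, 2)                  # (B, H*W, C)
        seq = seq + self.mixer(self.norm1(seq))             # sequence-modelling step
        seq = seq + self.mlp(self.norm2(seq))               # channel MLP
        return seq.transpose(1, 2).reshape(b, c, h, w)


if __name__ == "__main__":
    block = TSMambaStyleBlock(32)
    print(block(torch.randn(2, 32, 16, 16)).shape)          # torch.Size([2, 32, 16, 16])
```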
Figure 5. Schematic depiction of the SEM, which concatenates the encoder outputs, applies Conv, BN, and ReLU, and then enhances the features with the MLP and LVC. The MLP captures global dependencies, while the LVC focuses on local details.
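As a rough illustration of the data flow in Figure 5, the following sketch concatenates two encoder outputs, fuses them with Conv-BN-ReLU, and then adds a channel-MLP refinement together with a small convolutional branch standing in for the learnable visual centre (LVC) of [18]. All names and layer sizes are assumptions for illustration; the codebook-based LVC itself is not reproduced here.

```python
# Hedged sketch of an SEM-style module: concat -> Conv-BN-ReLU -> global + local refinement.
import torch
import torch.nn as nn


class SemanticEnhanceSketch(nn.Module):
    def __init__(self, in_channels: int, channels: int):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * in_channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        # Simplified global-refinement branch: channel MLP applied at each position.
        self.mlp = nn.Sequential(nn.Linear(channels, 4 * channels), nn.GELU(),
                                 nn.Linear(4 * channels, channels))
        # Local branch focusing on fine details (plain convolutional LVC stand-in).
        self.local = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 1),
        )

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        x = self.fuse(torch.cat([feat_a, feat_b], dim=1))
        glob = self.mlp(x.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)
        loc = self.local(x)
        return x + glob + loc          # combine global and local refinements


if __name__ == "__main__":
    sem = SemanticEnhanceSketch(64, 64)
    a, b = torch.randn(1, 64, 16, 16), torch.randn(1, 64, 16, 16)
    print(sem(a, b).shape)             # torch.Size([1, 64, 16, 16])
```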
Figure 6. Schematic of the HCD, which incorporates a multi-layered decoder structure. Each tier combines convolutional and deconvolutional layers for feature enhancement and upsampling and is equipped with a TAFI module designed specifically for feature fusion.
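One decoder tier of the kind sketched in Figure 6 can be approximated as a convolutional refinement of the incoming features followed by a transposed convolution that doubles the spatial resolution; a separate sketch of the TAFI fusion follows the Figure 7 caption. The layer choices and names below are assumptions for illustration, not the exact configuration used in DeMambaNet.

```python
# Sketch of a single HCD-style decoder tier: refine, then upsample by 2x.
import torch
import torch.nn as nn


class DecoderTierSketch(nn.Module):
    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Conv2d(in_channels, in_channels, 3, padding=1),
            nn.BatchNorm2d(in_channels), nn.ReLU(inplace=True),
        )
        # Transposed convolution doubles the spatial resolution.
        self.up = nn.ConvTranspose2d(in_channels, out_channels, kernel_size=2, stride=2)

    def forward(self, fused_skip: torch.Tensor, deeper: torch.Tensor) -> torch.Tensor:
        x = self.refine(fused_skip + deeper)   # combine fused skip and deeper-level features
        return self.up(x)


if __name__ == "__main__":
    tier = DecoderTierSketch(128, 64)
    skip, deep = torch.randn(1, 128, 16, 16), torch.randn(1, 128, 16, 16)
    print(tier(skip, deep).shape)              # torch.Size([1, 64, 32, 32])
```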
Figure 7. Schematic representation of the TAFI, which combines features from the encoder’s two pathways and uses local and global attention modules to emphasize important information.
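In the spirit of attentional feature fusion [19], a TAFI-like unit can be sketched as follows: the two pathway features are summed, a local (per-pixel) and a global (pooled) channel-attention branch produce a fusion weight, and that weight blends the two inputs. This is a hedged illustration of the idea rather than the exact implementation used in this work; module names and the reduction ratio are assumptions.

```python
# Hedged sketch of a TAFI-style fusion unit with local and global attention branches.
import torch
import torch.nn as nn


def _attn_branch(channels: int, reduction: int = 4) -> nn.Sequential:
    mid = max(channels // reduction, 4)
    return nn.Sequential(
        nn.Conv2d(channels, mid, kernel_size=1),
        nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
        nn.Conv2d(mid, channels, kernel_size=1),
        nn.BatchNorm2d(channels),
    )


class TafiStyleFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.local_attn = _attn_branch(channels)           # per-pixel attention
        self.global_attn = nn.Sequential(                  # pooled, global attention
            nn.AdaptiveAvgPool2d(1), _attn_branch(channels))

    def forward(self, feat_ssp: torch.Tensor, feat_adp: torch.Tensor) -> torch.Tensor:
        merged = feat_ssp + feat_adp
        weight = torch.sigmoid(self.local_attn(merged) + self.global_attn(merged))
        # Weighted blend of the two pathways, emphasising the more informative one.
        return weight * feat_ssp + (1.0 - weight) * feat_adp


if __name__ == "__main__":
    fuse = TafiStyleFusion(64)
    x, y = torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32)
    print(fuse(x, y).shape)                                 # torch.Size([2, 64, 32, 32])
```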
Figure 8. Box plot showcasing the evaluation metrics index from training results. On the x-axis, the models are labeled as follows: (a) ENet; (b) ICNet; (c) LEDNet; (d) OCNet; (e) PSPNet; (f) SegNet; (g) VM-UNet; (h) Attention U-Net; (i) R2U-Net; (j) UNet; (k) UNet++; (l) TransUNet; (m) Dense-UNet; (n) Mamba-UNet; (o) DeMambaNet (ours).
Figure 9. Example segmentation results comparing our proposed method with existing state-of-the-art models. The tooth segmentation results are shown in green, and the red dashed line represents the ground truth.
Figure 10. Box plot showcasing the evaluation metrics index of ablation experiments. On the x-axis, the models are labeled as follows: (a) w/o SSP; (b) w/o ADP; (c) w/o TAFI; (d) w/o SEM; (e) DeMambaNet (ours).
Table 1. Comparison of our proposed method with state-of-the-art methods on the evaluation metrics for the 2023 MICCAI Teeth Segmentation Dataset. The best values for each metric are highlighted in red, while the second-best values are highlighted in blue.
| Model | Dice (%) ↑ | IoU (%) ↑ | 95 Hausdorff ↓ | Accuracy (%) ↑ | Kappa (%) ↑ | MCC (%) ↑ | GFLOPS ↓ | Params (MB) ↓ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| UNet [26] | 92.84 ± 4.54 | 86.86 ± 5.28 | 7.633 ± 1.144 | 97.22 ± 0.84 | 91.10 ± 4.69 | 91.17 ± 6.12 | 302.07 | 29.6 |
| Dense-UNet [27] | 92.55 ± 4.94 | 86.38 ± 5.61 | 7.920 ± 1.145 | 97.16 ± 0.91 | 90.77 ± 5.11 | 90.80 ± 6.60 | 497.41 | 17.51 |
| UNet++ [28] | 91.76 ± 5.36 | 85.07 ± 6.31 | 8.136 ± 1.115 | 96.83 ± 0.97 | 89.78 ± 5.57 | 89.90 ± 6.85 | 217.25 | 8.74 |
| Mamba-UNet [29] | 91.75 ± 4.74 | 84.98 ± 5.13 | 8.393 ± 1.118 | 96.82 ± 0.76 | 89.76 ± 4.79 | 90.00 ± 4.74 | 17.61 | 9.53 |
| TransUNet [30] | 90.87 ± 6.57 | 83.67 ± 6.94 | 8.423 ± 1.041 | 96.49 ± 0.87 | 88.67 ± 6.60 | 89.05 ± 6.50 | 147.51 | 64.72 |
| Attention U-Net [31] | 87.11 ± 6.65 | 77.56 ± 7.17 | 10.315 ± 1.068 | 95.09 ± 1.00 | 84.04 ± 6.67 | 84.34 ± 6.61 | 416.77 | 33.26 |
| R2U-Net [32] | 85.81 ± 8.54 | 75.91 ± 10.37 | 10.311 ± 1.476 | 95.19 ± 1.86 | 82.95 ± 9.03 | 83.55 ± 8.89 | 240.22 | 9.33 |
| ENet [33] | 92.33 ± 4.77 | 85.98 ± 5.21 | 8.101 ± 1.073 | 97.03 ± 0.82 | 90.46 ± 4.87 | 90.50 ± 6.42 | 3.22 | 0.34 |
| ICNet [34] | 91.42 ± 4.84 | 84.43 ± 5.33 | 8.700 ± 0.982 | 96.80 ± 0.77 | 89.42 ± 4.90 | 89.32 ± 6.42 | 57.88 | 26.98 |
| LEDNet [35] | 90.84 ± 4.68 | 83.43 ± 5.00 | 8.874 ± 0.967 | 96.41 ± 0.74 | 88.59 ± 4.70 | 88.68 ± 6.23 | 9.92 | 2.21 |
| OCNet [36] | 87.47 ± 6.05 | 78.09 ± 6.96 | 9.838 ± 1.110 | 94.92 ± 1.25 | 84.29 ± 6.23 | 84.78 ± 7.06 | 367.63 | 52.48 |
| PSPNet [37] | 91.97 ± 3.63 | 85.27 ± 4.17 | 8.361 ± 0.977 | 96.86 ± 0.68 | 90.00 ± 3.68 | 90.03 ± 5.55 | 288.95 | 46.5 |
| SegNet [38] | 92.33 ± 3.73 | 85.91 ± 4.44 | 8.071 ± 1.000 | 97.00 ± 0.72 | 90.45 ± 3.84 | 90.51 ± 5.65 | 251.24 | 28.08 |
| VM-UNet [39] | 92.24 ± 3.90 | 85.75 ± 4.66 | 8.200 ± 1.094 | 96.98 ± 0.78 | 90.34 ± 4.02 | 90.47 ± 4.94 | 33.41 | 26.16 |
| DeMambaNet (ours) | 93.38 ± 4.80 | 87.81 ± 5.30 | 7.494 ± 1.165 | 97.45 ± 0.84 | 91.78 ± 4.93 | 91.98 ± 4.87 | 216.24 | 41.25 |
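For readers who want to reproduce the overlap metrics reported above on their own predictions, the snippet below shows the standard definitions of Dice and IoU for a binary tooth mask. It is a generic reference implementation, not the evaluation code used to produce Table 1.

```python
# Standard Dice and IoU for binary masks (generic sketch, not the authors' pipeline).
import numpy as np


def dice_iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7):
    """pred, target: boolean arrays of identical shape, e.g. (H, W)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
    iou = (inter + eps) / (union + eps)
    return dice, iou


if __name__ == "__main__":
    pred = np.zeros((64, 64), dtype=bool); pred[8:40, 8:40] = True
    gt = np.zeros((64, 64), dtype=bool);   gt[16:48, 16:48] = True
    d, j = dice_iou(pred, gt)
    print(f"Dice = {d:.4f}, IoU = {j:.4f}")
```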
Table 2. Comparison of our proposed method with other methods on the evaluation metrics for the Tufts Dental Database. The best values for each metric are highlighted in red, while the second-best values are highlighted in blue.
| Model | Backbone | Dice (%) ↑ | IoU (%) ↑ | Accuracy (%) ↑ |
| --- | --- | --- | --- | --- |
| UNet [26] | - | 91.26 | 84.09 | 98.04 |
| PSPNet [37] | ResNet18 | 91.49 | 85.66 | 94.76 |
| DeepLabV3 [40] | ResNet18 | 91.87 | 86.02 | 94.91 |
| DeepLabV3+ [41] | ResNet18 | 91.80 | 86.41 | 95.13 |
| nnUNet [42] | - | 90.86 | 86.11 | 94.91 |
| CE-Net [43] | - | 86.62 | 81.64 | 92.67 |
| DeMambaNet (ours) | - | 92.11 | 85.50 | 98.20 |
Table 3. Ablation experiments on several key modules to assess their impact on overall performance: the State Space Pathway (SSP) and the Adaptive Deformable Pathway (ADP) in the CSDE, the SEM, and the TAFI module in the HCD. Each module's contribution is quantified by removing it individually and observing the resulting performance change. The best values for each metric are highlighted in red, while the second-best values are highlighted in blue. ↑ indicates that higher values are better; ↓ indicates that lower values are better.
| SSP | ADP | TAFI | SEM | Dice (%) ↑ | IoU (%) ↑ | 95 Hausdorff ↓ | Accuracy (%) ↑ | Kappa (%) ↑ | MCC (%) ↑ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ✗ | ✓ | ✓ | ✓ | 92.37 ± 4.83 | 86.05 ± 5.34 | 8.01 ± 1.18 | 97.09 ± 0.84 | 90.55 ± 4.94 | 90.75 ± 4.89 |
| ✓ | ✗ | ✓ | ✓ | 91.91 ± 6.11 | 85.39 ± 6.43 | 8.14 ± 1.11 | 96.93 ± 0.76 | 89.99 ± 6.12 | 90.28 ± 5.94 |
| ✓ | ✓ | ✗ | ✓ | 92.96 ± 4.85 | 87.08 ± 5.45 | 7.66 ± 1.13 | 97.27 ± 0.85 | 91.25 ± 4.98 | 91.49 ± 4.90 |
| ✓ | ✓ | ✓ | ✗ | 93.24 ± 4.80 | 87.56 ± 5.33 | 7.55 ± 1.15 | 97.38 ± 0.84 | 91.60 ± 4.94 | 91.81 ± 4.88 |
| ✓ | ✓ | ✓ | ✓ | 93.38 ± 4.80 | 87.81 ± 5.30 | 7.49 ± 1.17 | 97.45 ± 0.84 | 91.78 ± 4.93 | 91.98 ± 4.87 |
Table 4. Ablation study demonstrating the contribution of the TAFI module to preserving diversity in the feature representation at each layer of the decoder, conducted by removing the TAFI from each layer individually. This quantifies the specific impact of the TAFI at different stages of the decoding process and highlights its effectiveness in maintaining diverse feature representations. The best values for each metric are highlighted in red, while the second-best values are highlighted in blue. ↑ indicates that higher values are better; ↓ indicates that lower values are better.
| TAFI1 | TAFI2 | TAFI3 | Dice (%) ↑ | IoU (%) ↑ | 95 Hausdorff ↓ | Accuracy (%) ↑ | Kappa (%) ↑ | MCC (%) ↑ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ✗ | ✓ | ✓ | 93.26 ± 5.68 | 87.67 ± 5.93 | 7.52 ± 1.14 | 97.45 ± 0.80 | 91.66 ± 5.74 | 91.84 ± 5.71 |
| ✓ | ✗ | ✓ | 93.02 ± 4.72 | 87.16 ± 5.10 | 7.67 ± 1.09 | 97.31 ± 0.76 | 91.33 ± 4.81 | 91.54 ± 4.76 |
| ✓ | ✓ | ✗ | 93.36 ± 4.76 | 87.76 ± 5.18 | 7.51 ± 1.16 | 97.46 ± 0.80 | 91.77 ± 4.86 | 91.94 ± 4.82 |
| ✓ | ✓ | ✓ | 93.38 ± 4.80 | 87.81 ± 5.30 | 7.49 ± 1.17 | 97.45 ± 0.84 | 91.78 ± 4.93 | 91.98 ± 4.87 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
