
CN118365964B - Deep learning-based recognition method for tooth position and periapical periodontitis of oral cavity curved surface fracture layer - Google Patents


Info

Publication number
CN118365964B
Authority
CN
China
Prior art keywords
deep learning
model
data
curved surface
oral cavity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410766284.9A
Other languages
Chinese (zh)
Other versions
CN118365964A (en)
Inventor
王胜朝
蒋文凯
张轶丹
孔维阳
张倩霞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Air Force Medical University of PLA
Original Assignee
Air Force Medical University of PLA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Air Force Medical University of PLA filed Critical Air Force Medical University of PLA
Priority to CN202410766284.9A priority Critical patent/CN118365964B/en
Publication of CN118365964A publication Critical patent/CN118365964A/en
Application granted granted Critical
Publication of CN118365964B publication Critical patent/CN118365964B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • G06N3/0442Recurrent networks, e.g. Hopfield networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/70Labelling scene content, e.g. deriving syntactic or semantic representations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03Recognition of patterns in medical or anatomical images
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Biomedical Technology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The application discloses a deep learning-based method for recognizing tooth positions and periapical periodontitis in oral curved-surface tomograms, comprising the following steps: establishing an oral curved-surface tomogram dataset; training a deep learning contour segmentation model on the tomogram dataset; inputting the tomogram dataset into the trained contour segmentation model, outputting contour data of tooth positions and periapical periodontitis, and converting the contour-data features into sequence data to obtain a sequence dataset; training a deep learning sequence classification model on the sequence dataset; and performing actual tooth-position and periapical-periodontitis recognition with the trained contour segmentation model and the trained sequence classification model. By combining the deep learning contour segmentation model with the deep learning sequence classification model, converting the contour-data features into sequence data and then performing a second-stage recognition, the application improves both recognition accuracy and model processing efficiency.

Description

Deep learning-based recognition method for tooth position and periapical periodontitis of oral cavity curved surface fracture layer
Technical Field
The invention relates to the technical field of image recognition, and in particular to a deep learning-based method for recognizing tooth positions and periapical periodontitis in oral curved-surface tomograms.
Background
Deep learning, an important branch of AI technology, has been successfully applied to many types of medical image processing, including the analysis of X-ray, CT, MRI and ultrasound images. Deep-learning image recognition techniques for dental images already exist and provide analytical references for doctors.
In the prior art, patent publication No. CN116363083A discloses a caries detection method, system and readable storage medium based on a hybrid model. An alveolar-bone detection model based on a deep learning object-detection algorithm is adopted to extract the local alveolar-bone region, which narrows the detection range for the subsequent caries detection model and tooth detection model and thereby improves detection efficiency. Two independent deep learning object segmentation models with the same structure are used, in which the conventional detection head is optimized with a self-attention mechanism and conventional convolution layers are replaced by self-attention layers, improving the accuracy of tooth instances and caries instances. Finally, a result-fusion algorithm retains the confidences produced for the same target in the different detection passes and accumulates the confidence weights to obtain a more accurate caries detection result.
When image data contains sequence features, combining those features in recognition can improve accuracy. The prior art, however, has not studied this aspect: the sequence features in the image data are not effectively utilized, and recognition accuracy is therefore poor. In addition, because the several deep learning models in the prior art all process image data directly, model processing efficiency is low.
Disclosure of Invention
The invention provides a deep learning-based method for identifying tooth positions and periapical periodontitis in oral curved-surface tomograms, which addresses the low tooth-position and periapical-periodontitis recognition accuracy and the low model processing efficiency of the prior art.
On one hand, the invention provides a method for identifying tooth positions and periapical periodontitis of an oral cavity curved surface fracture layer based on deep learning, which comprises the following steps:
Step one: establish an oral curved-surface tomogram dataset.
Step two: establish a deep learning contour segmentation model, and train it on the oral curved-surface tomogram dataset to obtain a trained deep learning contour segmentation model.
Step three: input the oral curved-surface tomogram dataset into the trained deep learning contour segmentation model, output contour data of tooth positions and periapical periodontitis, and convert the contour-data features into sequence data to obtain a sequence dataset.
Step four: establish a deep learning sequence classification model, and train it on the sequence dataset to obtain a trained deep learning sequence classification model.
Step five: perform actual tooth-position and periapical-periodontitis identification using the trained deep learning contour segmentation model and the trained deep learning sequence classification model.
In one possible implementation, the step one includes:
Collecting oral cavity curved surface tomogram data.
Classifying and marking the oral cavity curved surface tomogram data to obtain an oral cavity curved surface tomogram data set.
In a possible implementation manner, in the second step, the Mask-RCNN model is adopted as the deep learning contour segmentation model.
In a possible implementation manner, in the second step, during training of the deep learning contour segmentation model on the oral curved-surface tomogram dataset, the learning rate is adjusted according to a preset step schedule and a decay factor.
In one possible implementation manner, in the third step, outputting the contour data of tooth positions and periapical periodontitis and converting the contour-data features into sequence data comprises:
performing coordinate deconstruction on the tooth-position and periapical-periodontitis contour data to obtain contour-feature coordinate data, i.e. the sequence data.
In a possible implementation manner, in the third step, after the contour data of tooth positions and periapical periodontitis are output and the contour-data features are converted into sequence data, the sequence data is standardized.
In a possible implementation manner, in the fourth step, the deep learning sequence classification model adopts a BiLSTM model.
In a possible implementation manner, in step four, the BiLSTM model performs model weight optimization by using Adam optimizer during training.
In a possible implementation manner, in the fourth step, the BiLSTM model adopts a cross entropy loss function in the training process.
The recognition method for the tooth position and periapical periodontitis of the oral cavity curved surface fracture layer based on deep learning has the following advantages:
By combining the deep learning contour segmentation model and the deep learning sequence classification model, the contour data features are converted into sequence data and then secondary recognition is carried out, so that the recognition accuracy and the model processing efficiency are improved.
The proposed deep learning contour segmentation model adopts a Mask-RCNN model, so that the precision of contour segmentation is improved.
In the process of training the deep learning contour segmentation model by adopting the oral cavity curved surface fracture slice data set, the learning rate is adjusted according to the preset step length and the attenuation factor, so that the model training convergence speed is improved.
The contour feature coordinate data, namely the sequence data, is obtained by carrying out coordinate deconstruction on the tooth position and periapical periodontitis contour data, and the image recognition is converted into the sequence recognition, so that the accuracy and the efficiency of the subsequent recognition are improved.
By carrying out standardized processing on the sequence data, the dimensional difference among different features is eliminated, so that the influence of each feature on the result is more balanced.
The proposed deep learning sequence classification model adopts BiLSTM model, which can better capture the context relation in the sequence data, thereby improving the recognition accuracy.
Model weight optimization is carried out by adopting an Adam optimizer, and model convergence speed is improved by momentum and self-adaptive learning rate.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flow chart of a method for identifying tooth positions and periapical periodontitis of an oral cavity curved surface fracture layer based on deep learning according to an embodiment of the invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
As shown in fig. 1, the embodiment of the invention provides a deep learning-based method for identifying tooth positions and periapical periodontitis in oral curved-surface tomograms, comprising the following steps:
Step one: establish an oral curved-surface tomogram dataset.
Step two: establish a deep learning contour segmentation model, and train it on the oral curved-surface tomogram dataset to obtain a trained deep learning contour segmentation model.
Step three: input the oral curved-surface tomogram dataset into the trained deep learning contour segmentation model, output contour data of tooth positions and periapical periodontitis, and convert the contour-data features into sequence data to obtain a sequence dataset.
Step four: establish a deep learning sequence classification model, and train it on the sequence dataset to obtain a trained deep learning sequence classification model.
Step five: perform actual tooth-position and periapical-periodontitis identification using the trained deep learning contour segmentation model and the trained deep learning sequence classification model.
Illustratively, step one comprises:
Collecting oral cavity curved surface tomogram data.
Classifying and marking the oral cavity curved surface tomogram data to obtain an oral cavity curved surface tomogram data set.
Specifically, this embodiment collected a total of 3500 oral curved-surface tomograms, of which 2606 were valid (i.e. each image contains at least one wisdom tooth). Of the valid data, 1693 images were provided by a hospital and 913 images were obtained from a public dataset.
The classification procedure for the oral curved-surface tomogram data in this embodiment is as follows: according to the FDI tooth-numbering system, tooth positions in the tomogram data are denoted "11, 12, 13, 14, 15, 16, 17, 18, 21, 22, 23, 24, 25, 26, 27, 28, 31, 32, 33, 34, 35, 36, 37, 38, 41, 42, 43, 44, 45, 46, 47, 48", and the data is classified into these 32 tooth-position classes. In addition, when a tomogram contains a periapical periodontitis region, that region is classified as "GJZY".
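As an illustration, the 32 FDI tooth-position labels plus the "GJZY" lesion class described above can be generated programmatically. This is a minimal sketch; the helper name is an assumption, while the label strings follow the FDI two-digit convention (quadrant 1-4, tooth index 1-8) quoted in the text.

```python
def build_class_labels():
    # 32 FDI tooth positions: quadrant (1-4) followed by tooth index (1-8)
    labels = [f"{q}{t}" for q in (1, 2, 3, 4) for t in range(1, 9)]
    labels.append("GJZY")  # periapical periodontitis region class
    return labels

labels = build_class_labels()
print(len(labels))   # 33
print(labels[:4])    # ['11', '12', '13', '14']
```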
This embodiment also uses the LabelImg image annotation tool to label the classified oral curved-surface tomogram data by category. Labelling was performed manually; after labelling, professional doctors carried out careful review with repeated checks to confirm that no label was duplicated or missed and that the class-division standard was applied uniformly. A total of 96281 regions were labelled, of which 285 are periapical periodontitis regions and the rest are tooth-position regions.
Illustratively, in the second step, the Mask-RCNN model is adopted as the deep learning contour segmentation model.
Specifically, this embodiment uses a pre-trained ResNet network as the feature-extraction backbone of the Mask-RCNN model, so that the contours of tooth regions and periapical periodontitis regions in the oral curved-surface tomogram data can be accurately identified; the classification and identification of the tooth regions and periapical periodontitis regions is then carried out by the subsequent deep learning sequence classification model.
Specifically, the Mask-RCNN model evolved from the Faster-RCNN model, and the training data of its head network is likewise obtained through the pooling operation of the RoIAlign layer. The input data of the head network first passes through a two-layer convolutional network, then through two fully connected layers, and is finally fed into a softmax layer and a linear-regression activation layer respectively. The softmax layer is trained with a cross-entropy loss function, and the linear regression layer is trained with a Smooth L1 loss function; this is consistent with the training of the anchors in the RPN network.
In the second step, the learning rate is adjusted according to a preset step length and an attenuation factor in the training process of the deep learning contour segmentation model by adopting the oral cavity curved surface fracture slice data set.
Specifically, in this embodiment the preset schedule for learning-rate adjustment is: at the end of the 16th and 22nd training iterations, the learning rate is multiplied by a decay factor of 0.1. Using a larger learning rate for fast convergence in the early stage of training and gradually reducing it in the later stage fine-tunes the model and improves its overall performance.
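The step-wise decay schedule can be sketched in a few lines; only the milestone iterations (16 and 22) and the decay factor 0.1 come from the text, while the base learning rate and the helper name are illustrative assumptions.

```python
def stepped_lr(base_lr, iteration, milestones=(16, 22), gamma=0.1):
    """Learning rate in effect at a given training iteration."""
    lr = base_lr
    for m in milestones:
        if iteration >= m:
            lr *= gamma  # decay by the attenuation factor at each milestone
    return lr

print(stepped_lr(0.01, 1))    # base rate, before the first milestone
print(stepped_lr(0.01, 22))   # decayed twice: 0.01 * 0.1 * 0.1
```

In a PyTorch setting the same schedule is typically expressed with a multi-step learning-rate scheduler attached to the optimizer.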
Illustratively, in the third step, the outputting the contour data of the dental site and periapical periodontitis, and converting the contour data feature into the sequence data includes:
And carrying out coordinate deconstruction on the tooth position and periapical periodontitis outline data to obtain outline characteristic coordinate data, namely the sequence data.
Specifically, coordinate deconstruction is performed on the tooth-position and periapical-periodontitis contour data: the contour-feature coordinate data converted from each image comprises the coordinate positions of the contour points in that image. A series of features can be obtained by analysing these coordinate positions, for example: averaging the coordinates of all points of a contour gives the contour's centre coordinate; from the Y-axis coordinates of all contour centres and their distribution it can be determined whether a contour belongs to the upper or lower row; the area of a contour can be calculated and compared with the areas of other contours; and so on.
In this embodiment, the contour-feature sequence data contains the following 10 features:
1. The X-axis coordinate of the centre position, reflecting the overall left-right location of a contour.
2. The Y-axis coordinate of the centre position, reflecting the overall up-down location of a contour.
3. The area of the contour; tooth regions and periapical periodontitis regions follow different area patterns.
4. Whether a contour exists on the left, reflecting the contour distribution on the left side.
5. Whether a contour exists on the right, reflecting the contour distribution on the right side.
6. Whether the contour is higher on the Y axis, which helps determine whether the contour's tooth is in the upper row.
7. The X-axis maximum coordinate point, reflecting the arrangement of a contour relative to the contours on its left and right.
8. The X-axis minimum coordinate point, likewise reflecting the arrangement relative to the contours on the left and right.
9. The Y-axis maximum coordinate point, reflecting the arrangement of a contour relative to the contours above and below it.
10. The Y-axis minimum coordinate point, likewise reflecting the arrangement relative to the contours above and below.
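As a minimal sketch, assuming the segmentation model outputs each contour as a list of (x, y) points, several of the listed features (centre coordinates, area via the shoelace formula, and the extreme coordinate points) might be computed as follows; the function name and dictionary keys are illustrative assumptions, not from the patent.

```python
def contour_features(points):
    """Per-contour features from a closed polygon given as (x, y) points."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    cx = sum(xs) / len(xs)          # centre X coordinate
    cy = sum(ys) / len(ys)          # centre Y coordinate
    # shoelace formula for the polygon area
    n = len(points)
    area = abs(sum(points[i][0] * points[(i + 1) % n][1]
                   - points[(i + 1) % n][0] * points[i][1]
                   for i in range(n))) / 2.0
    return {
        "cx": cx, "cy": cy, "area": area,
        "x_max": max(xs), "x_min": min(xs),
        "y_max": max(ys), "y_min": min(ys),
    }

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
feats = contour_features(square)
print(feats["cx"], feats["cy"], feats["area"])  # 1.0 1.0 4.0
```

The remaining listed features (left/right neighbour presence, upper/lower row membership) would be derived by comparing these per-contour values across all contours in an image.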
In the third step, after the contour data of the tooth positions and periapical periodontitis are output and the contour-data features are converted into sequence data, the sequence data is subjected to standardization processing.
Specifically, the normalization process scales each feature of the contour-feature coordinate data (i.e. the sequence data) to a range with mean 0 and standard deviation 1, as given by the following formula:

Z = (x - u) / σ

where x represents a raw data value, u represents the mean of all raw values of the feature, σ represents the standard deviation of all raw values of the feature, and Z represents the standardized value. By standardizing every value of every feature of the contour-feature coordinate data, the data distribution of each feature is centred at 0 with a standard deviation of 1. This eliminates the dimensional differences between different features, balances the influence of each feature on the result, and improves model training efficiency and effectiveness.
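A minimal sketch of this z-score standardization applied to one feature column (the helper name is an assumption):

```python
import math

def standardize(values):
    """Rescale one feature column to mean 0 and standard deviation 1."""
    u = sum(values) / len(values)                                 # mean
    sigma = math.sqrt(sum((x - u) ** 2 for x in values) / len(values))
    return [(x - u) / sigma for x in values]                      # Z = (x - u) / sigma

z = standardize([2.0, 4.0, 6.0, 8.0])
print(abs(sum(z) / len(z)) < 1e-9)  # True: mean is 0 after standardization
```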
Illustratively, in the fourth step, the deep learning sequence classification model adopts a BiLSTM model.
Specifically, the BiLSTM model is a recurrent neural network for processing sequence data. It consists of two LSTM models: a forward model that reads the input sequence data in time order and a backward model that reads the input sequence data in reverse order. This bidirectional design enables the BiLSTM model to take both past and future context information into account.
Specifically, the LSTM model contains three main gating structures: the forget gate, the input gate and the output gate.
The forget gate is calculated as follows:

f_t = σ(W_f · [h_{t-1}, x_t] + b_f)

where f_t represents the output of the forget gate, σ represents the sigmoid activation function, W_f represents the weight matrix of the forget gate, h_{t-1} represents the hidden state of the previous time step, x_t represents the input at the current time step, and b_f represents the bias term of the forget gate.
The input gate is calculated as follows:

i_t = σ(W_i · [h_{t-1}, x_t] + b_i)
C̃_t = tanh(W_C · [h_{t-1}, x_t] + b_C)

where i_t represents the sigmoid-activated part of the input gate, C̃_t represents the candidate value of the input gate, obtained through the tanh activation function, W_i and W_C represent the weight matrices of the input gate and of the candidate value respectively, and b_i and b_C represent the corresponding bias terms.
The output gate is calculated as follows:

o_t = σ(W_o · [h_{t-1}, x_t] + b_o)
C_t = f_t ⊙ C_{t-1} + i_t ⊙ C̃_t
h_t = o_t ⊙ tanh(C_t)

where o_t represents the output of the output gate, W_o represents the weight matrix of the output gate, b_o represents the bias term of the output gate, C_t represents the cell state at the current time step, and h_t represents the hidden state of the current time step.
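As an illustration of the three gate computations, a single LSTM time step can be sketched in pure Python with scalar states and hand-picked weights (a real BiLSTM uses learned weight matrices over vector states; all names and values here are illustrative assumptions):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x_t, h_prev, c_prev, w):
    # forget gate, input gate, candidate value, output gate (scalar form)
    f_t = sigmoid(w["Wf"] * h_prev + w["Uf"] * x_t + w["bf"])
    i_t = sigmoid(w["Wi"] * h_prev + w["Ui"] * x_t + w["bi"])
    c_hat = math.tanh(w["Wc"] * h_prev + w["Uc"] * x_t + w["bc"])
    c_t = f_t * c_prev + i_t * c_hat          # new cell state
    o_t = sigmoid(w["Wo"] * h_prev + w["Uo"] * x_t + w["bo"])
    h_t = o_t * math.tanh(c_t)                # new hidden state
    return h_t, c_t

w = {k: 0.5 for k in ("Wf", "Uf", "Wi", "Ui", "Wc", "Uc", "Wo", "Uo")}
w.update({k: 0.0 for k in ("bf", "bi", "bc", "bo")})
h, c = lstm_step(1.0, 0.0, 0.0, w)
print(0.0 < h < 1.0, 0.0 < c < 1.0)  # True True
```

A BiLSTM runs one such model over the sequence forwards and a second over it backwards, combining the two hidden states at each position.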
Illustratively, in the fourth step, the BiLSTM model performs model weight optimization using Adam optimizer during training.
Specifically, the Adam optimizer combines the advantages of the Momentum and RMSprop optimizers, adjusting the model weights so as to minimize the loss function. The update rules used by the Adam optimizer are more complex than standard stochastic gradient descent (SGD): they involve momentum and adaptive learning rates, which improves the model's convergence speed.
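A minimal single-parameter sketch of the Adam update rule described above, using the commonly cited default hyperparameters as assumptions (the patent does not specify them):

```python
import math

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * grad           # first moment (momentum)
    v = b2 * v + (1 - b2) * grad ** 2      # second moment (adaptive rate)
    m_hat = m / (1 - b1 ** t)              # bias correction
    v_hat = v / (1 - b2 ** t)
    return theta - lr * m_hat / (math.sqrt(v_hat) + eps), m, v

theta, m, v = 1.0, 0.0, 0.0
for t in range(1, 4):                      # three steps on f(x) = x^2, grad 2x
    theta, m, v = adam_step(theta, 2 * theta, m, v, t)
print(theta < 1.0)  # True: the parameter moves toward the minimum
```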
Illustratively, in the fourth step, the BiLSTM model uses a cross entropy loss function as the loss function in training.
Specifically, the cross-entropy loss function is used to calculate the cross entropy between the recognition result of the BiLSTM model and the true label; it is a common loss function for binary and multi-class classification problems. Its formula is as follows:

L = - Σ_i y_i · log(p_i)

where y_i represents the one-hot encoding of the true label, p_i represents the probability assigned to class i in the recognition result, and i indexes the recognition classes.
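The cross-entropy formula for one sample can be sketched as follows; the distributions used are invented for illustration:

```python
import math

def cross_entropy(y, p):
    """Loss for one sample: y is one-hot, p is the predicted distribution."""
    return -sum(yi * math.log(pi) for yi, pi in zip(y, p) if yi > 0)

good = cross_entropy([0, 1, 0], [0.05, 0.90, 0.05])  # confident and correct
bad = cross_entropy([0, 1, 0], [0.90, 0.05, 0.05])   # confident and wrong
print(good < bad)  # True: confident wrong predictions are penalized heavily
```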
In one possible embodiment, the oral curved-surface tomogram dataset is divided into a training set, a validation set and a test set in a 6:2:2 ratio. After 200 training iterations, the trained deep learning contour segmentation model and the trained deep learning sequence classification model were evaluated on the test set, achieving an accuracy of 79.03%, a precision of 78.60%, a recall of 78.17% and an F1 score of 78.15%.
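For reference, the reported metrics follow the standard definitions, sketched here for a toy binary confusion matrix (the patent's figures are presumably averaged over all classes; the counts below are invented for illustration):

```python
def metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall and F1 from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

acc, prec, rec, f1 = metrics(tp=8, fp=2, fn=2, tn=8)
print(acc, prec, rec)   # 0.8 0.8 0.8
print(round(f1, 4))     # 0.8
```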
According to the embodiment of the invention, the deep learning contour segmentation model and the deep learning sequence classification model are combined, the contour data features are converted into the sequence data, and then secondary recognition is performed, so that the recognition accuracy and the model processing efficiency are improved.
The proposed deep learning contour segmentation model adopts a Mask-RCNN model, so that the precision of contour segmentation is improved.
In the process of training the deep learning contour segmentation model by adopting the oral cavity curved surface fracture slice data set, the learning rate is adjusted according to the preset step length and the attenuation factor, so that the model training convergence speed is improved.
The contour feature coordinate data, namely the sequence data, is obtained by carrying out coordinate deconstruction on the tooth position and periapical periodontitis contour data, and the image recognition is converted into the sequence recognition, so that the accuracy and the efficiency of the subsequent recognition are improved.
By carrying out standardized processing on the sequence data, the dimensional difference among different features is eliminated, so that the influence of each feature on the result is more balanced.
The proposed deep learning sequence classification model adopts BiLSTM model, which can better capture the context relation in the sequence data, thereby improving the recognition accuracy.
Model weight optimization is carried out by adopting an Adam optimizer, and model convergence speed is improved by momentum and self-adaptive learning rate.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (5)

1. The method for identifying the tooth position and periapical periodontitis of the oral cavity curved surface fracture layer based on deep learning is characterized by comprising the following steps of:
Step one, establishing an oral cavity curved surface fracture slice data set;
Step two, establishing a deep learning contour segmentation model, and training the deep learning contour segmentation model by adopting the oral cavity curved surface fracture slice data set to obtain a trained deep learning contour segmentation model;
Step three, inputting the oral cavity curved surface fracture slice data set into the trained deep learning contour segmentation model, outputting contour data of tooth positions and periapical periodontitis, and converting the contour data features into sequence data to obtain a sequence data set;
Step four, establishing a deep learning sequence classification model, and training the deep learning sequence classification model by adopting the sequence data set to obtain a trained deep learning sequence classification model;
Step five, carrying out actual tooth position and periapical periodontitis identification by adopting the trained deep learning contour segmentation model and the trained deep learning sequence classification model;
in the third step, the outputting of the contour data of the tooth positions and periapical periodontitis, and the converting of the contour data features into sequence data, comprises:
carrying out coordinate deconstruction on the tooth position and periapical periodontitis contour data to obtain contour feature coordinate data, namely the sequence data;
in the fourth step, a BiLSTM model is adopted as the deep learning sequence classification model;
the BiLSTM model consists of two LSTM models, a forward model and a backward model, wherein the forward model reads the input sequence data in chronological order and the backward model reads the input sequence data in reverse order;
the LSTM model contains three gate control structures: a forget gate, an input gate and an output gate;
the calculation formula of the forget gate is as follows:
f_t = σ(W_f · [h_{t-1}, x_t] + b_f)
wherein f_t represents the output of the forget gate, σ represents the sigmoid activation function, W_f represents the weight matrix of the forget gate, h_{t-1} represents the hidden state of the previous time step, x_t represents the input of the current time step, and b_f represents the bias term of the forget gate;
the input gate is calculated as follows:
i_t = σ(W_i · [h_{t-1}, x_t] + b_i)
C̃_t = tanh(W_C · [h_{t-1}, x_t] + b_C)
wherein i_t represents the sigmoid activation part of the input gate, C̃_t represents the candidate values of the input gate, obtained by the tanh activation function, W_i and W_C respectively represent the weight matrices of the input gate and the candidate values, and b_i and b_C respectively represent the bias terms of the input gate and the candidate values;
the output gate is calculated as follows:
o_t = σ(W_o · [h_{t-1}, x_t] + b_o)
C_t = f_t ⊙ C_{t-1} + i_t ⊙ C̃_t
h_t = o_t ⊙ tanh(C_t)
wherein o_t represents the output of the output gate, W_o represents the weight matrix of the output gate, b_o represents the bias term of the output gate, C_t represents the cell state of the current time step, and h_t represents the hidden state of the current time step;
in the fourth step, the BiLSTM model adopts an Adam optimizer to optimize the model weights during training;
in the fourth step, the BiLSTM model adopts a cross entropy loss function as its loss function during training;
the cross entropy loss function calculates the cross entropy between the recognition result of the BiLSTM model and the real label, and its formula is as follows:
L = -Σ_i y_i · log(ŷ_i)
wherein y_i represents the one-hot encoding of the real label, ŷ_i represents the probability distribution of the recognition result, and i represents the i-th recognition result.
2. The method for identifying tooth positions and periapical periodontitis of an oral cavity curved surface fracture layer based on deep learning according to claim 1, wherein the step one comprises:
collecting oral cavity curved surface fracture slice data;
classifying and marking the oral cavity curved surface fracture slice data to obtain the oral cavity curved surface fracture slice data set.
3. The method for identifying tooth positions and periapical periodontitis of an oral cavity curved surface fracture layer based on deep learning according to claim 1, wherein in the second step, a Mask R-CNN model is adopted as the deep learning contour segmentation model.
4. The method for identifying tooth positions and periapical periodontitis of an oral cavity curved surface fracture layer based on deep learning according to claim 1, wherein in the second step, in the process of training the deep learning contour segmentation model by adopting the oral cavity curved surface fracture layer data set, the learning rate is adjusted according to a preset step size and an attenuation factor.
5. The method for identifying tooth positions and periapical periodontitis of an oral cavity curved surface fracture based on deep learning according to claim 1, wherein in the third step, contour data of tooth positions and periapical periodontitis is output, and after the contour data features are converted into sequence data, the sequence data is subjected to standardization processing.
CN202410766284.9A 2024-06-14 2024-06-14 Deep learning-based recognition method for tooth position and periapical periodontitis of oral cavity curved surface fracture layer Active CN118365964B (en)

Publications (2)

Publication Number Publication Date
CN118365964A CN118365964A (en) 2024-07-19
CN118365964B true CN118365964B (en) 2024-09-13





Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant