Oral screening of dental calculus, gingivitis and dental caries through segmentation on intraoral photographic images using deep learning
BMC Oral Health volume 24, Article number: 1287 (2024)
Abstract
Objective
Intraoral photographic images are instrumental in the early screening and clinical diagnosis of oral diseases, and there is growing interest in applying artificial intelligence to them. The purpose of this study is to investigate and evaluate a deep learning system designed to segment intraoral photographic images for the detection of dental caries, dental calculus, and gingivitis, and to assess the degree of dental calculus based on the overall features of the tooth surface and gingival margin.
Material and methods
This cross-sectional study collected 3,365 oral endoscopic images, randomly divided into a training set (2,019 images), a validation set (673 images), and a test set (673 images). The training and validation images were manually labeled. We propose an oral endoscopic image segmentation method based on Mamba (Oral-Mamba) together with an intelligent evaluation model of dental calculus degree, which segment two oral diseases, gingivitis and dental caries, as well as dental calculus regions, and intelligently evaluate the degree of dental calculus.
Results
Oral-Mamba demonstrated high segmentation accuracy, with accuracy of 0.83, 0.83, and 0.81 for gingivitis, dental caries, and dental calculus, respectively, surpassing the U-Net model in IoU, accuracy, and recall. Furthermore, Oral-Mamba runs 25% faster than U-Net. The degree classification accuracy of the intelligent evaluation model of dental calculus degree is 85%.
Conclusion
The proposed deep learning system can detect two types of oral diseases and dental calculus in photographic images from an intraoral camera and judge the degree of dental calculus. It offers a practical method to assist oral screening for dental caries, dental calculus, and gingivitis, with benefits such as intuitive use, time efficiency, cost-effectiveness, and ease of deployment.
Introduction
With urbanization and changes in living environments, the global prevalence of oral diseases is increasing, driven mainly by modifiable factors such as sugar consumption, tobacco and alcohol use, and personal hygiene practices. According to the 2022 Global Oral Health Status Report [1], nearly 3.5 billion people worldwide suffer from oral diseases, three-quarters of them in middle-income countries. Although most oral problems can be prevented or treated early, the shortage of dentists and the uneven distribution of medical resources make it harder for patients to access care, which can allow disease to worsen, increase the cost of treatment, and even endanger life.
The advent of oral endoscopy technology has opened up new possibilities for oral medicine. By providing high-resolution images of the interior of the mouth, this technology allows more precise observation of lesions and abnormalities and is widely used in the diagnosis, treatment, and evaluation of periodontal diseases [2,3,4,5,6,7,8]. Although oral endoscopes are more expensive than traditional oral examination equipment, they avoid problems such as a limited field of view and inaccurate positioning, improve diagnostic accuracy and treatment outcomes, and reduce complications and recovery time. With continued technological progress and increasing market competition, the price of endoscopes may gradually decrease. Recent studies applying deep learning in combination with oral endoscopy to the diagnosis of dental caries [9, 10] point toward new standardized techniques that make early detection of oral diseases more convenient, provide long-term benefits to patients, and alleviate medical shortages, especially in remote areas.
With the continuous development of deep learning, its ability to automatically learn deep, discriminative features from data has been applied to medical image segmentation [11]. Among fully supervised segmentation models, the end-to-end FCN is a typical example [12]. The U-Net architecture [13], which builds on the FCN, and its improved variants such as 3D U-Net [14], Res-UNet [15], U-Net++ [16], and UNet 3+ [17] are widely used because of their high precision in segmenting tissues, organs, and lesions in medical images [18, 19]. In the oral domain, U-Net has also been widely applied, for example to segment dental caries lesions by severity [20, 21]. However, U-Net has drawbacks: it requires large training datasets, otherwise it easily overfits, and it consumes substantial computing resources.
Recently, structured state space models (SSMs) have attracted wide attention because the Mamba model proposed by Gu and Dao [22] integrates the stepwise processing capability of RNNs with the global information processing capability of CNNs within the SSM framework, and introduces an innovative selection mechanism and a hardware-aware algorithm. Mamba addresses the computational efficiency problem of processing long sequences, demonstrating excellent performance and efficiency on various sequence data, especially language, audio, and genomics.
Improved Mamba-based models have also been applied in the medical field. U-Mamba [23] proposed a hybrid SSM-CNN model for 3D abdominal organ segmentation on CT and MR images, instrument segmentation in endoscopic images, and cell segmentation in microscopy images. SegMamba [24] combines a U-shaped structure with Mamba and is designed specifically for 3D medical image segmentation. Swin-UMamba [25] has shown superior performance on abdominal MRI, colonoscopy, and microscopy datasets. Mamba therefore has great potential in the medical field.
Therefore, this study aims to develop a system that incorporates an oral endoscopic image segmentation method based on Mamba (Oral-Mamba) and an intelligent evaluation model of dental calculus degree. Because deep learning models require large amounts of training data and no existing dataset is suitable for the segmentation of oral endoscopy images, we produced a dataset of more than 3,000 labeled oral endoscopy images. The system is accurate and fast and produces a visual analysis report of the lesion area, helping patients see their condition intuitively and seek treatment as early as possible.
Methods
Data and annotation
All oral endoscopy images in this study were collected from the Internet and screened by dentists based on actual clinical scenarios, yielding a total of 3,365 images. We verified the versatility of the deep learning method based on the similarity of oral endoscope images; because the system is intended for auxiliary screening, the method retains its generalization performance even when the population, sample, eligibility criteria, location, and dates change. The images shown at the top of Fig. 1 are examples from the collection, containing gingivitis, dental caries, or dental calculus. It is important to note that all image data used in this study are public or were authorized by patients, and all are anonymized and contain no patient information.
We annotated our dataset using an interactive semi-automatic image segmentation annotation tool based on the Segment Anything Model (SAM). Since our research targets gingivitis, dental caries, and dental calculus, the labeling work consists of outlining the contour of the lesion area of these three types with points in the image; the final mask forms a connected domain from these points, as shown at the bottom of Fig. 1 (the dental calculus marked in the images of this study refers to supragingival calculus). In addition, we add a degree label to the data with dental calculus. The basis for the degree judgment is: Degree 0: no soft deposits or dental calculus; Degree 1: a little soft deposit or dental calculus, covering not more than 1/3 of the tooth surface; Degree 2: dental calculus covering more than 1/3 but not more than 2/3 of the crown, with a small amount of subgingival calculus; Degree 3: dental calculus covering more than 2/3 of the crown, with more subgingival calculus. The dental calculus degree label is used to train the intelligent evaluation model of dental calculus degree. The labeling was carried out together with experienced clinicians from the dental clinic, and all labels were finally reviewed and approved by a review team composed of doctors. Prior to the annotation and review process, each clinician and reviewer was instructed and calibrated on the segmentation tasks using standardized protocols.
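As a rough illustration of how the coverage thresholds above map to degree labels, here is a minimal sketch (the function name, the use of a coverage estimate as input, and the omission of the subgingival criterion are simplifying assumptions, not the labeling tool actually used):

```python
def calculus_degree(coverage: float) -> int:
    """Map the estimated fraction of the crown covered by soft deposits or
    supragingival calculus (0.0-1.0) to a degree label following the rubric
    above. Subgingival calculus, which also distinguishes degrees 2 and 3
    clinically, is not modeled in this simplified sketch."""
    if coverage == 0.0:
        return 0          # Degree 0: no soft deposits or calculus
    if coverage <= 1 / 3:
        return 1          # Degree 1: no more than 1/3 of the tooth surface
    if coverage <= 2 / 3:
        return 2          # Degree 2: between 1/3 and 2/3 of the crown
    return 3              # Degree 3: more than 2/3 of the crown
```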
Deep learning method
The system comprises two stages: Oral-Mamba and the intelligent evaluation model of dental calculus degree. Oral-Mamba is used to detect gingivitis, dental caries, and dental calculus, and the intelligent evaluation model is used to judge the degree of dental calculus.
Oral-Mamba
Oral-Mamba (Fig. 2) is built on the U-Net network structure, with a Mamba block inserted in the bottleneck (Fig. 3a).
U-Net is used to obtain contextual and location information. U-Net itself is a simple structure composed of two parts: a downsampling first half for feature extraction and an upsampling second half, known as the encoder-decoder structure. The encoder gradually reduces the spatial dimensions through pooling layers, and the decoder gradually recovers the details and spatial dimensions of the object. Skip connections between the encoder and decoder help the decoder recover the details of the target. By integrating high-level and low-level features, U-Net helps us identify and locate gingivitis, dental caries, or dental calculus.
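To make the encoder-decoder layout with a Mamba bottleneck concrete, the following is a minimal PyTorch sketch; the layer widths, the two-level depth, and the `mamba_block` interface are illustrative assumptions rather than the exact configuration used in this study:

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with BatchNorm and ReLU, as in a standard U-Net stage."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class OralMambaSketch(nn.Module):
    """Illustrative U-Net-style encoder-decoder with a Mamba block in the bottleneck.

    `mamba_block` is any module mapping (B, L, C) -> (B, L, C); the bottleneck
    feature map is flattened into a token sequence before it and reshaped after.
    """
    def __init__(self, mamba_block, in_ch=3, n_classes=4, base=32):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = mamba_block                             # Mamba block on flattened tokens
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)                    # skip connection concatenated here
        self.head = nn.Conv2d(base, n_classes, 1)                 # per-pixel class logits

    def forward(self, x):
        s1 = self.enc1(x)                                         # high-resolution features (skip)
        s2 = self.enc2(self.pool(s1))                             # downsampled features
        b, c, h, w = s2.shape
        tokens = self.bottleneck(s2.flatten(2).transpose(1, 2))   # (B, H*W, C) token sequence
        s2 = tokens.transpose(1, 2).reshape(b, c, h, w)
        d1 = self.dec1(torch.cat([self.up(s2), s1], dim=1))       # upsample + skip connection
        return self.head(d1)
```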
The Mamba block (Fig. 3a) is the core module of the Oral-Mamba network structure and contains a selective state space model. We first introduce the state space model (SSM). The SSM is a recent class of sequence models for deep learning, inspired by a continuous system that maps an input x(t) to an output y(t) through a hidden state h(t) and predicts the next state from the input:
$$\begin{aligned} h'(t)=Ah(t)+Bx(t) \end{aligned}$$(1)
$$\begin{aligned} y(t)=Ch(t) \end{aligned}$$(2)
where \(A\in R^{N\times N}\), \(B, C \in R^{N}\), and N is the number of variables in the state space.
Matrix A describes how the internal states are connected, since they form the underlying representation of the system. It is initialized on the basis of HiPPO theory. HiPPO combines the concepts of recurrent memory and optimal polynomial projections; this projection technique significantly improves recurrent memory and addresses long-range dependence when processing long sequences. By visualizing equations (1) and (2), we obtain the architecture shown in Fig. 3b.
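The exact initialization is not reproduced here; for reference, the standard HiPPO-LegS matrix commonly used for this purpose (shown only to make the idea concrete, and an assumption rather than the paper's stated choice) has the form:
$$A_{nk}=-\begin{cases}\sqrt{(2n+1)(2k+1)}, & n>k\\ n+1, & n=k\\ 0, & n<k\end{cases}$$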
Since oral endoscopy photos are discrete inputs, we also discretize the model using the zero-order hold (ZOH) technique:
$$\begin{aligned} \bar{A}=\exp (\Delta A) \end{aligned}$$(3)
$$\begin{aligned} \bar{B}=(\Delta A)^{-1}(\exp (\Delta A)-I)\cdot \Delta B \end{aligned}$$(4)
After discretization, the form of computation is selected according to the task: during training we use a convolutional representation that can be parallelized, and during inference we use an efficient recurrent representation, which achieves computation with linear complexity:
$$\begin{aligned} h_t=\bar{A}h_{t-1}+\bar{B}x_t,\qquad y_t=Ch_t \end{aligned}$$(5)
$$\begin{aligned} \bar{K}=(C\bar{B},\,C\bar{A}\bar{B},\,\dots ,\,C\bar{A}^{L-1}\bar{B}),\qquad y=x*\bar{K} \end{aligned}$$(6)
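As a concrete illustration of the zero-order-hold discretization in equations (3)-(4) and the recurrent form in equation (5), here is a minimal sketch assuming a diagonal A matrix and a single input channel (function names and shapes are illustrative, not the Mamba implementation itself):

```python
import torch

def zoh_discretize(A, B, delta):
    """Zero-order-hold discretization of a diagonal continuous SSM.

    A, B: (N,) diagonal state and input parameters (entries of A assumed
    nonzero, typically negative); delta: scalar step size.
    Returns the discrete A_bar, B_bar as in equations (3)-(4)."""
    A_bar = torch.exp(delta * A)
    B_bar = (A_bar - 1.0) / A * B        # (Delta*A)^{-1} (exp(Delta*A) - I) * Delta*B for diagonal A
    return A_bar, B_bar

def ssm_scan(A_bar, B_bar, C, x):
    """Recurrent form of equation (5): h_t = A_bar*h_{t-1} + B_bar*x_t, y_t = C.h_t."""
    h = torch.zeros_like(A_bar)
    ys = []
    for x_t in x:                        # x: (L,) input sequence, processed step by step
        h = A_bar * h + B_bar * x_t
        ys.append((C * h).sum())
    return torch.stack(ys)
```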
To let the model filter out irrelevant information and retain relevant information indefinitely while keeping the characteristics of the SSM, Mamba replaces the SSM with a selective SSM, which processes information selectively and summarizes the entire history into a compact, highly effective state. A hardware-aware algorithm is also introduced to further improve computational efficiency.
In the Mamba block, the input data is first divided into two branches. In the first branch, the data passes through a linear layer for a basic transformation and is then passed to a SiLU activation function; SiLU is a smooth non-linear function that adaptively adjusts the activation level based on the input, helping the model capture more complex feature relationships. In the second branch, the data also passes through a linear layer and then through a depthwise separable convolution layer designed to efficiently learn spatial features. The output of the depthwise separable convolution is further processed by a SiLU activation and finally fed into the selective state space model. The outputs of the two branches are then combined by element-wise multiplication, which fuses the information from both branches while preserving each branch's unique contribution. Finally, a linear layer projects the combined features to the output.
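A minimal PyTorch sketch of the two-branch block just described follows; the selective SSM is abstracted as a callable, and the expansion factor and kernel size are illustrative assumptions:

```python
import torch.nn as nn

class MambaBlockSketch(nn.Module):
    """Two-branch Mamba-style block: a gating branch and a conv + selective-SSM branch."""
    def __init__(self, dim, selective_ssm, expand=2, conv_kernel=4):
        super().__init__()
        inner = expand * dim
        self.in_proj_gate = nn.Linear(dim, inner)       # branch 1: linear -> SiLU (gate)
        self.in_proj_main = nn.Linear(dim, inner)       # branch 2: linear -> depthwise conv -> SiLU -> SSM
        self.dw_conv = nn.Conv1d(inner, inner, conv_kernel,
                                 padding=conv_kernel - 1, groups=inner)  # depthwise separable conv
        self.act = nn.SiLU()
        self.ssm = selective_ssm                        # selective state space model, (B, L, C) -> (B, L, C)
        self.out_proj = nn.Linear(inner, dim)           # final linear layer reconciling the features

    def forward(self, x):                               # x: (B, L, dim)
        gate = self.act(self.in_proj_gate(x))
        main = self.in_proj_main(x).transpose(1, 2)     # (B, inner, L) layout for Conv1d
        main = self.dw_conv(main)[..., : x.shape[1]]    # crop padding back to sequence length L
        main = self.act(main.transpose(1, 2))           # back to (B, L, inner)
        main = self.ssm(main)
        return self.out_proj(gate * main)               # multiplicative fusion of the two branches
```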
Intelligent evaluation model of dental calculus degree
From the perspective of clinical diagnosis, the degree of dental calculus is assessed based on the adhesion of the calculus to the tooth surface and the gingival margin. We therefore designed an intelligent evaluation algorithm for the degree of dental calculus based on this adhesion (Fig. 4). The intelligent evaluation model of dental calculus degree is built on a CNN architecture. Lesion areas marked as dental calculus by Oral-Mamba proceed to the next stage, where the degree of dental calculus is judged by this evaluation algorithm. We used the Segment Anything Model (SAM) [26], a pioneering foundation model for promptable segmentation that has recently gained widespread attention, to automatically label the tooth surface and gingival margin in the original image containing dental calculus. The image with the dental calculus mask and the image with the tooth surface and gingival margin mask are then fused and fed into the intelligent evaluation model. After features are extracted through convolution layers, activation functions, and pooling layers, a fully connected layer classifies the degree of dental calculus, producing a probability distribution over degrees 0, 1, 2, and 3. The degree with the highest probability is the final output.
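A sketch of this classifier is given below: the calculus mask and the SAM-derived tooth-surface/gingival-margin mask are stacked with the image as input channels, passed through convolution, activation, and pooling layers, and classified into four degrees by a fully connected layer (the channel counts and layer sizes are assumptions for illustration):

```python
import torch
import torch.nn as nn

class CalculusDegreeClassifier(nn.Module):
    """CNN that outputs a probability distribution over calculus degrees 0-3."""
    def __init__(self, in_ch=5, n_degrees=4):
        # in_ch = 3 RGB channels + calculus mask + tooth-surface/gingival-margin mask (illustrative)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_degrees)      # fully connected layer over pooled features

    def forward(self, image, calculus_mask, surface_margin_mask):
        x = torch.cat([image, calculus_mask, surface_margin_mask], dim=1)  # fuse inputs as channels
        logits = self.classifier(self.features(x).flatten(1))
        return logits.softmax(dim=1)                    # probabilities for degrees 0, 1, 2, 3
```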
We feed the image into the network. First, Oral-Mamba performs medical image segmentation and detects gingivitis, dental caries, or dental calculus lesion areas. For images containing dental calculus lesions, the degree of dental calculus is then determined by the intelligent evaluation model. Finally, visual results of the gingivitis, dental caries, and dental calculus lesion areas and the degree of dental calculus are output (Fig. 5).
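Putting the two stages together, the inference flow reads roughly as follows; the dictionary interface, the single-image batch, and the function names are placeholders for the trained models and the visualization step, not the actual API:

```python
def screen_image(image, oral_mamba, degree_model, sam_annotator):
    """Two-stage screening: segment lesions, then grade any calculus found (batch of one assumed)."""
    lesion_masks = oral_mamba(image)                    # masks for gingivitis, caries, calculus
    report = {"lesions": lesion_masks, "calculus_degree": None}
    if lesion_masks["calculus"].any():                  # only grade images with calculus lesions
        surface_margin = sam_annotator(image)           # SAM-derived tooth surface / gingival margin mask
        probs = degree_model(image, lesion_masks["calculus"], surface_margin)
        report["calculus_degree"] = int(probs.argmax(dim=1))
    return report
```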
Training strategy of deep learning model
We randomly assigned the 3,365 acquired oral endoscopy images to three subsets in a 6:2:2 ratio: 60% of the dataset is used for training, 20% for validation, and the remaining 20% for testing, so the training set contains 2,019 images and the validation and test sets contain 673 images each. For training, K-fold cross-validation is applied, with the validation fold rotated in turn.
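A sketch of the 6:2:2 random split described above (the fixed seed and the absence of stratification are assumptions):

```python
import random

def split_dataset(image_paths, seed=0):
    """Randomly split image paths into 60% train / 20% validation / 20% test."""
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)
    n = len(paths)
    n_train, n_val = int(0.6 * n), int(0.2 * n)
    return (paths[:n_train],                   # e.g. 2,019 of 3,365 images
            paths[n_train:n_train + n_val],    # 673 images
            paths[n_train + n_val:])           # 673 images
```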
For each training run, the network was trained for 400 epochs and optimized using stochastic gradient descent with an initial learning rate of 1e-2. During training, we adopted a combination of Dice loss and cross-entropy to segment and classify areas of gingivitis, dental caries, or dental calculus lesions, which provides good performance across various tasks and scenarios [27]. The loss function is defined as:
$$\begin{aligned} L=L_{Dice}+L_{CE} \end{aligned}$$(7)
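A sketch of the compound Dice plus cross-entropy loss is shown below; the equal weighting of the two terms and the inclusion of the background class in the Dice term are assumptions, since the text only states that the two losses are combined:

```python
import torch.nn.functional as F

def dice_ce_loss(logits, targets, eps=1e-6):
    """Compound loss: soft Dice over all classes plus pixel-wise cross-entropy.

    logits: (B, C, H, W) raw class scores; targets: (B, H, W) integer labels."""
    ce = F.cross_entropy(logits, targets)
    probs = logits.softmax(dim=1)
    one_hot = F.one_hot(targets, num_classes=logits.shape[1]).permute(0, 3, 1, 2).float()
    dims = (0, 2, 3)                                   # sum over batch and spatial dimensions
    intersection = (probs * one_hot).sum(dims)
    dice = (2 * intersection + eps) / (probs.sum(dims) + one_hot.sum(dims) + eps)
    return ce + (1 - dice.mean())                      # equal weighting assumed
```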
The model was trained on an NVIDIA RTX 4060 Ti GPU and a 14th Gen Intel(R) Core(TM) i7-14700K CPU, using the PyTorch 1.11 deep learning framework with CUDA 11.4 and cuDNN 8.2.
Statistical analysis
To evaluate the overall performance of the Oral-Mamba architecture, we compared the model predictions on the test set with the ground truth labeled by the physicians. We also used U-Net, a popular medical image segmentation network, as a baseline for comparison with Oral-Mamba. U-Net consists of an encoder path with five levels for capturing context and a symmetric decoder path that restores the output to the input image resolution; it has about 7.7 million trainable parameters.
The metrics commonly used for evaluation include precision, accuracy, recall, and IoU, all of which are pixel-level comparisons that provide a comprehensive evaluation of the segmentation results.
1. Intersection over Union (IoU): The IoU measures the degree of overlap between the predicted segmentation and the ground-truth label. It is defined as the ratio of the intersection of the predicted segmentation and the ground-truth label to their union. The closer the IoU is to 1, the greater the overlap between the prediction and the label, and the better the segmentation. It is calculated as:
$$\begin{aligned} IoU=\frac{TP}{TP+FP+FN} \end{aligned}$$(8)
2. Recall: Recall measures the proportion of true positive cases that are correctly predicted. In image segmentation, recall is the ratio of correctly predicted positive pixels to all ground-truth positive pixels; a high recall means the algorithm captures more true positives and avoids omissions. It is calculated as:
$$\begin{aligned} Recall=\frac{TP}{TP+FN} \end{aligned}$$(9)
3. Precision: Precision measures the proportion of predicted positive examples that are actually positive. In image segmentation, precision is the ratio of true positive pixels to all pixels predicted as positive; high precision means the algorithm accurately identifies positive pixels and reduces misclassification. It is calculated as:
$$\begin{aligned} Precision=\frac{TP}{TP+FP} \end{aligned}$$(10)
4. Accuracy: Accuracy measures the ratio of correctly classified pixels to the total number of pixels. It is calculated as:
$$\begin{aligned} Accuracy=\frac{TP+TN}{TP+TN+FP+FN} \end{aligned}$$(11)
Here, TP is the number of true positive pixels (correctly classified as positive), FP the number of false positive pixels (incorrectly classified as positive), FN the number of false negative pixels (incorrectly classified as negative), and TN the number of true negative pixels (correctly classified as negative).
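For concreteness, the four metrics computed directly from these pixel counts (a direct transcription of equations (8)-(11); non-zero denominators are assumed):

```python
def segmentation_metrics(tp, fp, fn, tn):
    """Pixel-level metrics from the confusion counts defined above."""
    return {
        "iou": tp / (tp + fp + fn),
        "recall": tp / (tp + fn),
        "precision": tp / (tp + fp),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
    }
```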
Results
Segmentation performance of Oral-Mamba
The segmentation performance of Oral-Mamba and U-Net is compared in four tables. Tables 1, 2, 3, and 4 show the quantitative segmentation results of the U-Net and Oral-Mamba models for gingivitis, dental caries, and dental calculus.
From these results, Oral-Mamba shows high performance in the segmentation of gingivitis, dental caries, and dental calculus. The largest improvement is in dental caries segmentation, where IoU increased from 0.59 to 0.71, recall from 0.75 to 0.83, precision from 0.74 to 0.84, and accuracy from 0.75 to 0.83. In contrast, U-Net's IoU is no higher than 0.64, its recall no higher than 0.81, its precision below 0.80, and its accuracy no higher than 0.81. Across all segmentation metrics for gingivitis, dental caries, and dental calculus, Oral-Mamba outperforms U-Net.
The test results were qualitatively verified as shown in Fig. 6, where the second row shows the ground truth (GT) marked by the dentist and the third row shows the model predictions. Oral-Mamba generates accurate segmentation masks for gingivitis, dental caries, and dental calculus. Although the predicted boundaries may differ slightly in size from the GT, the locations of gingivitis, dental caries, and dental calculus are all correctly detected, and a large proportion of their areas is correctly covered.
Accuracy of the intelligent evaluation model of dental calculus degree
The intelligent evaluation model of dental calculus degree is based on assessing the calculus attached to the tooth surface and the gingival margin, and classifies dental calculus into degrees 0, 1, 2, and 3 with relatively high accuracy, exceeding 85%. Figure 7 presents the probability diagrams produced by the model for several images.
Discussion
Easy to operate, with digital storage and the ability to capture high-quality images, the oral endoscope has been introduced into oral disease detection, lowering the threshold for screening and helping to prevent the deterioration of oral diseases. It is therefore widely used in stomatology as a visual inspection tool for dental caries detection [28, 29], is applied in periodontal treatment, and can help diagnose suspected subgingival conditions such as dental caries and root fractures. The development of artificial intelligence has prompted the combination of deep learning with medical equipment to alleviate the shortage of medical resources. At the same time, artificial intelligence serves as a tool for the preliminary assessment of patient conditions, providing low-cost, convenient, and professional advice and reducing patients' financial burden and mental stress. The rapid advancement of intelligent diagnosis and treatment can benefit both doctors and patients.
At present, artificial intelligence technology represented by deep learning is widely used in stomatology, has achieved significant results, and serves as a reliable, standardized assistant for diagnosing and treating oral diseases. It also offers other benefits, such as speed, efficiency, and cost reduction. In particular, the U-Net framework has achieved excellent results in various medical imaging tasks, such as segmenting dental caries by severity. Li et al. [30] used deep learning to identify tooth types such as incisors, canines, premolars, and molars. However, several obstacles remain when using deep learning to diagnose oral and dental problems: most diagnoses rely on panoramic X-rays, making the examination expensive; U-Net requires large training datasets and consumes substantial computational resources; and high-quality labeled oral image datasets are lacking. These shortcomings motivated us to build high-quality datasets and develop a novel medical image segmentation architecture.
Therefore, we propose a competitive system that incorporates Oral-Mamba and an intelligent evaluation model of dental calculus degree, capturing strong long-range dependencies while maintaining linear computational complexity. Specifically, the algorithm combines local feature capture with efficient long-range modeling, making it more efficient in GPU memory and inference time for high-resolution images. We used Oral-Mamba to segment intraoral photographic images and detect gingivitis, dental caries, and dental calculus. We proposed an intelligent evaluation model that judges the degree of dental calculus based on its adhesion to the tooth surface and gingival margin, combining a small domain-specific model with the large SAM model and clinical experience. In addition, we produced high-quality datasets for model training. The system can quickly and efficiently generate a visual analysis report of the patient's dental calculus, gingivitis, and dental caries lesion areas, helping the patient see the diseases intuitively and seek treatment as early as possible.
The segmentation IoU of Oral-Mamba ranges from 0.64 to 0.71, its precision is around 0.80, and its recall and accuracy are above 0.80, outperforming U-Net. Among the three conditions, segmentation accuracy was highest for dental caries and lowest for dental calculus. In terms of training speed, Oral-Mamba runs 25% faster than U-Net. Based on the adhesion of dental calculus to the tooth surface and gingival margin, the intelligent evaluation model of dental calculus degree performs well, reaching an accuracy of up to 85%.
Although our system has achieved some success in terms of speed and accuracy, there are still limitations. The first is the limitation of the oral endoscope itself, which can only inspect the tooth surface; lesions inside the tooth, including the pulp, dentin, pulp cavity, and other structures, cannot be observed directly and require X-ray, CT, and other imaging techniques for more comprehensive information. In addition, the oral endoscope is easily affected by lighting: the quality and direction of light influence imaging quality and clarity, which in turn affect our model's judgment of gingivitis and other conditions. Teeth with heavy tea or smoke stains can also affect the judgment of dental calculus.
Nevertheless, this study presents the first system to merge Mamba and U-Net for the segmentation of gingivitis, dental caries, and dental calculus regions, capturing strong long-range dependencies while maintaining linear computational complexity, and it also classifies the degree of dental calculus. A diagnosis may vary with the experience and skill of the clinician, but the system does not; it is a visual diagnostic aid that encodes the diagnostic opinions of professional doctors. Applying this system in dental practice can alleviate the shortage of medical resources, allow patients to undergo early oral screening more conveniently and cheaply, and enable them to understand their condition more intuitively and earlier.
Conclusion
This study addresses the difficulty of accessing care and of early prevention of oral diseases caused by the shortage of dentists and the uneven distribution of medical resources. To this end, we propose a system and build an oral endoscopic dataset for model training, achieving accurate segmentation of gingivitis, dental caries, and dental calculus with accuracy ranging from 81% to 83%; the degree of dental calculus is also classified with 85% accuracy. This provides a useful approach for combining deep learning with oral endoscopy for the early prevention and treatment of oral diseases.
Data availability
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request. The dataset is available at: https://drive.google.com/drive/folders/1vo_qv3EF9eG4Q2dPvtb_rXb4ttBkYJq_?usp=sharing.
References
World Health Organization, et al. Global oral health status report: towards universal health coverage for oral health by 2030. Regional summary of the African Region. World Health Organization; 2023.
Wilson TG Jr. The positive relationship between excess cement and peri-implant disease: a prospective clinical endoscopic study. J Periodontol. 2009;80(9):1388–92.
Forgie A, Pine C, Pitts N. The assessment of an intra-oral video camera as an aid to occlusal caries detection. Int Dent J. 2003;53(1):3–6.
Erten H, Uçtasli MB, Akarslan ZZ, Uzun O, Baspinar E. The assessment of unaided visual examination, intraoral camera and operating microscope for the detection of occlusal caries lesions. Oper Dent. 2005;30(2):190–4.
Park EY, Cho H, Kang S, Jeong S, Kim EK. Caries detection with tooth surface segmentation on intraoral photographic images using deep learning. BMC Oral Health. 2022;22(1):573.
Vinayahalingam S, Kempers S, Schoep J, Hsu TMH, Moin DA, van Ginneken B, et al. Intra-oral scan segmentation using deep learning. BMC Oral Health. 2023;23(1):643.
Albano D, Galiano V, Basile M, Di Luca F, Gitto S, Messina C, et al. Artificial intelligence for radiographic imaging detection of caries lesions: a systematic review. BMC Oral Health. 2024;24(1):274.
Li X, Zhao D, Xie J, Wen H, Liu C, Li Y, et al. Deep learning for classifying the stages of periodontitis on dental images: a systematic review and meta-analysis. BMC Oral Health. 2023;23(1):1017.
Moutselos K, Berdouses E, Oulis C, Maglogiannis I. Recognizing occlusal caries in dental intraoral images using deep learning. In: 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Berlin, 2019, pp. 1617–20. https://doi.org/10.1109/EMBC.2019.8856553.
Askar H, Krois J, Rohrer C, Mertens S, Elhennawy K, Ottolenghi L, et al. Detecting white spot lesions on dental photography using deep learning: A pilot study. J Dent. 2021;107:103615.
Shi J, Wang L, Wang S, Chen Y, Wang Q, Wei D, et al. Applications of deep learning in medical imaging: a survey. J Image Graph. 2020;25(10):1953–81.
Long J, Shelhamer E, Darrell T. Fully convolutional networks for semantic segmentation. In: Proceedings of the IEEE conference on computer vision and pattern recognition. 2015. pp. 3431–40.
Ronneberger O, Fischer P, Brox T. U-Net: Convolutional networks for biomedical image segmentation. In: Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III. Springer International Publishing; 2015. pp. 234–41.
Çiçek Ö, Abdulkadir A, Lienkamp SS, et al. 3D U-Net: Learning dense volumetric segmentation from sparse annotation. In: Medical Image Computing and Computer-Assisted Intervention – MICCAI 2016: 19th International Conference, Athens, Greece, October 17-21, 2016, Proceedings, Part II. Springer International Publishing; 2016. pp. 424–32.
Xiao X, Lian S, Luo Z, et al. Weighted Res-UNet for high-quality retina vessel segmentation. In: 2018 9th International Conference on Information Technology in Medicine and Education (ITME). IEEE; 2018. pp. 327–31.
Zhou Z, Rahman Siddiquee MM, Tajbakhsh N, et al. UNet++: A nested U-Net architecture for medical image segmentation. In: Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support: DLMIA 2018 and ML-CDS 2018, Held in Conjunction with MICCAI 2018, Granada, Spain, September 20, 2018, Proceedings. Springer International Publishing; 2018. pp. 3–11.
Huang H, Lin L, Tong R, et al. UNet 3+: A full-scale connected UNet for medical image segmentation. In: ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE; 2020. pp. 1055–9.
Litjens G, Kooi T, Bejnordi BE, Setio AAA, Ciompi F, Ghafoorian M, et al. A survey on deep learning in medical image analysis. Med Image Anal. 2017;42:60–88.
Yi X, Walia E, Babyn P. Generative adversarial network in medical imaging: A review. Med Image Anal. 2019;58:101552.
Zhu H, Cao Z, Lian L, et al. CariesNet: a deep learning approach for segmentation of multi-stage caries lesion from oral panoramic X-ray image. Neural Comput Appl. 2023;35:16051–9. https://doi.org/10.1007/s00521-021-06684-2.
Lian L, Zhu T, Zhu F, Zhu H. Deep learning for caries detection and classification. Diagnostics. 2021;11(9):1672.
Gu A, Dao T. Mamba: Linear-time sequence modeling with selective state spaces. arXiv preprint arXiv:2312.00752. 2023.
Ma J, Li F, Wang B. U-Mamba: Enhancing long-range dependency for biomedical image segmentation. arXiv preprint arXiv:2401.04722. 2024.
Xing Z, Ye T, Yang Y, Liu G, Zhu L. SegMamba: Long-range sequential modeling Mamba for 3D medical image segmentation. arXiv preprint arXiv:2401.13560. 2024.
Liu J, et al. Swin-umamba: Mamba-based unet with imagenet-based pretraining. In: Linguraru MG, et al. Medical Image Computing and Computer Assisted Intervention – MICCAI 2024. MICCAI 2024. Lect Notes Comput Sci. 2024;15009. https://doi.org/10.1007/978-3-031-72114-4_59.
Kirillov A, Mintun E, Ravi N, Mao H, Rolland C, Gustafson L, et al. Segment anything. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023. pp. 4015–26.
Ma J, Chen J, Ng M, Huang R, Li Y, Li C, et al. Loss odyssey in medical image segmentation. Med Image Anal. 2021;71:102035.
Pentapati KC, Siddiq H. Clinical applications of intraoral camera to increase patient compliance-current perspectives. Clin Cosmet Investig Dent. 2019;11:267–78. https://doi.org/10.2147/CCIDE.S192847.
Snyder T. The intraoral camera: a popular computerized tool. J Am Dent Assoc. 1995;126:177–8.
Li Z, Wang SH, Fan RR, Cao G, Zhang YD, Guo T. Teeth category classification via seven-layer deep convolutional neural network with max pooling and global average pooling. Int J Imaging Syst Technol. 2019;29(4):577–83.
Clinical trial number
Not applicable.
Funding
This research was supported by the Startup Foundation for Introducing Talent of NUIST (No. 2023r124, No. 2024r072) and Enterprise Cooperation Project (No. 2023h852).
Author information
Contributions
Y.L. and Y.C. designed the system for this study, participated in the literature search, code design, and drafted the manuscript. Y.S. was responsible for the collection of data sets, communication with dental clinicians, and statistical analysis. D.C. participated in code debugging and statistical analysis, and revised the manuscript. Y.L. and N.Z. provided the funding. All authors carefully revised and approved the final manuscript.
Ethics declarations
Ethics approval and consent to participate
All intraoral camera image data collected were from publicly available images on the Internet. Link: https://www.kaggle.com/datasets/salmansajid05/oral-diseases. According to the requirements of Article 32 of the national regulation named “Measures for Ethical Review of Life Sciences and Medical Research Involving Humans (2023)” (Link: https://www.gov.cn/zhengce/zhengceku/2023-02/28/content_5743658.htm) of the National Health Commission, the Ministry of Education, the Ministry of Science and Technology, and the National Administration of Traditional Chinese Medicine, research involving human life sciences and medical research using legally obtained public data, or conducting research by observing data generated without interfering with public behavior, can be exempted from ethical review. Ethical approval for this study was waived.
Consent for publication
Not applicable.
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Liu, Y., Cheng, Y., Song, Y. et al. Oral screening of dental calculus, gingivitis and dental caries through segmentation on intraoral photographic images using deep learning. BMC Oral Health 24, 1287 (2024). https://doi.org/10.1186/s12903-024-05072-1
DOI: https://doi.org/10.1186/s12903-024-05072-1