Original Article
ARTICLE IN PRESS
doi: 10.25259/APOS_153_2024

An artificial intelligence-based screening tool for orthognathic surgery using MKG angle in lateral cephalograms

Department of Advanced General Dentistry, Faculty of Dentistry, Mahidol University, Bangkok, Thailand.
Department of General Dentistry, Faculty of Dentistry, Srinakharinwirot University, Bangkok, Thailand.

*Corresponding author: Sasipa Thiradilok, Department of Advanced General Dentistry, Faculty of Dentistry, Mahidol University, Bangkok, Thailand. sasipa.thi@mahidol.ac.th

Licence
This is an open-access article distributed under the terms of the Creative Commons Attribution-Non Commercial-Share Alike 4.0 License, which allows others to remix, transform, and build upon the work non-commercially, as long as the author is credited and the new creations are licensed under the identical terms.

How to cite this article: Limcharoen N, Thiradilok S, Thanathornwong B, Manopatanakul S. An artificial intelligence-based screening tool for orthognathic surgery using MKG angle in lateral cephalograms. APOS Trends Orthod. doi: 10.25259/APOS_153_2024

Abstract

Objectives:

This study aimed to develop a clinical decision support system utilizing the MKG angle – derived from points M, K, and G – as a novel neural network parameter for evaluating sagittal maxillo-mandibular discrepancy. This system serves as a pre-operative screening tool for predicting the need for orthognathic surgery.

Material and Methods:

This retrospective study collected 494 digital lateral cephalograms. MKG angle values extracted from these cephalograms were analyzed using a Keypoint Region-based Convolutional Neural Network integrated with Detectron2 for object detection. Analysis was conducted using Keras software to facilitate decision-making regarding orthognathic surgery. The model’s output ranged from 0 to 1, with values closer to 1 indicating a stronger recommendation for orthognathic surgery. A training loss graph was used to monitor the model’s performance over epochs, while a confusion matrix evaluated the model’s accuracy and predictive capabilities.

Results:

The training loss value for the object detection model was 3.0510. Model performance was further evaluated using metrics such as root mean square error (RMSE) and percentage of detected joints (PDJ). The RMSE was measured at 2.68 pixels, while the PDJ, with a threshold of 0.05, achieved a value of 0.99, indicating a high level of accuracy. The developed system achieved an orthognathic surgery diagnosis accuracy of 70.41%, with a training loss value of 0.6163. The evaluation revealed instances of misdiagnosis; out of 98 cases, 29 were identified as misdiagnosed through a confusion matrix. The model’s sensitivity and specificity were measured at 72.5% and 68.97%, respectively.

Conclusion:

A supplementary tool for orthognathic screening, utilizing two-dimensional digital lateral cephalometry images and MKG angle as a parameter, was developed by merging a neural network model with clinical decision-making.

Keywords

Cephalometry
MKG angle
Orthognathic surgery
Artificial intelligence
Neural network

INTRODUCTION

Malocclusion and dentofacial deformities are significant dental problems that impact both esthetics and function. The goal of orthodontics is to prevent, intercept, and treat malocclusion and dentofacial abnormalities.[1] In recent years, an increasing number of patients have been undergoing orthognathic surgery to reposition the jaw. Orthognathic surgery is necessary to correct severe occlusal discrepancies and dentofacial deformities that cannot be adequately addressed by orthodontic treatment alone. The predictability of orthognathic surgery is crucial for treatment planning and obtaining informed consent, and it must, therefore, be discussed with the patient before treatment.[2]

Patients typically receive initial information and advice about orthodontic treatment from general dentists. Although they are eventually referred to a specialist, general practitioners play a vital role in communication between patients and orthodontists.[3] Consequently, a precise pre-operative screening tool for orthognathic surgery is essential for informing patients about the invasiveness of the procedure and financial considerations.

Before orthognathic surgery, cephalometric analysis is a standard tool for orthodontic screening. Previously, cephalometric tracings were performed manually, a process that is time-consuming and requires specialists in orthodontics and maxillofacial surgery for intricate interpretations. Manual tracing also has disadvantages, including errors in tracing, landmark identification, and measurement reproducibility. To address these limitations, advancements in computer technology have led to the digitalization of cephalometric analysis. Similarly, clinical decision support systems have been developed to enhance clinicians’ complex decision-making processes. These systems have been applied in orthodontics for various tasks. For instance, artificial intelligence (AI) assists with treatment planning for orthodontic treatment[4,5] and orthognathic surgery diagnosis.[6,7]

Various angular and linear cephalometric measurements, such as the ANB angle and Wits appraisal, have been suggested to evaluate the sagittal maxillomandibular discrepancies. However, these parameters often rely on unstable landmarks due to growth or poor reproducibility. In contrast, the MKG angle is a novel parameter that does not depend on uncertain landmarks or the functional occlusal plane. It utilizes three stable points: point M (the midpoint of the premaxilla), point K (the lowest point on the outline of the key ridge), and point G (the midpoint of the mandibular symphysis).[8]

This study applied MKG parameters to artificial neural network (ANN) machine-learning algorithms on two-dimensional (2D) lateral cephalograms to develop a decision-making system for orthognathic diagnosis. Therefore, the objective of this study was to develop a clinical decision support system using the MKG angle as a neural network parameter, serving as a pre-operative screening tool for predicting the need for orthognathic surgery.

MATERIAL AND METHODS

This study was divided into four main parts: Ethics approval, data collection, object detection, and model construction.

Data collection

The samples used in this retrospective study consisted of 494 2D digital lateral cephalograms taken with the CS9000C or the Kodak 9000C (Carestream Health, Inc., Atlanta, GA, USA) from patients who visited the Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Mahidol University, Bangkok, Thailand, between 2012 and 2021. The inclusion criteria were Thai ethnicity, age between 20 and 40 years, and the presence of permanent dentition. Exclusion criteria included individuals with missing teeth or unerupted permanent teeth, except for third molars, as well as those with recognizable craniofacial abnormalities, deformities, or prior orthodontic treatment, plastic surgery, or other maxillofacial surgery. The MKG angle, based on the definitions provided in [Table 1] and utilizing three skeletal anatomical landmarks, is a novel parameter for assessing the sagittal relationship between the maxilla and mandible. This angle was measured by investigators and specialists for all subjects.

Table 1: The definitions of three skeletal anatomical landmarks of MKG angle.
Anatomical landmark  Definition
M  Midpoint of the premaxilla. This point is identified by the center of the largest circle placed tangent to the premaxilla’s anterior, superior (represented by the nasal floor), and palatal surfaces.
K  The lowest point of the infrazygomatic crest, a strong bone buttress that serves as a support for the maxillary first molar.
G  Center of the largest circle placed tangent to the mandibular symphysis’s internal inferior, anterior, and posterior surfaces.

To determine the centers of the premaxilla and mandibular symphysis, a template consisting of circles with diameters increasing in 0.5-mm increments was created. The center of the template’s circle was used to determine points M and G in the tracings. For all samples, the automatic digital cephalometric tracing tool WebCeph software (AssembleCircle Corp., Gyeonggi-do, Republic of Korea) was used to trace three reference points (point M, point K, and point G) and to measure the MKG angle from each patient’s digital lateral cephalometric radiograph. The tracings and measurements performed by this software were compared to conventional tracing, which is considered one of the gold-standard diagnostic aids in orthodontics.[9] All tracings and measurements were performed by three calibrated investigators – one investigator had previously received training from an orthodontist, while the other two were experienced orthodontists: SM, who has 25 years of experience and is currently an Associate Professor, and ST, who has 20 years of experience and is currently an Assistant Professor. To examine measurement errors, the investigators independently took measurements under the same environmental conditions. All images were identified by codes and analyzed in random order to ensure blinding of the investigators.
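As a worked illustration of the measurement itself, the short sketch below computes the MKG angle from three landmark coordinates. The coordinates are hypothetical, and placing the vertex at point K (as the name MKG suggests) is an assumption rather than a detail stated in the paper.

```python
import numpy as np

def angle_at_vertex(p_m, p_k, p_g):
    """Angle (degrees) at the middle point K between rays K->M and K->G."""
    m, k, g = map(np.asarray, (p_m, p_k, p_g))
    v1, v2 = m - k, g - k
    cos_theta = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Hypothetical landmark coordinates (pixels) traced on a lateral cephalogram
point_m = (312.0, 405.0)   # midpoint of the premaxilla
point_k = (268.0, 452.0)   # lowest point of the infrazygomatic crest (key ridge)
point_g = (330.0, 610.0)   # midpoint of the mandibular symphysis

print(f"MKG angle: {angle_at_vertex(point_m, point_k, point_g):.1f} degrees")
```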

Object detection

The Vision Marker II, a web tool developed by Digital Storemesh, was employed to manually annotate all 494 radiographic images. These annotations were subsequently validated by expert orthodontists. Following this validation, Keypoint detection models based on a Region-based Convolutional Neural Network (R-CNN) were implemented with Detectron2,[10,11] an object detection platform from Facebook AI Research, running in Google Colaboratory. This platform facilitated Python coding and execution for locating and labeling the three skeletal anatomical landmarks, as well as for measuring the MKG angle [Figure 1].
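As a rough sketch of this step, the fragment below shows how a Keypoint R-CNN can be configured in Detectron2 for three cephalometric landmarks. The dataset name, annotation paths, class count, and solver settings are placeholders and assumptions, not the configuration actually used in this study.

```python
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.data import MetadataCatalog
from detectron2.data.datasets import register_coco_instances
from detectron2.engine import DefaultTrainer

# Hypothetical COCO-format annotations exported from the labeling tool.
register_coco_instances("ceph_train", {}, "annotations/train.json", "images/train")
meta = MetadataCatalog.get("ceph_train")
meta.keypoint_names = ["M", "K", "G"]          # the three MKG landmarks
meta.keypoint_flip_map = []                    # no left/right pairs to swap (assumption)

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-Keypoints/keypoint_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-Keypoints/keypoint_rcnn_R_50_FPN_3x.yaml")
cfg.DATASETS.TRAIN = ("ceph_train",)
cfg.DATASETS.TEST = ()
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1            # a single cephalogram region per image (assumption)
cfg.MODEL.ROI_KEYPOINT_HEAD.NUM_KEYPOINTS = 3  # points M, K, and G
cfg.TEST.KEYPOINT_OKS_SIGMAS = [0.05] * 3      # placeholder sigmas for 3 keypoints
cfg.SOLVER.IMS_PER_BATCH = 2                   # placeholder solver settings
cfg.SOLVER.MAX_ITER = 5000

trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
```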

Figure 1: Lateral cephalometric radiograph, labeled by Detectron2, shows three anatomical landmarks (M, K, and G points) connected by a green line to illustrate the MKG angle, with its value displayed in blue.

Model construction

A total of 494 individuals were randomly allocated to either the learning set (also known as the training set) or the test set: 396 were assigned to the learning set and 98 to the test set. The learning set served as the data from which the algorithm learned to generate the predictive model. Overfitting can occur when the model excels at classifying samples in the training set but fails to generalize to unseen data samples. To mitigate this risk, a validation set was separated from the training set and used to validate the model performance during training. Iterative learning was terminated when validation errors no longer decreased significantly, and the best-fit model was selected. The final model was then applied to the test set, which was used to evaluate its performance and provide an unbiased final performance metric. A visual representation of the training, validation, and test split is shown in [Figure 2].
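A minimal sketch of such a split, using scikit-learn and synthetic placeholder data, is shown below. The 98-case test allocation mirrors the reported split, while the size of the validation subset and the random seeds are assumptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Placeholder data: one MKG angle per case and a binary surgery label (hypothetical values).
rng = np.random.default_rng(0)
X = rng.normal(loc=120.0, scale=8.0, size=(494, 1))   # hypothetical angle values
y = rng.integers(0, 2, size=494)                      # hypothetical surgery labels

# 98 of the 494 cases are held out as the test set, matching the reported split.
X_learn, X_test, y_learn, y_test = train_test_split(
    X, y, test_size=98, random_state=42, stratify=y)

# A validation subset is separated from the learning set to monitor overfitting;
# the 20% fraction here is an assumption, not a figure reported in the paper.
X_train, X_val, y_train, y_val = train_test_split(
    X_learn, y_learn, test_size=0.2, random_state=42, stratify=y_learn)
```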

Figure 2: Train-valid-test split in machine learning.

The MKG angle measurement values from the lateral cephalograms were used as input for the ANN. Normalization was performed to scale the input data to a range from 0 to 1. The applied machine-learning model comprised a four-layer neural network: An input layer, two hidden layers (with 64 nodes in the first layer and 24 nodes in the second layer), and an output layer. The model underwent training for 10,000 epochs with a batch size of 32 and a learning rate of 0.01. Activation functions were applied to enhance deep network learning, utilizing a rectified linear unit[12] for the hidden layers and a sigmoid function for the output layer. The neural network models were generated using Keras,[13] a neural network framework in Python, within the Google Colaboratory. Python was also employed for adjusting weight values through backpropagation. The precision of the model was assessed and statistically processed.

Finally, the outcome was displayed as a number between 0 and 1. If the output approached 1, orthognathic surgery was recommended; conversely, if the output approached 0, non-surgical treatment was considered.
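Continuing from the split sketch above (which defines X_train, y_train, X_val, and y_val), the following Keras fragment reproduces the described network. The layer sizes, activations, epoch count, batch size, and learning rate are taken from the text; the min-max scaling details and the choice of optimizer and loss function are assumptions, since the paper does not specify them.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Scale the MKG angle inputs to the 0-1 range, as described in the text.
x_min, x_max = X_train.min(), X_train.max()
X_train_n = (X_train - x_min) / (x_max - x_min)
X_val_n = (X_val - x_min) / (x_max - x_min)

# Four-layer network: input, two hidden layers (64 and 24 nodes, ReLU),
# and a single sigmoid output node producing a value in the 0-1 range.
model = keras.Sequential([
    keras.Input(shape=(1,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(24, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])

# The optimizer and loss are not specified in the paper; SGD with the reported
# learning rate of 0.01 and binary cross-entropy are assumptions.
model.compile(optimizer=keras.optimizers.SGD(learning_rate=0.01),
              loss="binary_crossentropy", metrics=["accuracy"])

history = model.fit(X_train_n, y_train,
                    validation_data=(X_val_n, y_val),
                    epochs=10_000, batch_size=32, verbose=0)

# Outputs near 1 suggest orthognathic surgery; outputs near 0 suggest
# non-surgical treatment.
predictions = model.predict(X_val_n)
```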

Statistical analysis

For object detection, the agreement between the two measurement methods used in data collection, WebCeph and the object detection model, was assessed using the Intraclass Correlation Coefficient (ICC). The significance level was set at P < 0.001 with a 95% confidence interval. These analyses were conducted using the Statistical Package for the Social Sciences (SPSS) Statistics 18.0 (SPSS Inc., Chicago, IL, USA).
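Outside SPSS, the same agreement analysis could be sketched with the pingouin package as below. The column names and angle values are hypothetical, and the choice of ICC form (ICC2: two-way random effects, absolute agreement, single measures) is an assumption, as the paper does not state which variant was computed.

```python
import pandas as pd
import pingouin as pg

# Long-format table: one row per (case, method) pair with the measured MKG angle.
# The "webceph" and "detector" values here are hypothetical.
df = pd.DataFrame({
    "case":   [1, 1, 2, 2, 3, 3],
    "method": ["webceph", "detector"] * 3,
    "angle":  [121.4, 121.9, 115.2, 114.8, 128.7, 128.3],
})

icc = pg.intraclass_corr(data=df, targets="case", raters="method", ratings="angle")
# ICC2 is shown here as one plausible choice of ICC form.
print(icc.set_index("Type").loc["ICC2", ["ICC", "CI95%", "pval"]])
```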

For model construction, comprehensive statistical approaches were employed. These included analyzing the training loss graph to monitor the model’s performance over epochs. In addition, a confusion matrix was utilized to evaluate the model’s accuracy and assess its predictive capabilities.

RESULTS

The study included 494 cases with Class I, II, and III skeletal relationships, which were analyzed by two experienced orthodontists (SM and ST). Of these cases, 118 (23.89%) exhibited a Class I skeletal pattern, 95 (19.23%) exhibited a Class II skeletal pattern, and 281 (56.88%) exhibited a Class III skeletal pattern.

The automated digital cephalometric tracing tool, WebCeph, was employed to measure all 494 samples, demonstrating a level of precision and accuracy comparable to manual measurements. It located the anatomical landmarks of interest and measured the MKG angle, and these locations and measurements were subsequently verified by experienced orthodontists (SM and ST). Agreement between the two measurement methods, WebCeph and the object detection model, was almost perfect, as evidenced by an ICC of 0.983 within the 95% confidence interval (P < 0.001). In addition, a plot comparing MKG angle measurements from the two methods is presented in [Figure 3] to further illustrate the agreement.

Figure 3: The plot compares MKG angle measurements from two methods, WebCeph™ and the object detection model, with each red dot representing an individual data point to demonstrate the high agreement between the measurements.

A loss function was utilized to assess the performance of the object detection model; a loss approaching 0 indicates successful training. The training loss value for the object detection model was 3.0510, as illustrated by the graph showing a continuous decrease in loss until it stabilized [Figure 4]. Model performance metrics, such as the root mean square error (RMSE)[14] and the percentage of detected joints (PDJ),[15] were also employed. The RMSE was measured at 2.68 pixels. The PDJ, computed with a threshold of 0.05 on the distance between the prediction and the ground truth, was 0.99, indicating a high level of accuracy.
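For clarity, a minimal sketch of how these two landmark-accuracy metrics can be computed is given below, using hypothetical coordinates; the use of the bounding-box diagonal as the PDJ normalization length is an assumption.

```python
import numpy as np

def rmse_pixels(pred, truth):
    """Root mean square of the Euclidean landmark errors, in pixels."""
    return np.sqrt(np.mean(np.sum((pred - truth) ** 2, axis=-1)))

def pdj(pred, truth, bbox_diagonals, threshold=0.05):
    """Percentage of Detected Joints: a landmark counts as detected when its error
    is below threshold * a per-image normalization length (bounding-box diagonal
    here, which is an assumption)."""
    dist = np.linalg.norm(pred - truth, axis=-1)            # (n_images, n_points)
    detected = dist < threshold * bbox_diagonals[:, None]
    return detected.mean()

# Hypothetical predictions and ground truths: 2 images x 3 landmarks (M, K, G).
truth = np.array([[[310, 400], [265, 450], [330, 610]],
                  [[305, 398], [260, 447], [325, 605]]], dtype=float)
pred = truth + np.random.default_rng(1).normal(scale=2.0, size=truth.shape)
diag = np.array([900.0, 880.0])                             # hypothetical bbox diagonals

print(f"RMSE: {rmse_pixels(pred, truth):.2f} px, PDJ@0.05: {pdj(pred, truth, diag):.2f}")
```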

Figure 4: The object detection model’s training loss graph, which was completed by Python in Google Colaboratory, illustrates a decrease in training loss to the point of stability. The value of the training loss is 3.0510.

For model construction, the model achieved an orthognathic surgery diagnosis accuracy of 70.41%, with a training loss value of 0.6163. The training loss graph for predicting orthognathic surgery shows a decrease in training loss until it reaches a point of stability, as illustrated in [Figure 5]. In addition, instances of misdiagnosis are shown in a confusion matrix [Figure 6], which indicates that out of 98 cases, 29 were misdiagnosed. The sensitivity of the model was 72.5%, and the specificity was 68.97%.
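These figures follow directly from the confusion-matrix counts reported in [Figure 6], as the short check below illustrates.

```python
# Counts taken from the confusion matrix in Figure 6 (98 test cases).
tp, fn, fp, tn = 29, 11, 18, 40

accuracy     = (tp + tn) / (tp + tn + fp + fn)   # 69/98 = 0.7041 -> 70.41%
sensitivity  = tp / (tp + fn)                    # 29/40 = 0.725  -> 72.5%
specificity  = tn / (tn + fp)                    # 40/58 = 0.6897 -> 68.97%
misdiagnosed = fp + fn                           # 29 of 98 cases

print(accuracy, sensitivity, specificity, misdiagnosed)
```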

Figure 5: The training loss graph of the model for predicting orthognathic surgery illustrates a decrease in training loss to the point of stability. The value of the training loss is 0.6163.
Figure 6: A confusion matrix illustrating classification results for orthognathic surgery cases, showing 29 true positives, 11 false negatives, 18 false positives, and 40 true negatives out of a total of 98 cases. The colors range from darker shades (indicating lower values) to lighter shades (indicating higher values), visually emphasizing the distribution of classifications.

DISCUSSION

This study reports the development of a decision-making system for orthognathic diagnosis utilizing novel anatomical landmarks, specifically the MKG angle, which comprises points M, K, and G representing distinct anatomical features. The system is based on a Keypoint R-CNN for locating and annotating these anatomical points. The Keypoint R-CNN is a type of neural network recognized for its effectiveness in object localization within 2D images and for accurately identifying boundary points for various categories of objects.[16]

Cephalometric analysis is a standard tool for pre-operative orthodontic screening, definitive diagnosis, decision-making, and treatment planning. It illustrates the relationships between skeletal structures, teeth, and facial soft tissues.[17] Despite being considered the “gold standard,” manual cephalometric tracing and analysis are time-consuming. WebCeph, a 2D AI-driven cephalometric software, is now available as both a web-based platform and a mobile phone application. This software stands out for its AI-powered automated identification of anatomical landmarks and measurements. It has demonstrated strong agreement with manual tracing, making it a valuable tool for routine cephalometric analysis and clinical research.[9,18]

In this study, there was almost perfect agreement between the two measurement methods, WebCeph and the object detection model. Moreover, the model performance metrics, including the RMSE and PDJ values, demonstrated a satisfactory level of accuracy.

In previous studies, the success rates of neural network-based decision support systems for orthognathic surgery were investigated. Choi et al.[6] utilized an ANN-based system, demonstrating a 96% diagnostic agreement between the actual diagnosis and the AI model’s prediction. Similarly, Lee et al.[19] employed deep convolutional neural networks, achieving diagnostic agreement rates ranging from 95.4% to 96.4%. In addition, Chaiprasittikul et al.[7] developed and validated a Keypoint R-CNN specifically for lateral cephalometric image detection and deep learning classification, showing a diagnostic agreement of 96.3%. While no previous studies have utilized the MKG angle as a parameter, this study applies the MKG angle to an ANN machine-learning algorithm using 2D lateral cephalograms. This novel approach establishes the MKG angle as a key component in a pre-operative decision-making system for orthognathic diagnosis.

However, the accuracy of deep learning models significantly depends on the volume of available training data. To enhance our model’s accuracy, future research could focus on augmenting the training set.

Annotating the reference locations for the MKG angle (points M and G) manually can be challenging. Point M is located at the midpoint of the largest circle tangent to the surfaces of the premaxilla and the nasal floor, while point G is the midpoint of the largest circle tangent to the surfaces of the mandibular symphysis. In this study, templates consisting of circles with diameters increasing in 0.5-mm increments aided in accurately determining the centers of the premaxilla and mandibular symphysis. Furthermore, the utilization of 2D cone-beam computed tomography (CBCT)[20] may enhance the AI-assisted object detection method, allowing for more accurate localization of the M, K, and G reference points, thereby improving the prediction accuracy of the model in future studies.

Due to most samples originating from a tertiary hospital (the Faculty of Dentistry in an academic university, which receives referrals from primary and secondary care providers), the distribution of skeletal classes was unbalanced, particularly after 2015, with a predominance of individuals exhibiting Class III skeletal patterns and some borderline cases. This imbalance affected the accuracy of the deep learning results. In addition to increasing the sample size, balancing the distribution of skeletal classes (Class I, II, and III) and excluding borderline cases from the training set could improve the model’s learning ability, thereby enhancing prediction accuracy in future studies.

Since the ANN used only the MKG angle measurement values as input, the model’s ability to learn from other influential variables may have been limited. In addition, the potential of the MKG angle as a marker for assessing jaw discrepancy, discovered in 2020 by Chachada et al., has been supported by only a few studies.[8,21] Therefore, we suggest augmenting the training set with additional parameters, including the MKG angle along with other common cephalometric sagittal parameters such as ANB angle and Wits appraisal to enhance the model’s prediction capabilities and reduce the likelihood of misdiagnosis.

CONCLUSION

A supplementary tool for orthognathic screening, utilizing 2D digital lateral cephalograms and the MKG angle as a parameter, was developed by integrating a neural network model with clinical decision-making. To enhance its efficacy, this AI-assisted orthognathic screening could benefit from increasing the number of cephalograms in the training set and incorporating additional influential parameters. One approach could involve combining the MKG angle with other common cephalometric sagittal parameters in larger sample sizes, balancing the distribution of skeletal classes (Class I, II, and III), and excluding borderline cases. Furthermore, the utilization of 2D CBCT to address superimposed anatomical landmarks in future studies may improve diagnostic accuracy.

Acknowledgment

We would like to express our gratitude to Assistant Professor Wasit Limprasert for his valuable technical guidance and support. We also appreciate the technological assistance provided by Digital Storemesh Co. Ltd.

Ethical approval

The research/study was approved by the Institutional Review Board (IRB) of the Faculty of Dentistry/Faculty of Pharmacy, Mahidol University, Bangkok, Thailand, approval number COA.No.MU-DT/PY-IRB 2022/052.0510.

Declaration of patient consent

Patient consent was not required as the patients’ identity is not disclosed or compromised.

Conflicts of interest

There are no conflicts of interest.

Use of artificial intelligence (AI)-assisted technology for manuscript preparation

The authors confirm that there was no use of artificial intelligence (AI)-assisted technology for assisting in the writing or editing of the manuscript and no images were manipulated using AI.

Financial support and sponsorship

Nil.

References

  1. . Bayesian network analysis: A new approach to diagnosis and prognosis. J Dent Med Sci. 2016;15:1-4.
    [CrossRef] [Google Scholar]
  2. , . Cephalometric methods of prediction in orthognathic surgery. J Maxillofac Oral Surg. 2011;10:236-45.
    [CrossRef] [PubMed] [Google Scholar]
  3. , . Referring adult patients for orthodontic treatment. J Am Dent Assoc. 1999;130:73-9.
    [CrossRef] [PubMed] [Google Scholar]
  4. . Bayesian-based decision support system for assessing the needs for orthodontic treatment. Healthc Inform Res. 2018;24:22-8.
    [CrossRef] [PubMed] [Google Scholar]
  5. , , . Artificial neural network modeling for deciding if extractions are necessary prior to orthodontic treatment. Angle Orthod. 2010;80:262-6.
    [CrossRef] [PubMed] [Google Scholar]
  6. , , , , , , et al. Artificial intelligent model with neural network machine learning for the diagnosis of orthognathic surgery. J Craniofac Surg. 2019;30:1986-9.
    [CrossRef] [PubMed] [Google Scholar]
  7. , , , , , . Application of a multi-layer perceptron in preoperative screening for orthognathic surgery. Healthc Inform Res. 2023;29:16-22.
    [CrossRef] [PubMed] [Google Scholar]
  8. , , , , , . MKG angle: A true marker for maxillomandibular discrepancy. J Indian Orthod Soc. 2020;54:220-5.
    [CrossRef] [Google Scholar]
  9. , , , , . Evaluation of fully automated cephalometric measurements obtained from web-based artificial intelligence driven platform. BMC Oral Health. 2022;22:132.
    [CrossRef] [PubMed] [Google Scholar]
  10. , , , , . Detectron2: A PyTorch-based modular object detection library [open source on the internet] . Menlo Park CA: Facebook; Available from: https://ai.meta.com/blog/-detectron2-a-pytorch-based-modular-object-detection-library [Last accessed on 2024 May 31]
    [Google Scholar]
  11. , , , , . Detectron2 [open source on the internet] . San Francisco, CA: Github.com; Available from: https://github.com/facebookresearch/detectron2 [Last accessed on 2024 May 31]
    [Google Scholar]
  12. . Deep learning using rectified linear units (ReLU) [monograph on the internet] . Ithaca, NY: arXiv.org; Available from: https://arxiv.org/abs/1803.08375 [Last accessed on 2024 May 31]
    [Google Scholar]
  13. , . Deep learning with Keras. Birmingham, UK: Packt Publishing Ltd.
    [Google Scholar]
  14. , , . Evaluation metrics for deep learning imputation models. In: , , , eds. AI for disease surveillance and pandemic intelligence. 1st ed. Cham: Springer International Publishing. p. 309-22.
    [CrossRef] [Google Scholar]
  15. , , . Comparative analysis of skeleton-based human pose estimation. Future Internet. 2022;14:380.
    [CrossRef] [Google Scholar]
  16. , , , , , . Joint object contour points and semantics for instance segmentation. Expert Syst. 2024;41:e13504.
    [CrossRef] [Google Scholar]
  17. , , , . Cephalometric analysis for orthognathic surgery. Ann Essences Dent. 2015;7:1-10.
    [Google Scholar]
  18. , , , , . Reproducibility of linear and angular cephalometric measurements obtained by an artificial-intelligence assisted software (WebCeph) in comparison with digital software (AutoCEPH) and manual tracing method. Dental Press J Orthod. 2023;28:e2321214.
    [CrossRef] [PubMed] [Google Scholar]
  19. , , , , . Deep convolutional neural networks based analysis of cephalometric radiographs for differential diagnosis of orthognathic surgery indications. Appl Sci. 2020;10:2124.
    [CrossRef] [Google Scholar]
  20. , , , . Precision of cephalometric landmark identification: cone-beam computed tomography vs conventional cephalometric views. Am J Orthod Dentofacial Orthop. 2009;136:312.e1-10. discussion 312-3
    [CrossRef] [PubMed] [Google Scholar]
  21. , , , , , . Predictability of MKG angle and comparison with ANB angle, W angle, Yen angle, Beta angle and Pi angle a cephalometric study. RUHS J Health Sci. 2023;8:160-3.
    [CrossRef] [Google Scholar]