Sultan Mujib Dabiry1, Yunus Demirtaş2, Fuat Türk3, Tuğrul Yıldırım2, Gökhan Ayık4, Gökhan Çakmak2

1Department of Emergency Medicine, Medical Park Ankara Hospital, Ankara, Türkiye
2Department of Orthopedics and Traumatology, Yüksek İhtisas University, Ankara, Türkiye
3Gazi University, Computer Engineering, Faculty of Technology, Ankara, Türkiye
4Department of Orthopedics and Traumatology, Hacettepe University Faculty of Medicine, Ankara, Türkiye

Keywords: Artificial intelligence, deep learning, magnetic resonance imaging, osteochondral lesions of talus, ResNet50.

Abstract

Objectives: This study aims to evaluate the diagnostic performance of a ResNet50-based convolutional neural network (CNN) in detecting osteochondral lesions of the talus (OLTs) on magnetic resonance imaging (MRI) and to compare its efficacy between T1- and T2-weighted sequences.

Materials and methods: A total of 219 ankle MRI scans were reviewed retrospectively, including 60 with confirmed OLTs and 159 without lesions. From each study, coronal and sagittal T1- and T2-weighted images were extracted and standardized to 224 × 224 pixels. Augmentation techniques were applied to strengthen model training. Data were divided into training, validation, and test sets in a 60:20:20 split. A ResNet50 model initialized with ImageNet weights was fine-tuned using cross-entropy loss with class weighting. Diagnostic performance was summarized with accuracy, precision, recall, and F1-scores.

Results: The model performed better on T1 sequences, achieving an accuracy of 94.1% (95% confidence interval [CI] 88.3-97.1%) and an area under the curve [AUC] of 0.93 (95% CI 0.87-0.97), with patient cases classified at 0.92 precision and 0.82 recall. Healthy controls in the T1 group were recognized with 0.95 precision and 0.98 recall. In contrast, T2 sequences were less reliable, showing an accuracy of 87.2% (95% CI 80.5-91.9%) and an AUC of 0.91 (95% CI 0.85-0.95). Precision for patient cases in the T2 group was notably lower (0.65) despite a recall of 0.81. Misclassifications were more frequent in the T2 dataset, as evidenced by the confusion matrices.

Conclusion: Even with a relatively modest dataset, the ResNet50 model delivered strong results for T1-weighted MRI. Although T2 images proved more challenging, these findings suggest that deep learning can add value to the routine assessment of OLTs.

Introduction

Osteochondral lesions of the talus (OLTs) are relatively common, yet may present diagnostic difficulties. First described by Kappis[1] as osteochondritis dissecans, the term “OLT” encompasses a variety of related pathologies, including osteochondritis dissecans, osteochondral defects, and small fractures involving both cartilage and bone.[2] These lesions affect both the articular cartilage and the underlying subchondral bone of the ankle joint. Due to their insidious onset and variable presentation, OLTs can remain asymptomatic or cause only intermittent pain, leading to delayed or missed diagnosis.[3] Symptom severity tends to escalate with progressive detachment of the lesion, potentially resulting in joint swelling, a sensation of instability, and, in rare cases, mechanical locking of the ankle.[4,5]

Magnetic resonance imaging (MRI) is considered the gold standard for diagnosing OLTs owing to its ability to visualize cartilage integrity, subchondral bone changes, and associated soft-tissue abnormalities.[6] In clinical practice, both T1- and T2-weighted sequences are routinely used, as they highlight different tissue characteristics and pathological features. However, accurate diagnosis usually requires a careful combination of advanced imaging techniques with detailed clinical evaluation by a specialist familiar with ankle anatomy. Whether these sequence-dependent differences translate into measurable differences in diagnostic performance when assessed using deep learning (DL) models remains unclear.

Over the past few years, convolutional neural networks (CNNs), a subset of DL, have become widely adopted in the analysis of biomedical images, offering significant advantages over traditional rule-based approaches. These models have demonstrated high performance across various modalities and applications, including tumor grading on MRI,[7] image analysis of breast cancer,[8] diagnosing and grading knee osteoarthritis on plain radiographs,[9] diagnosing supraspinatus tears on MRI images,[10] classification of thyroid nodules in ultrasound,[11,12] and detection of pulmonary nodules in computed tomography (CT) scans.[13] Despite these advances, the application of DL in musculoskeletal imaging, particularly for detecting osteochondral lesions, remains relatively limited. This is partly attributable to the relative rarity of OLTs, the scarcity of large, well-annotated MRI databases, and the heterogeneity of lesion appearance across imaging planes and MRI sequences. In a recent study, Wang et al.[14] developed a CNN model achieving promising diagnostic performance for lesion detection, but it focused exclusively on T2-weighted images from patients. To date, comparative analyses evaluating DL performance across different MRI sequences, while incorporating both healthy controls and patients, have not been reported.

In the present study, we hypothesized that DL, augmented by transfer learning and data preprocessing strategies, could achieve high diagnostic accuracy and offer a viable decision-support tool in musculoskeletal radiology. We, therefore, aimed to investigate the performance of a ResNet50-based CNN in diagnosing OLTs using T1- and T2-weighted MRI images and to assess the feasibility and sequence-specific behavior of DL-based detection of talar lesions.

Patients and Methods

Patient selection and image collection

This single-center, retrospective study was conducted at Liv Hospital Ankara, Department of Orthopedics and Traumatology between February 2025 and September 2025. We retrospectively screened a total of 478 ankle and foot MRI scans from our hospital’s radiology archive using the tags “foot” and “ankle”, without stratification by sex or time restriction. The screening was conducted over three independent sessions. All images were reviewed by three orthopedic surgeons with over five years of clinical experience, who identified and selected scans with OLTs (OLT-positive) and scans without OLTs (OLT-negative) for inclusion in the study. Case classification was based on imaging findings documented in the original radiology reports. Reviewers were blinded to patient clinical data beyond the MRI images and reports. The study protocol was approved by the Liv Hospital Ankara Ethics Committee (Date: 01.09.2025, No: 2025/026). The study was conducted in accordance with the principles of the Declaration of Helsinki.

A total of 159 MRI scans from patients without any identifiable OLT and 60 scans from patients with confirmed OLTs were included based on pre-specified inclusion and exclusion criteria. The inclusion criterion was a confirmed diagnosis of OLT; exclusion criteria were previous surgery around the ankle, infection or bone tumor of the talus, and poor image quality. For each case, T1- and T2-weighted coronal and sagittal plane images were selected. Representative images were directly extracted from the Digital Imaging and Communications in Medicine (DICOM) system as two-dimensional (2D) images for further processing. To ensure consistency and reduce selection bias, image slices were chosen based on the consensus of three orthopedic surgeons.

Dataset preparation

The dataset utilized in this study comprises MRI images classified into two primary groups: “patients” and “healthy individuals”. For each group, the corresponding images were organized into two separate folders (“T1” and “T2”) representing different MRI sequences. To handle data loading and preprocessing, we developed a custom dataset class in Python using the PyTorch library. The preprocessing pipeline involved resizing all images to a standard resolution of 224 × 224 pixels, after which data augmentation was applied exclusively to the training set to improve model robustness and reduce overfitting. Augmentation operations included random horizontal flipping and random rotation within ±10 degrees. Augmentation was performed on-the-fly during training rather than by generating a fixed number of synthetic images per original image. As a result, each image could be presented to the model in multiple augmented forms across different training epochs, increasing data variability without artificially expanding the dataset size. Images were then converted to PyTorch tensors and normalized using the mean and standard deviation derived from the ImageNet dataset.

The dataset was randomly split into training (60%), validation (20%), and test (20%) sets using a programmatic shuffling procedure implemented in Python with a fixed random seed to guarantee reproducibility. All images had an equal probability of being assigned to any subset. To avoid data leakage, the split was performed at the patient level rather than at the image level, ensuring that images from the same patient were not distributed across different subsets. The validation set was used exclusively for model tuning and convergence monitoring, while the test set was reserved for final evaluation. Given the limited dataset and single-center, retrospective study design, this split balanced training adequacy with unbiased performance assessment. Table I outlines the distribution of images across both T1 and T2 folders for patient and healthy groups. Initially, the dataset included 87 patient and 302 healthy subject subfolders. Following data augmentation, 595 images were obtained for T1 and 665 for T2.
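A patient-level split with a fixed seed can be sketched as below. The helper name, seed value, and the assumption that each scan carries a patient identifier are illustrative; the study's actual splitting code is not published.

```python
import random

def split_patients(patient_ids, seed=42):
    """Shuffle patient IDs reproducibly, then split 60/20/20.

    Splitting at the patient level keeps every image from a given patient
    inside a single subset, preventing train/test leakage.
    """
    ids = sorted(patient_ids)          # fixed order before shuffling
    random.Random(seed).shuffle(ids)   # fixed seed -> reproducible split
    n_train = int(0.6 * len(ids))
    n_val = int(0.2 * len(ids))
    train = ids[:n_train]
    val = ids[n_train:n_train + n_val]
    test = ids[n_train + n_val:]
    return train, val, test
```

Images are then assigned to a subset according to which list their patient ID falls into, rather than being shuffled individually.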

Due to class imbalance in the training set, we implemented a class-weighting strategy to reduce biased learning and promote better generalization.

Statistical analysis

Statistical analysis was performed using Python version 3.10 software (Python Software Foundation, Delaware, USA). To assess the reliability of the reported metrics, 95% confidence intervals (CIs) were calculated for accuracy, precision, recall, and the receiver operating characteristic area under the curve (ROC-AUC) using the Wilson score interval method. This method was selected for its superior coverage in small samples and with imbalanced class distributions. All performance visualizations, including confusion matrices and ROC curves, were generated using the Scikit-learn and Matplotlib libraries in Python.
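The Wilson score interval is straightforward to compute. As a sanity check on the sketch below, using the 112 of 119 correct T1 test images derivable from the Results (23/28 patients plus 89/91 controls) reproduces the reported 95% CI of 88.3-97.1% for accuracy:

```python
import math

def wilson_ci(successes, total, z=1.96):
    """Wilson score interval for the proportion successes/total
    (z = 1.96 gives nominal 95% coverage)."""
    if total == 0:
        return 0.0, 0.0
    p = successes / total
    denom = 1 + z**2 / total
    center = (p + z**2 / (2 * total)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2))
    return center - half, center + half

# 112/119 correct on the T1 test set:
low, high = wilson_ci(112, 119)  # ≈ (0.883, 0.971)
```

Unlike the normal-approximation (Wald) interval, the Wilson interval never extends outside [0, 1] and behaves well for proportions near 0 or 1.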

Model architecture and training

For classification, we selected the ResNet50 (Residual Network-50) architecture, a deep CNN. The ResNet architectures utilize residual (skip) connections, also known as residual learning, to address challenges such as vanishing gradients and accuracy saturation that arise during the training of deep networks. These connections enable information from earlier layers to be directly transmitted to deeper layers, facilitating the training of deeper architectures.[15,16] To enhance performance with our limited dataset, we applied transfer learning by initializing the model with weights pre-trained on the ImageNet dataset. The convolutional layers of the ResNet50 backbone were fine-tuned using the study dataset rather than being fully frozen, allowing the model to adapt pretrained features to the specific characteristics of ankle MRI images. The final fully connected classification layer was replaced and trained from scratch to perform binary classification.

We used the Cross-Entropy Loss function to evaluate classification error during training and to address class imbalance. Class weights were determined based on the relative frequency of each class in the training set, assigning greater importance to the underrepresented class and penalizing its misclassification more strongly. The learning rate was set to 0.001, and the model was trained for 60 epochs.
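A minimal sketch of the class-weighting scheme described above. The text specifies frequency-based weights but not the exact formula or optimizer, so the inverse-frequency weighting and the Adam optimizer shown here are assumptions; the learning rate (0.001) and epoch count (60) follow the text.

```python
import torch
import torch.nn as nn

def make_class_weights(labels, num_classes=2):
    """Weight each class by total / (num_classes * count), so the rarer
    (patient) class is penalized more strongly when misclassified."""
    counts = torch.bincount(torch.as_tensor(labels), minlength=num_classes)
    return counts.sum() / (num_classes * counts.clamp(min=1).float())

# Usage sketch (model and dataloader omitted):
# criterion = nn.CrossEntropyLoss(weight=make_class_weights(train_labels))
# optimizer = torch.optim.Adam(model.parameters(), lr=0.001)  # optimizer assumed
# for epoch in range(60):
#     ...  # standard forward/backward passes
```

With this weighting, a training set of 90 healthy and 30 patient images would weight the patient class three times as heavily as the healthy class.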

Results

Model training and evaluation were conducted on both T1 and T2 image sets using identical hyperparameters. Training was performed for 60 epochs with a learning rate of 0.001. Figures 1 and 2 illustrate the training and validation loss and accuracy curves for T1 and T2, respectively. For the T1-weighted dataset, the validation curve closely followed the training curve, indicating strong generalization. In contrast, the T2-weighted curves showed minor fluctuations in the final epochs, suggesting slight learning instability. For the T1-weighted dataset, the model achieved a final test accuracy of 94.1% (95% CI 88.3-97.1%). For the patient class, the model demonstrated a precision of 0.92 (95% CI 0.75-0.98) and a recall of 0.82 (95% CI 0.64-0.92). Similarly, for the T2-weighted dataset, the model achieved a final test accuracy of 87.2% (95% CI 80.5-91.9%). Precision for the patient class was lower at 0.65 (95% CI 0.46-0.80), despite a recall of 0.81 (95% CI 0.63-0.92).


Confusion matrix analysis and ROC curve further demonstrated this performance gap between the two MRI sequences. The T1-weighted dataset (Figure 3a) showed superior diagnostic reliability, correctly identifying 23 out of 28 patient samples and 89 out of 91 healthy controls. This resulted in an overall accuracy of 94.1%, with a high sensitivity of 82.1% and specificity of 97.8%. The discriminative power of the T1 model was confirmed by the ROC analysis (Figure 3b), which yielded an AUC of 0.93 (95% CI 0.87-0.97), indicating excellent class separation.

In contrast, the T2-weighted dataset (Figure 4a) exhibited lower diagnostic precision, particularly regarding false positives. While the model correctly identified 22 out of 27 patients, it misclassified 12 out of 106 healthy individuals as having pathology. This led to a lower overall accuracy of 87.2% and a specificity of 89.0%. The ROC curve for the T2 dataset (Figure 4b) yielded an AUC of 0.91 (95% CI 0.85-0.95), which, while strong, remains below the T1-weighted performance, although the confidence intervals overlap. The classification performance metrics for both datasets are summarized in Table II. Overall, these findings suggest that T1-weighted sequences provide more distinct imaging features, allowing the ResNet50 architecture to more effectively differentiate ankle pathology from healthy anatomy. Figure 5 illustrates representative examples of correctly and incorrectly classified T1- and T2-weighted images during testing.
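The metrics above can be reproduced from predicted scores with Scikit-learn, the library the study reports using for evaluation plots. The helper below is an illustration, not the authors' code; labels and scores are placeholders, with class 1 denoting the patient class and the 0.5 decision threshold an assumption.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, confusion_matrix,
                             precision_score, recall_score, roc_auc_score)

def summarize(y_true, y_score, threshold=0.5):
    """Compute accuracy, precision, recall (sensitivity), specificity, and
    ROC-AUC from binary labels and patient-class probabilities."""
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred),  # patient class
        "recall": recall_score(y_true, y_pred),        # = sensitivity
        "specificity": tn / (tn + fp),
        "auc": roc_auc_score(y_true, y_score),
    }
```

Note that accuracy, precision, recall, and specificity depend on the chosen threshold, while the AUC summarizes performance across all thresholds, which is why the two sequences can have similar AUCs yet differ noticeably in precision.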



Discussion

In this study, we developed and evaluated a CNN based on the ResNet50 architecture, trained on T1- and T2-weighted MRI images, to diagnose OLTs, assessing its sensitivity and specificity. Our study results indicated stronger diagnostic accuracy with T1-weighted sequences (94%) compared to T2-weighted sequences (87.2%), which may be due to the clearer anatomical detail and subchondral bone contrast offered by T1 imaging. These features likely provided the model with more reliable cues for lesion detection, suggesting that T1-weighted sequences may offer advantages for AI-assisted detection of OLTs.

Comparing our findings to prior research, various DL models have been developed for OLT diagnosis, utilizing different imaging modalities and reporting varied performance metrics. Our approach differs from previous studies in several key aspects. While some previous models employed object detection frameworks or focused on single-sequence inputs, we used a ResNet50-based classification architecture and explicitly compared sequence-specific performance. In addition, our dataset included both OLT-positive and OLT-negative MRI scans, whereas some other studies evaluated only lesion-positive cases.[14] Although the sample size in the present study was relatively modest, performance was assessed using standard classification metrics, allowing a direct comparison between T1- and T2-weighted sequences. To illustrate, in contrast to our model, which was trained on MRI images, a CNN model trained on anteroposterior ankle radiographs achieved an accuracy of 81.58%, with a sensitivity of 81.6%, specificity of 81.8%, and an AUC of 0.774, significantly outperforming two experienced clinicians who achieved approximately 60% detection accuracy on the same radiographs.[17] Our model demonstrated notably higher diagnostic accuracy, consistent with the established role of MRI as the gold standard for the diagnosis of OLTs. A recent study on meniscus tear detection using YOLOv8 and EfficientNetV2 demonstrated that even with a relatively small dataset of 642 knees, highly accurate results could be achieved.[18] In line with these findings, our study on OLT yielded reliable and precise outcomes despite being based on a comparatively small cohort of 219 patients, suggesting promising performance with limited data; however, external validation is required to determine generalizability.

Regarding MRI-based AI models, a Cascade R-CNN model, also utilizing a ResNet50 backbone, was developed to automatically screen for OLTs using MRI images, reporting an overall mean average precision (mAP) of 0.825 and a mean average recall (mAR) of 0.930. Specifically for lesion detection, this model achieved an average precision of 0.550, while demonstrating higher precision for detecting the talus itself (0.950) and gaps (0.975). While our reported accuracy of 94% on T1-weighted sequences is high, it is comparable to the performance reported by advanced AI systems such as the OLTS-AI system, which achieved 96.7% sensitivity, 96.8% specificity, and 96.7% accuracy with an AUC of 0.98 when trained on MRI images for OLT diagnosis.[19]

Diagnosing OLTs fundamentally relies on imaging. Due to its accessibility and low cost, conventional radiography (X-ray) is typically used as the initial imaging modality; however, it has notable limitations. X-rays tend to overlook smaller or early-stage lesions, and their notably low sensitivity for detecting OLTs (59 to 70%) often necessitates more advanced imaging techniques, limiting their utility for definitive diagnosis in clinical scenarios. Computed tomography offers superior bony detail, defining lesion location, size, and subchondral bone integrity; although its specificity is excellent (99%), its sensitivity (81%) is lower for subtle cartilage-only lesions, and it carries the disadvantage of radiation exposure.[20] Magnetic resonance imaging stands as the clinical gold standard for OLT diagnosis, providing excellent visualization of articular cartilage, subchondral bone, and associated soft tissue injuries. MRI boasts a high sensitivity of up to 96% and a specificity ranging from 89 to 100% for OLT identification, and it is particularly effective at detecting unstable lesions.[6] As emphasized in recent systematic reviews, the limited self-repair ability of cartilage renders OLTs clinically significant, with persistent pain and risk of joint degeneration if left undiagnosed.[21] This further highlights the value of AI-assisted diagnostic models that can reduce missed or delayed detection. However, the diagnostic consistency of MRI can be influenced by the radiologist’s experience and specialist skills, contributing to interobserver variability. Despite its high diagnostic accuracy, OLTs can still be missed by general clinicians, leading to diagnostic errors.
To address these challenges, implementing standardized, computer-aided approaches for MRI-based detection of OLTs may improve diagnostic accuracy by reducing reliance on subjective interpretation and minimizing variability and errors that can result from fatigue or inattention.

Despite the promising results, several limitations should be acknowledged. First, the retrospective design and reliance on a single-center dataset may limit the generalizability of the findings across different institutions, imaging protocols, and scanner types. Conducting multi-center prospective studies would strengthen external validity and help confirm reproducibility. Second, the limited sample size of 219 scans may reduce statistical power and increase the risk of overfitting. Another limitation of this study is the absence of k-fold cross-validation. Although a fixed train-validation-test split was used to ensure an independent test set, cross-validation could further strengthen the assessment of model generalizability. Utilizing larger datasets in future studies would allow for the incorporation of cross-validation strategies, thereby enhancing the robustness and reliability of the model’s performance. Moreover, the absence of sex data limits assessment of potential sex-based differences in model accuracy, which should be addressed in future studies. Another limitation is the use of only 2D coronal and sagittal MRI slices for model training. Although clinically relevant, these slices may not capture the full spatial complexity of osteochondral lesions. Incorporating multi-planar or volumetric imaging could improve detection and characterization. Furthermore, class imbalance within the T2-weighted image group might have negatively impacted model performance despite attempts to mitigate this through class weighting. The model was also trained to perform binary classification, detecting lesion presence or absence without considering lesion grading or related joint abnormalities, which could be clinically significant. Finally, the study lacked a blinded comparison with human experts such as radiologists or orthopedic surgeons, which is important for assessing clinical utility and integration.

In conclusion, we developed a ResNet50-based CNN model to diagnose OLTs using T1- and T2-weighted MRI images in our study. The model showed promising diagnostic accuracy, even with a relatively small dataset, particularly on T1-weighted images, achieving an average accuracy of 94%. Although performance on T2-weighted images was lower, our findings demonstrate the potential of DL approaches to assist in OLT detection and reduce diagnostic variability. However, further studies are needed to enhance model robustness, expand datasets, and validate its clinical utility before routine implementation.

Citation: Dabiry SM, Demirtaş Y, Türk F, Yıldırım T, Ayık G, Çakmak G. High diagnostic accuracy of a ResNet50-based deep learning model for osteochondral lesions of the talus on magnetic resonance imaging. Jt Dis Relat Surg 2026;37(2):543-551. doi: 10.52312/jdrs.2026.2719.

Author Contributions

S.M.D.: Collected, curated, and filtered the imaging data, performed the investigation, contributed to study methodology, and prepared the original draft of the manuscript; Y.D., G.A.: Contributed to study methodology and design, conceptualized the study and supervised the project; F.T.: Performed statistical analyses, developed the machine learning code, and prepared statistical visualizations; Y.D., T.Y., G.A., G.Ç.: Contributed to data validation, analysis, and interpretation, critically reviewed and edited the manuscript. All authors read and approved the final manuscript.

Conflict of Interest

The authors declared no conflicts of interest with respect to the authorship and/or publication of this article.

Financial Disclosure

The authors received no financial support for the research and/or authorship of this article.

Data Sharing Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

AI Disclosure
The authors declare that artificial intelligence (AI) tools were not used, or were used solely for language editing, and had no role in data analysis, interpretation, or the formulation of conclusions. All scientific content, data interpretation, and conclusions are the sole responsibility of the authors. The authors further confirm that AI tools were not used to generate, fabricate, or ‘hallucinate’ references, and that all references have been carefully verified for accuracy.

References

  1. Kappis M. Weitere Beiträge zur traumatisch-mechanischen Entstehung der ‘spontanen’ Knorpelablösungen (sogen. Osteochondritis dissecans). Deutsche Zeitschrift für Chirurgie 1922;171:13-29.
  2. Steele JR, Dekker TJ, Federer AE, Liles JL, Adams SB, Easley ME. Republication of “Osteochondral Lesions of the Talus: Current Concepts in Diagnosis and Treatment”. Foot Ankle Orthop 2023;8:24730114231192961. doi: 10.1177/24730114231192961.
  3. Bruns J, Habermann C, Werner M. Osteochondral lesions of the talus: A review on talus osteochondral injuries, including osteochondritis dissecans. Cartilage 2021;13:1380S-401. doi: 10.1177/1947603520985182.
  4. Zanon G, DI Vico G, Marullo M. Osteochondritis dissecans of the talus. Joints 2014;2:115-23. doi: 10.11138/jts/2014.2.3.115.
  5. Santrock RD, Buchanan MM, Lee TH, Berlet GC. Osteochondral lesions of the talus. Foot Ankle Clin 2003;8:73-90, viii. doi: 10.1016/s1083-7515(03)00007-x.
  6. Verhagen RA, Maas M, Dijkgraaf MG, Tol JL, Krips R, van Dijk CN. Prospective study on diagnostic strategies in osteochondral lesions of the talus. Is MRI superior to helical CT? J Bone Joint Surg Br 2005;87:41-6.
  7. Bhardwaj N, Sood M, Gill SS. Design and development of hypertuned deep learning frameworks for detection and severity grading of brain tumor using medical brain MR images. Curr Med Imaging 2024;20:e15734056288248. doi: 10.2174/0115734056288248240309044616.
  8. Rakhlin A, Shvets A, Iglovikov V, Kalinin AA. Deep convolutional neural networks for breast cancer histology image analysis. In: Lecture Notes in Computer Science. 2018;10882:737-44. doi: 10.1007/978-3-319-93000-8_83.
  9. Tiulpin A, Thevenot J, Rahtu E, Lehenkari P, Saarakkala S. Automatic knee osteoarthritis diagnosis from plain radiographs: A deep learning-based approach. Sci Rep 2018;8:1727. doi: 10.1038/s41598-018-20132-7.
  10. Guo D, Liu X, Wang D, Tang X, Qin Y. Development and clinical validation of deep learning for auto-diagnosis of supraspinatus tears. J Orthop Surg Res 2023;18:426. doi: 10.1186/s13018-023-03909-z.
  11. Toro-Tobon D, Loor-Torres R, Duran M, Fan JW, Singh Ospina N, Wu Y, et al. Artificial intelligence in thyroidology: A narrative review of the current applications, associated challenges, and future directions. Thyroid 2023;33:903-17. doi: 10.1089/thy.2023.0132.
  12. Wu GG, Lv WZ, Yin R, Xu JW, Yan YJ, Chen RX, et al. Deep learning based on ACR TI-RADS can improve the differential diagnosis of thyroid nodules. Front Oncol 2021;11:575166. doi: 10.3389/fonc.2021.575166.
  13. Wang C, Shao J, Lv J, Cao Y, Zhu C, Li J, et al. Deep learning for predicting subtype classification and survival of lung adenocarcinoma on computed tomography. Transl Oncol 2021;14:101141. doi: 10.1016/j.tranon.2021.101141.
  14. Wang G, Li T, Zhu L, Sun S, Wang J, Cui Y, et al. Automatic detection of osteochondral lesions of the talus via deep learning. Front Phys 2022;10:815560. doi: 10.3389/fphy.2022.815560.
  15. He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, NV: IEEE; 2016. p. 770-8. doi: 10.1109/CVPR.2016.90.
  16. Deshpande A, Estrela VV, Patavardhan P. The DCT-CNNResNet50 architecture to classify brain tumors with super-resolution, convolutional neural network, and the ResNet50. Neuroscience Informatics 2021;1:100013. doi: 10.1016/j.neuri.2021.100013.
  17. Shin H, Park D, Kim JK, Choi GS, Chang MC. Development of convolutional neural network model for diagnosing osteochondral lesions of the talus using anteroposterior ankle radiographs. Medicine (Baltimore) 2023;102:e33796. doi: 10.1097/MD.0000000000033796.
  18. Güngör E, Vehbi H, Cansın A, Ertan MB. Achieving high accuracy in meniscus tear detection using advanced deep learning models with a relatively small data set. Knee Surg Sports Traumatol Arthrosc 2025;33:450-6. doi: 10.1002/ksa.12369.
  19. Feng F, Wang H, Yuan P. Automatic recognition and analysis of talus cartilage lesions based on deep learning. Research Square [Preprint] 2024. doi: 10.21203/rs.3.rs-5239493/v1.
  20. Khan I, Ranjit S, Welck M, Saifuddin A. The role of imaging in the diagnosis, staging, and management of the osteochondral lesions of the talus. Br J Radiol 2024;97:716-25. doi: 10.1093/bjr/tqae030.
  21. Huang M, Li Y, Liao C, Lai Q, Peng J, Guo N. Microfracture surgery combined with platelet-rich plasma injection in treating osteochondral lesions of talus: A system review and update meta analysis. Foot Ankle Surg 2024;30:21-6. doi: 10.1016/j.fas.2023.09.004.