Are plain radiographs reliable in Lichtman classification?
Ahmet Nadir Aydemir1, Mehmet Yücens1, Cengiz Eren Cansu2, Ahmet Fahir Demirkan1
1Department of Orthopedics and Traumatology, Pamukkale University Faculty of Medicine, Denizli, Turkey
2Department of Orthopedics and Traumatology, Marmara University Faculty of Medicine, Istanbul, Turkey
Keywords: Agreement, classification, Kienböck, Lichtman, reliability
Abstract
Objectives: This study aims to investigate the reliability of the Lichtman classification among residents, orthopedic surgeons, and hand surgeons.
Materials and methods: This study was carried out with 30 male observers (mean age 37.8 years; range, 26 to 62 years) who agreed to participate in the study. All observers were orthopedic surgeons, separated into three groups of 10 residents, 10 orthopedic surgeons, and 10 hand surgeons. The anteroposterior and lateral wrist radiographs of 20 patients (12 males, 8 females; mean age 49 years; range, 38 to 74 years) diagnosed with Kienböck’s disease were sent to the observers via an e-mail survey, and the observers were asked to classify all 40 radiographs.
Results: The classification of Kienböck’s disease was analyzed by 30 observers on 40 digital radiographs. The overall agreement with the Lichtman classification was fair across all observers (kappa=0.203). When the groups were evaluated separately, the agreement level was poor in group 1 (kappa=0.162) and fair in group 2 (kappa=0.210) and group 3 (kappa=0.252).
Conclusion: A useful classification system in orthopedics and traumatology should classify the type of musculoskeletal disorder reliably, facilitate communication in clinical practice, guide preoperative planning, and enable comparison of results between studies. The Lichtman classification alone is insufficient for these purposes and should be supported by other imaging and measurement techniques.
Introduction
Kienböck's disease is an isolated avascular necrosis of the lunate that results in wrist degeneration at the end stage of the disorder. The disease was defined by, and named after, the radiologist Robert Kienböck.[1] Treatment options for lunate avascular necrosis range from splinting to wrist arthrodesis. The most important factor determining the choice of treatment is the stage of the disease, and various classifications that define the stage of Kienböck’s disease have been reported in the literature.[2-4]
The Lichtman classification system is one of the best-known and most widely applied classifications for Kienböck’s disease.[4] Lichtman and colleagues initially grouped the disease into four stages; a later modification divided stage III into stages IIIA and IIIB.[5] In stage I, plain radiographic findings are normal and the diagnosis can be established by magnetic resonance imaging. Stage II is characterized by increased lunate density with preserved bony architecture. Collapse of the lunate is seen in stage III: carpal stability is retained in stage IIIA, while stage IIIB is characterized by carpal instability. At the final stage, stage IV, pancarpal arthrosis is observed.
Classifications allow us to speak a common language, enabling similar treatment plans and thus faster treatment. For this to work, however, the reliability of a classification must be high. In this study, we aimed to investigate the reliability of the Lichtman classification among residents, orthopedic surgeons, and hand surgeons.
Patients and Methods
This study was conducted at Pamukkale University Faculty of Medicine between September 2018 and February 2019. The study was carried out with 30 male observers (mean age 37.8 years; range, 26 to 62 years), all of whom were orthopedic surgeons. The observers were separated into three groups consisting of 10 residents (group 1), 10 general orthopedic surgeons (group 2), and 10 certified hand surgeons (group 3). The anteroposterior and lateral wrist radiographs of 20 patients (12 males, 8 females; mean age 49 years; range, 38 to 74 years) diagnosed with Kienböck’s disease were sent to the observers via an e-mail survey. All radiographs belonged to adult patients and none were postoperative. After brief information about the Lichtman classification was provided in the survey, the observers were asked to classify the 40 radiographs. Each question offered five options: stage I, stage II, stage IIIA, stage IIIB, and stage IV; the observers were asked to mark only one option per question. As the radiographs were taken from the hospital archive, ethics committee approval was not required for the study. A written informed consent was obtained from each patient. The study was conducted in accordance with the principles of the Declaration of Helsinki.
Statistical analysis
For statistical analysis, the Number Cruncher Statistical System 2007 software (Kaysville, Utah, USA) was used. The kappa coefficient was used to determine the reliability of the observers.[6,7] The ranges and corresponding significance of kappa are shown in Table I. Statistical significance was accepted as p<0.05.
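The kappa statistic has a simple closed form, kappa = (p_o − p_e)/(1 − p_e), where p_o is the observed proportion of agreement and p_e the agreement expected by chance from the raters' marginal distributions. A minimal two-rater sketch in Python (the study itself computed multi-rater agreement across 30 observers; the stage labels and ratings below are purely illustrative):

```python
from collections import Counter

STAGES = ["I", "II", "IIIA", "IIIB", "IV"]

def cohen_kappa(rater_a, rater_b, categories=STAGES):
    """Cohen's kappa for two raters assigning one category per radiograph."""
    n = len(rater_a)
    # Observed proportion of agreement.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal proportions, summed
    # over the categories of the staging scale.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Illustrative ratings: two raters stage four radiographs, agreeing on three.
a = ["II", "II", "IIIA", "IIIA"]
b = ["II", "IIIA", "IIIA", "IIIA"]
print(cohen_kappa(a, b))  # p_o = 0.75, p_e = 0.50, so kappa = 0.50
```

By the Landis and Koch ranges cited above,[6] a value of 0.50 would fall in the moderate band (0.41-0.60).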
Results
The classification of Kienböck’s disease was analyzed by 30 observers on 40 digital radiographs. The distribution of the groups’ survey responses by Lichtman classification is shown in Figures 1-3. The overall agreement with the Lichtman classification was fair across all observers. When the groups were evaluated separately, the agreement level was poor in group 1 and fair in groups 2 and 3. The agreement of all observers and of each group is shown in Table II. When the agreement level was evaluated according to the stages of the disease, agreement for stages I and IV increased to moderate in group 3. The agreement of all observers and of each group according to the stages of the disease is shown in Table III.
Discussion
The aim of this study was to investigate the reliability of the Lichtman classification among residents, orthopedic surgeons, and hand surgeons. The agreement level was found to be low both across all observers and within each group separately.[8] The reliability of the Lichtman classification has been investigated in the literature before. Jensen et al.[9] investigated the reliability of the non-modified four-stage Lichtman classification: a total of 76 radiographs of 38 patients were assessed by one hand surgeon, one orthopedist, and one orthopedic surgeon with an interest in hand surgery. They reported moderate interobserver agreement, with kappa values ranging from 0.45 to 0.52; the kappa value decreased to 0.34 when the observers evaluated the radiographs according to the five-stage Ståhl classification. However, other studies in the literature did not confirm these results.[10-12] These studies were all performed with four observers, the radiographs were evaluated according to the Lichtman classification, and kappa values were reported to be between 0.62 and 0.71.
Statistically significantly higher kappa values were observed in the hand surgeons group for Lichtman stages I and IV. Since the hand surgeons were the most experienced group, such a result was not surprising; interestingly, however, the agreement level did not exceed moderate even in this group. The non-modified four-stage Lichtman classification was not used in this study; had it been, the agreement level would probably have been higher owing to the reduced number of options. Additionally, we did not use oblique radiographs or radioscaphoid angle measurements as performed by Goldfarb et al.,[10] who reported that interobserver reliability increases with the use of oblique radiographs and the radioscaphoid angle. We agree with these authors and suggest that the Lichtman classification alone is insufficient and should be supported by other imaging and measurement techniques.
This study has some strong aspects that may make it more powerful than other studies in the literature. The number of observers was significantly higher than in other studies. Furthermore, all of the observers worked in an orthopedic surgery department, which is distinct from other studies. The observers were separated into three groups according to their proficiency, so the groups were designed to be as homogeneous as possible. Moreover, the observers answered the survey by e-mail without being influenced by other observers or by the researchers of the study. Finally, the reliability of the observers was determined with both kappa and Gwet’s agreement coefficient (Gwet’s AC) values.
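Gwet's first-order coefficient (AC1) replaces kappa's chance term with one based on the average marginal proportion of each category, which makes it less sensitive to skewed stage distributions.[7] A minimal two-rater sketch under illustrative data (the study's own computation covered all 30 observers):

```python
def gwet_ac1(rater_a, rater_b, categories):
    """Gwet's first-order agreement coefficient (AC1) for two raters."""
    n, k = len(rater_a), len(categories)
    # Observed proportion of agreement.
    p_a = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    # Chance term built from the average marginal proportion per category.
    pi = [(rater_a.count(c) + rater_b.count(c)) / (2 * n) for c in categories]
    p_e = sum(p * (1 - p) for p in pi) / (k - 1)
    return (p_a - p_e) / (1 - p_e)

stages = ["I", "II", "IIIA", "IIIB", "IV"]
a = ["II", "II", "IIIA", "IIIA"]
b = ["II", "IIIA", "IIIA", "IIIA"]
print(round(gwet_ac1(a, b, stages), 3))  # 0.717, vs. Cohen's kappa of 0.50
```

On the same ratings, AC1 exceeds kappa because its chance-agreement term is smaller when the category marginals are concentrated, the high-agreement situation Gwet's coefficient was designed for.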
However, there were some limitations to this study. Firstly, only interobserver agreement was evaluated; therefore, no conclusions could be established about intraobserver agreement. Secondly, because the study was conducted via an e-mail survey, the authors were unable to control the conditions under which the survey was completed. All wrist radiographs were sent to the observers as digital X-rays, and variables such as radiation dose, exposure distance, and resolution of the radiographs could not be kept constant. Moreover, no other classification systems were included as controls; therefore, we could not establish the superiority or inferiority of the Lichtman classification compared to other classification systems. Finally, a classification should also guide correct treatment planning; unfortunately, the effect of the Lichtman classification on treatment was not evaluated in the current study.
In conclusion, a useful classification system in orthopedics and traumatology should classify the type of musculoskeletal disorder reliably, facilitate communication in clinical practice, guide preoperative planning, and enable comparison of results between studies. We found only a fair level of agreement between observers when we evaluated the reliability of the modified Lichtman classification.
Citation: Aydemir AN, Yücens M, Cansu CE, Demirkan AF. Are plain radiographs reliable in Lichtman classification? Jt Dis Relat Surg 2020;31(1):34-38.
The authors declared no conflicts of interest with respect to the authorship and/or publication of this article.
The authors received no financial support for the research and/or authorship of this article.
References
- Wagner JP, Chung KC. A historical report on Robert Kienböck (1871-1953) and Kienböck's Disease. J Hand Surg Am 2005;30:1117-21.
- Bain GI, Begg M. Arthroscopic assessment and classification of Kienbock’s disease. Tech Hand Up Extrem Surg 2006;10:8-13.
- Decoulx P, Marchand M, Minet P, Razemon JP. Kienbock's disease in miners; clinical & pathogenic study with analysis of 1330 x-rays of the wrist. Lille Chir 1957;12:65-81.
- Lichtman DM, Mack GR, MacDonald RI, Gunther SF, Wilson JN. Kienböck's disease: the role of silicone replacement arthroplasty. J Bone Joint Surg Am 1977;59:899-908.
- Alexander AH, Turner MA, Alexander CE, Lichtman DM. Lunate silicone replacement arthroplasty in Kienböck's disease: a long-term follow-up. J Hand Surg Am 1990;15:401-7.
- Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics 1977;33:159-74.
- Gwet KL. Computing inter-rater reliability and its variance in the presence of high agreement. Br J Math Stat Psychol 2008;61:29-48.
- Atik OŞ. Is there something new and interesting in my article? Eklem Hastalik Cerrahisi 2019;30:69.
- Jensen CH, Thomsen K, Holst-Nielsen F. Radiographic staging of Kienböck's disease. Poor reproducibility of Ståhl's and Lichtman's staging systems. Acta Orthop Scand 1996;67:274-6.
- Goldfarb CA, Hsu J, Gelberman RH, Boyer MI. The Lichtman classification for Kienböck's disease: an assessment of reliability. J Hand Surg Am 2003;28:74-80.
- Jafarnia K, Collins ED, Kohl HW 3rd, Bennett JB, Ilahi OA. Reliability of the Lichtman classification of Kienböck's disease. J Hand Surg Am 2000;25:529-34.
- Goeminne S, Degreef I, De Smet L. Reliability and reproducibility of Kienbock's disease staging. J Hand Surg Eur Vol 2010;35:555-7.