Design of an Integrated Model Using FusionNet, Graph Neural Networks, and Temporal Convolutional Networks for ACL Tear Diagnosis
Abstract
Anterior cruciate ligament (ACL) tears are common and serious injuries that require accurate diagnosis for proper treatment. Most deep learning (DL) models for ACL diagnosis focus on MRI data alone and omit clinical data, limiting their generalizability across populations and institutions. These models also handle domain shifts across data sources poorly and offer little interpretability, lowering clinicians' trust. In this study, we propose a comprehensive DL system that addresses these limitations through multimodal data fusion, domain adaptation, and explainability. The proposed FusionNet combines a convolutional neural network (CNN) for spatial feature extraction from MRI images with a transformer for processing clinical data, harnessing two complementary sources of information to substantially increase diagnostic accuracy. A graph neural network (GNN) models anatomical relations between knee structures, capturing the interdependencies underlying ACL tears. Domain generalization is achieved with a domain-adversarial neural network (DANN), which learns domain-invariant features that reduce the effect of variations in imaging protocols. Temporal convolutional networks (TCNs) model the temporal dependencies within sequential MRI and clinical data for longitudinal analysis of ACL recovery, predicting recovery outcomes and the risk of re-injury. Explainability is provided by graph-based explainable AI (XAI) methods that identify the influential MRI regions and clinical features behind each prediction. The approach achieves high diagnostic accuracy (93-95%), robust cross-institution generalization (89-91% accuracy), and improved interpretability, positioning it as a clinically viable tool for the diagnosis and prognosis of ACL tears across diverse healthcare settings.
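The multimodal fusion step described above (a CNN branch for MRI and a transformer branch for clinical data, joined before classification) can be sketched at a high level as late fusion of two modality embeddings. This is a minimal NumPy illustration, not the paper's implementation: the embedding dimensions (128 for MRI, 32 for clinical), the random stand-in feature vectors, and the linear-plus-softmax head are all hypothetical choices made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical embeddings (dimensions are illustrative, not from the paper):
# the CNN branch would yield an MRI embedding, the transformer branch a
# clinical-data embedding; random vectors stand in for both here.
mri_features = rng.standard_normal(128)      # stand-in for CNN output
clinical_features = rng.standard_normal(32)  # stand-in for transformer output

# Late fusion: concatenate the two modality embeddings into one vector.
fused = np.concatenate([mri_features, clinical_features])  # shape (160,)

# Illustrative classification head: linear layer + softmax over two
# classes (ACL tear vs. no tear).
W = rng.standard_normal((2, fused.size)) * 0.01
b = np.zeros(2)
logits = W @ fused + b
probs = np.exp(logits - logits.max())
probs /= probs.sum()

print(fused.shape, probs)
```

In a trained system, the two branch encoders and the fused head would be optimized jointly, so each modality's embedding learns to carry information complementary to the other's.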
Article Details

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.