Advancing explainable AI in healthcare: Necessity, progress, and future directions
Abstract
During treatment planning, clinicians typically aim to understand the shape of the liver so as to minimize harm to the surrounding healthy tissue and hepatic vessels; constructing a precise geometric model of the liver is therefore crucial. Over the years, various methods for liver image segmentation have emerged, with machine learning and computer vision techniques rapidly gaining popularity due to their automation, suitability, and impressive results. Artificial Intelligence (AI) uses systems and machines that emulate human intelligence to address real-world problems. Recent advancements in AI have led to widespread industrial adoption, with machine learning systems achieving superhuman performance on numerous tasks. However, the inherent opacity of these systems has hindered their adoption in sensitive, high-stakes domains such as healthcare, where their potential value is immense. This study focuses on the interpretability of machine learning methods, presenting a literature review and taxonomy as a reference for both theorists and practitioners. The paper systematically reviews explainable AI (XAI) approaches from 2019 to 2023. The provided taxonomy aims to serve as a comprehensive overview of the traits and aspects of XAI methods, catering to beginners, researchers, and practitioners. The review finds that explainable modeling can contribute to trustworthy AI, subject to thorough validation, appropriate data quality, cross-validation, and proper regulation.
URI
https://www.sciencedirect.com/science/article/pii/S1476927125002609
Collections
- Medicine Research [1891 items]