Show simple item record

Author: Mohapatra, Rashmita Kumari
Author: Jolly, Lochan
Author: Dakua, Sarada Prasad
Available date: 2025-08-31T12:17:47Z
Publication Date: 2025-07-26
Publication Name: Computational Biology and Chemistry
Identifier: http://dx.doi.org/10.1016/j.compbiolchem.2025.108599
Citation: Mohapatra, R. K., Jolly, L., & Dakua, S. P. (2025). Advancing explainable AI in healthcare: Necessity, progress, and future directions. Computational Biology and Chemistry, 108599.
ISSN: 1476-9271
URI: https://www.sciencedirect.com/science/article/pii/S1476927125002609
URI: http://hdl.handle.net/10576/66939
Abstract: Clinicians typically aim to understand the shape of the liver during treatment planning so that harm to the surrounding healthy tissues and hepatic vessels can be minimized; constructing a precise geometric model of the liver is therefore crucial. Over the years, various methods for liver image segmentation have emerged, with machine learning and computer vision techniques gaining rapid popularity owing to their automation, suitability, and impressive results. Artificial Intelligence (AI) leverages systems and machines to emulate human intelligence in addressing real-world problems. Recent advancements in AI have led to widespread industrial adoption, with machine learning systems showing superhuman performance on numerous tasks. However, the inherent opacity of these systems has hindered their adoption in sensitive yet critical domains such as healthcare, where their potential value is immense. This study focuses on the interpretability of machine learning methods, presenting a literature review and taxonomy as a reference for both theorists and practitioners. The paper systematically reviews explainable AI (XAI) approaches from 2019 to 2023. The taxonomy is intended as a comprehensive overview of the traits and aspects of XAI methods, catering to beginners, researchers, and practitioners. It is found that explainable modeling could contribute to trustworthy AI, subject to thorough validation, appropriate data quality, cross-validation, and proper regulation.
Sponsor: This work was supported by the Medical Research Center, Hamad Medical Corporation, Doha, Qatar, under Grant MRC-01-19-327. Open access funding was provided by Qatar National Library.
Language: en
Publisher: Elsevier
Subject: Liver segmentation; Machine learning; Artificial intelligence; Liver; Tumor
Title: Advancing explainable AI in healthcare: Necessity, progress, and future directions
Type: Article
Volume Number: 119
Open Access user License: http://creativecommons.org/licenses/by/4.0/
ESSN: 1476-928X
dc.accessType: Full Text

