
    Advancing explainable AI in healthcare: Necessity, progress, and future directions

    View/Open
    1-s2.0-S1476927125002609-main.pdf (2.928Mb)
    Date
    2025-07-26
    Author
    Mohapatra, Rashmita Kumari
    Jolly, Lochan
    Dakua, Sarada Prasad
    Abstract
    Clinicians typically aim to understand the shape of the liver during treatment planning so as to minimize harm to the surrounding healthy tissues and hepatic vessels; constructing a precise geometric model of the liver is therefore crucial. Over the years, various methods for liver image segmentation have emerged, with machine learning and computer vision techniques gaining rapid popularity due to their automation, suitability, and impressive results. Artificial Intelligence (AI) leverages systems and machines to emulate human intelligence in addressing real-world problems. Recent advances in AI have led to widespread industrial adoption, with machine learning systems achieving superhuman performance on numerous tasks. However, the inherent opacity of these systems has hindered their adoption in sensitive yet critical domains such as healthcare, where their potential value is immense. This study focuses on the interpretability of machine learning methods, presenting a literature review and taxonomy as a reference for both theorists and practitioners. The paper systematically reviews explainable AI (XAI) approaches from 2019 to 2023. The taxonomy is intended as a comprehensive overview of the traits and aspects of XAI methods, catering to beginners, researchers, and practitioners. It is found that explainable modeling could contribute to trustworthy AI, subject to thorough validation, appropriate data quality, cross-validation, and proper regulation.
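    To make the notion of a post-hoc XAI method concrete, the sketch below illustrates permutation feature importance, one common model-agnostic explanation technique of the kind surveyed in such reviews: shuffle one input feature at a time and measure how much the model's accuracy drops. The toy model and data here are purely illustrative assumptions, not taken from the paper.

    ```python
    import random

    def model(x):
        # Hypothetical "trained" classifier: predicts 1 when feature 0
        # exceeds 0.5; feature 1 is deliberately irrelevant.
        return 1 if x[0] > 0.5 else 0

    X = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]]
    y = [1, 0, 1, 0]

    def accuracy(X, y):
        return sum(model(x) == t for x, t in zip(X, y)) / len(y)

    def permutation_importance(X, y, feature, trials=100, seed=0):
        # Average accuracy drop when the given feature column is shuffled,
        # breaking its relationship with the target.
        rng = random.Random(seed)
        base = accuracy(X, y)
        drops = []
        for _ in range(trials):
            col = [row[feature] for row in X]
            rng.shuffle(col)
            Xp = [row[:feature] + [v] + row[feature + 1:]
                  for row, v in zip(X, col)]
            drops.append(base - accuracy(Xp, y))
        return sum(drops) / trials

    print(permutation_importance(X, y, 0))  # informative feature: positive drop
    print(permutation_importance(X, y, 1))  # irrelevant feature: zero drop
    ```

    A large average drop marks a feature the model genuinely relies on; a near-zero drop marks one it ignores, which is the kind of evidence the abstract argues is needed before such models can be trusted in clinical settings.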
    URI
    https://www.sciencedirect.com/science/article/pii/S1476927125002609
    DOI/handle
    http://dx.doi.org/10.1016/j.compbiolchem.2025.108599
    http://hdl.handle.net/10576/66939
    Collections
    • Medicine Research [‎1891‎ items ]

    Qatar University Digital Hub is a digital collection operated and maintained by the Qatar University Library and supported by the ITS department
