AccScience Publishing / ASH / Online First / DOI: 10.36922/ASH025520003
REVIEW ARTICLE

Explainable artificial intelligence for smart and ethical healthcare

Sergey M. Avdoshin1 Elena Yu. Pesotskaya2*
1 School of Computer Engineering, HSE Tikhonov Moscow Institute of Electronics and Mathematics, National Research University Higher School of Economics, Moscow, Russia
2 School of Software Engineering, Faculty of Computer Science, National Research University Higher School of Economics, Moscow, Russia
Received: 26 December 2025 | Revised: 27 January 2026 | Accepted: 2 February 2026 | Published online: 24 February 2026
© 2026 by the Author(s). This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0) (https://creativecommons.org/licenses/by-nc/4.0/)
Abstract

SmartHealth technologies are evolving rapidly, and the emerging Medicine 5.0 paradigm highlights the need for artificial intelligence that pairs high performance with explainability, transparency, and ethical soundness. However, many neural-network approaches remain “black boxes,” limiting their uptake in clinical practice, where justification and trust are essential. This article reviews applications of explainable artificial intelligence in diagnosis, monitoring of chronic conditions, and clinical decision support, with particular attention to semantic and ontological interoperability, user-centered explanations, and the ethics of personalization. We pair a critical review with a proposed hybrid framework for trustworthy explainable artificial intelligence in healthcare that integrates neural representations with logical rules, delivering role-adaptive, interactive explanations for clinicians and patients.
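The hybrid idea sketched in the abstract can be illustrated with a minimal toy example: a numeric score from a learned model is paired with symbolic rules, and the fired rules are phrased differently for clinicians and patients. Everything below is a hypothetical sketch; the feature names, thresholds, rules, and scoring function are illustrative assumptions, not the framework proposed in the article.

```python
# Illustrative neuro-symbolic explanation step (all names and thresholds
# are hypothetical, not taken from the reviewed framework).
from dataclasses import dataclass


@dataclass
class Finding:
    name: str
    value: float


# Symbolic layer: (feature, threshold, clinician phrasing, patient phrasing).
RULES = [
    ("hba1c", 6.5,
     "HbA1c >= 6.5% meets a common diabetes criterion",
     "your long-term blood sugar is above the usual limit"),
    ("systolic_bp", 140.0,
     "systolic BP >= 140 mmHg suggests hypertension",
     "your blood pressure reading is high"),
]


def neural_score(findings: list[Finding]) -> float:
    """Stand-in for a trained network: a weighted sum squashed into [0, 1]."""
    return min(1.0, sum(f.value / 200.0 for f in findings))


def explain(findings: list[Finding], role: str) -> dict:
    """Attach the symbolic rules that fire to the numeric score,
    phrased according to the recipient's role."""
    values = {f.name: f.value for f in findings}
    reasons = [
        doc_text if role == "clinician" else pat_text
        for feat, threshold, doc_text, pat_text in RULES
        if values.get(feat, 0.0) >= threshold
    ]
    return {"risk": round(neural_score(findings), 2), "reasons": reasons}


if __name__ == "__main__":
    case = [Finding("hba1c", 7.1), Finding("systolic_bp", 150.0)]
    print(explain(case, "clinician"))
    print(explain(case, "patient"))
```

A real system would replace `neural_score` with a trained model and `RULES` with an ontology- or guideline-derived rule base; the key point the sketch shows is that the same fired rule can be rendered in role-adapted language without changing the underlying prediction.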

Keywords
Explainable artificial intelligence
Neuro-symbolic artificial intelligence
Explainability
SmartHealth
Funding
None.
Conflict of interest
The authors declare they have no competing interests.
Advanced SmartHealth, Published by AccScience Publishing