Healthc Inform Res. 2023 Oct;29(4):315-322. doi: 10.4258/hir.2023.29.4.315.

Requirements for Trustworthy Artificial Intelligence and its Application in Healthcare

Affiliations
  • 1Healthcare Innovation Park, Center for Artificial Intelligence in Healthcare, Seoul National University Bundang Hospital, Seongnam, Korea
  • 2Department of Internal Medicine, Seoul National University Bundang Hospital, Seongnam, Korea
  • 3Department of Internal Medicine, Seoul National University College of Medicine, Seoul, Korea

Abstract


Objectives
Artificial intelligence (AI) technologies are developing rapidly in the medical field, but they have yet to see wide use in actual clinical settings. Ensuring reliability is essential for disseminating these technologies, which requires a broad program of research and a subsequent social consensus on the requirements for trustworthy AI.
Methods
This review divided the requirements for trustworthy medical AI into explainability, fairness, privacy protection, and robustness, investigated research trends in the literature on AI in healthcare, and explored the criteria for trustworthy AI in the medical field.
Results
Explainability gives healthcare providers a basis for deciding whether to rely on the output of an AI model, and achieving it requires further development of explainable AI techniques, evaluation methods, and user interfaces. For AI fairness, the primary task is to identify evaluation metrics suited to the medical field. For privacy and robustness, further technological development is needed, especially for defending training data and AI algorithms against adversarial attacks, as the sketch below illustrates.
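As a concrete illustration of the adversarial attacks mentioned above, the following minimal Python sketch applies the fast gradient sign method (FGSM) to a toy logistic model; the weights, input, and perturbation budget are hypothetical and chosen only for illustration, not taken from the article.

```python
import numpy as np

# Hypothetical logistic model and patient feature vector (illustration only).
w = np.array([0.8, -1.2, 0.5])   # model weights
x = np.array([1.0, 0.5, -0.3])   # clean input features
y = 1.0                          # true label
eps = 0.1                        # per-feature perturbation budget

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient of the cross-entropy loss with respect to the input x.
grad_x = (sigmoid(w @ x) - y) * w

# FGSM: take one step in the sign of the gradient to increase the loss.
x_adv = x + eps * np.sign(grad_x)

print(f"clean prediction:       {sigmoid(w @ x):.3f}")    # ~0.512
print(f"adversarial prediction: {sigmoid(w @ x_adv):.3f}")  # ~0.450
```

Even this small perturbation pushes the predicted probability across the 0.5 decision threshold, which is why defenses such as adversarial training are an active area of research for robust medical AI.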
Conclusions
In the future, detailed standards should be established according to the problems that a medical AI system addresses and the clinical field in which it is used. These criteria should also be reflected in AI-related regulations, such as AI development guidelines and the approval process for medical devices.

Keywords

Artificial Intelligence, Machine Learning, Healthcare Disparity, Trust, Guideline

Figure

  • Figure 1 Conflicts between measures of collective fairness.
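The conflict that Figure 1 depicts can be made concrete with a small numeric sketch (hypothetical numbers, not drawn from the article): when disease prevalence differs between patient groups, a classifier that satisfies demographic parity (equal positive-prediction rates) generally cannot also satisfy equalized odds (equal true- and false-positive rates).

```python
# Hypothetical prevalence of disease in two patient groups.
prevalence = {"A": 0.30, "B": 0.10}
selection = 0.20  # equal positive-prediction rate enforced in both groups

for group, p in prevalence.items():
    # Best case for accuracy: flag true positives first, up to the budget.
    tpr = min(selection / p, 1.0)
    # Any remaining budget spills onto negatives, raising the FPR.
    fpr = max(selection - p, 0.0) / (1.0 - p)
    print(f"Group {group}: TPR = {tpr:.2f}, FPR = {fpr:.2f}")

# Group A: TPR = 0.67, FPR = 0.00
# Group B: TPR = 1.00, FPR = 0.11
```

Even under this best-case ranking, equal selection rates force unequal error rates, so choosing which fairness measure to prioritize is a clinical and policy decision, not a purely technical one.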

