Debating artificial intelligence in patients’ care: A double-edged sword

*Corresponding author: Thuraiya Al Harthi, The Royal Hospital, Muscat, Oman. thuraiya.alharthi@moh.gov.om
How to cite this article: Al Harthi T, Al Fannah J, Al Dahri R, Al Adawi M, Al Araimi R, Al Shehhi WA, et al. Debating artificial intelligence in patients’ care: A double-edged sword. World Adv Renal Med. 2025;1:97-101. doi: 10.25259/WARM_18_2025
Abstract
The incorporation of artificial intelligence (AI) into patient diagnosis and management is a topic of significant debate, presenting both groundbreaking opportunities and substantial challenges. This manuscript summarizes the proceedings of a debate held at the Royal Hospital, a central tertiary care facility in Oman, exploring the multifaceted impact of AI on healthcare, specifically patient care. The proponents highlighted AI’s potential to enhance diagnostic accuracy, personalize treatment plans, and improve overall healthcare efficiency. Conversely, the opponents raised critical concerns about ethical issues, patient privacy, potential errors, and the risk of job displacement among healthcare professionals. Through a balanced critique of both perspectives, this discussion aims to provide a comprehensive understanding of the benefits and drawbacks of AI in patient care, emphasizing the importance of careful implementation, ethical considerations, and ongoing research to ensure that AI technology augments, rather than compromises, the quality of healthcare delivery.
Keywords
Artificial intelligence
Ethical considerations
Healthcare technology
Healthcare delivery
Patient care
INTRODUCTION
Artificial intelligence (AI) is rapidly transforming various industries, and the healthcare sector is no exception. AI involves a broad spectrum of technologies, including machine learning (ML), natural language processing, and robotics, which can analyze complex medical data, assist in diagnostics, and optimize treatment plans. The integration of AI into patient care promises to revolutionize healthcare delivery by enhancing precision, efficiency, and personalization.[1]
AI’s potential in healthcare is broad and significant. For instance, AI algorithms have been developed to assist in diagnosing diseases such as cancer, where ML models can analyze imaging data to detect anomalies with a high degree of accuracy.[2] In addition, AI can personalize treatment plans by analyzing patient-specific data, thereby tailoring therapies to individual genetic profiles and improving treatment outcomes.[3]
Despite its potential benefits, the use of AI in patient care raises several ethical and practical concerns. Issues such as patient privacy, data security, and the potential for algorithmic bias are critical considerations. There is also apprehension about the accountability and transparency of AI systems, especially when they operate autonomously or make critical decisions.[4] Moreover, the introduction of AI could disrupt the healthcare workforce, leading to job displacement, altering the roles of healthcare professionals, and fostering excessive dependence on technology.[5]
Given these opportunities and challenges, a balanced debate on the role of AI in patient care was considered essential. This debate aimed to explore the benefits and drawbacks of AI integration in healthcare, considering both current evidence and future implications. By examining both sides of the argument, we hope to provide a broad perspective that can guide policymakers, healthcare providers, and researchers in making informed decisions about the use of AI in patient care.
DEBATE PROCEEDINGS
Arguments raised by AI advocates
The application of AI in the healthcare industry has become highly valuable and promising, and could help mitigate many of the struggles and challenges faced by healthcare systems.[6] In the area of clinical diagnostics, a systematic review and meta-analysis of 14 studies found that the diagnostic performance of AI on imaging was at least comparable to that of healthcare providers: pooled sensitivity was 87.0% (95% confidence interval [CI]: 83.0%-90.2%) for deep learning models versus 86.4% (95% CI: 79.9%-91.0%) for healthcare professionals, and pooled specificity was 92.5% (95% CI: 85.1%-96.4%) for deep learning models versus 90.5% (95% CI: 80.6%-95.7%) for healthcare professionals.[7] Real-world examples abound; a specific one is tumor detection in radiology. Tumors can be detected quickly and safely on imaging modalities such as computed tomography and magnetic resonance imaging, and ML and AI have shown promise in automating this detection across various modalities.[8] In improving healthcare services, AI can assist in scheduling appointments, offer mental health support, provide remote monitoring, and reply to patients’ enquiries.[9,10] AI can benefit the entire healthcare community, including doctors, clinicians, nurses, administrative staff, and managers, by assisting with tasks ranging from simple ones, such as discharge summary documentation, to performing sophisticated surgeries.
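For readers less familiar with the metrics quoted above, sensitivity and specificity are derived directly from a model's confusion matrix. The following minimal sketch illustrates the arithmetic with hypothetical screening counts (the numbers are invented for illustration and are not taken from the cited meta-analysis):

```python
# Illustrative calculation of sensitivity and specificity from a
# hypothetical confusion matrix. All counts below are made-up examples.

def sensitivity(tp: int, fn: int) -> float:
    """True positive rate: proportion of diseased cases correctly flagged."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True negative rate: proportion of healthy cases correctly cleared."""
    return tn / (tn + fp)

# Hypothetical results: 100 diseased and 100 healthy patients screened.
tp, fn = 87, 13   # diseased: 87 detected, 13 missed
tn, fp = 92, 8    # healthy: 92 cleared, 8 falsely flagged

print(f"Sensitivity: {sensitivity(tp, fn):.1%}")  # → Sensitivity: 87.0%
print(f"Specificity: {specificity(tn, fp):.1%}")  # → Specificity: 92.0%
```

A model with high sensitivity misses few true cases, while high specificity limits false alarms; the meta-analysis above pools both metrics across studies.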
For those concerned that AI will override the individuality of healthcare providers, scientific evidence supports a cultural shift toward viewing AI as a healthcare delivery enhancer and job creator, rather than a threat.[11] AI will instead allow providers, clinicians, and staff to focus on top-of-license skill sets and activities. Based on current statistics, the world will face a deficit of 18 million healthcare professionals by 2030.[12,13] We hold the view that AI amplifies and augments, rather than replaces, human intelligence.
AI-powered electronic health records (EHR) are another innovation with the potential to optimize healthcare workflow through technologies such as speech recognition, automated scheduling, and the creation of accurate medical records. In one observational study, medical residents spent an average of only 1.7 h in direct patient contact during a shift, compared with 5.2 h using computers.[14] This alarming pattern has been observed in several other studies and is reported as a frequent contributor to physician burnout.[15,16] AI innovations can potentially empower healthcare workers to overcome the inefficiencies of EHR systems and provide more direct patient care with a deeper level of attention to detail.
For those with ethical concerns about the use of AI, frameworks are being introduced as ethics tools, among them the human-centered AI canvas.[17] Published regulations already exist to ensure and sustain patient safety and ethical practice in the use of AI.[18-21] The European Union took a regulatory stand by publishing the European Medical Device Regulation, and has since introduced the AI Act to govern AI products and services from the development phase through deployment, addressing risk management, data governance, human oversight, transparency, accuracy, robustness, and cybersecurity.[21] In 2019, the National Institute for Health and Care Excellence, in collaboration with the National Health Service, published the Evidence Standards Framework for Digital Health Technologies.[22]
Given the compelling evidence supporting the efficacy and safety of AI in patient care, the integration of AI technologies holds significant promise for enhancing healthcare outcomes. Therefore, it is essential to advocate for the adoption and utilization of AI in patient care to realize its potential benefits fully.
Arguments raised by AI opponents
Despite the potential benefits of AI in improving medical outcomes, the lack of transparency in AI algorithms poses significant challenges to their adoption and trustworthiness. AI in healthcare raises significant ethical concerns related to data privacy and security. Sensitive health data can be vulnerable to breaches, and current regulatory frameworks may not adequately address the complexities introduced by AI technologies.[23] In addition, AI systems can perpetuate and exacerbate existing biases present in their training data, leading to discriminatory practices.[24] This can result in unequal healthcare outcomes, particularly for minority and marginalized populations. The potential for hacking and misuse of sensitive patient information further exacerbates these concerns, emphasizing the need for data protection measures.[25]
A Finnish court sentenced 26-year-old hacker Aleksanteri Kivimäki to 6 years and 3 months in prison for hacking tens of thousands of patient records at Vastaamo psychotherapy center and demanding ransom from patients.[26] The 2020 data breach, which led to the filing of around 24,000 criminal complaints, revealed significant privacy violations and caused severe psychological harm, including suicides among some victims. Kivimäki, extradited from France in 2023, was found guilty of aggravated data breach, blackmail attempts, and dissemination of private information.[26]
Integrating AI into medical practice risks diminishing the essential human elements of empathy and compassion in patient care.[27,28] These human attributes are crucial for patient trust and effective treatment, particularly in fields such as psychiatry and pediatrics, where emotional support is vital.[28] AI cannot exhibit genuine empathy or provide emotional support, which are essential for holistic patient care. The human touch and voice remain irreplaceable in offering hope and assurance during treatment, highlighting the limitations of AI in replicating these critical aspects of healthcare.[29]
Furthermore, the widespread adoption of AI in healthcare could exacerbate social inequalities, as not all communities have equal access to these advanced technologies.[30] This could widen the gap between developed and developing countries and among different socio-economic groups within countries. AI’s ability to perform tasks traditionally done by healthcare professionals also raises concerns about mechanical failures and errors. AI systems, especially in robot-assisted surgeries, can experience malfunctions, leading to severe consequences such as accidental injuries or nerve damage.[31] Thus, while AI holds promise for enhancing healthcare, these significant ethical, practical, and social challenges must be carefully addressed.[32]
Google developed an AI algorithm for breast cancer screening, which demonstrated promising performance but lacked transparency in its development and technical details.[33-35] This lack of transparency raises concerns about the usefulness and safety of such AI tools, prompting calls for greater transparency in medical AI. Transparency in AI is described in terms of traceability and explainability. Traceability involves documenting the entire AI development process and monitoring its performance after deployment, while explainability refers to providing transparency for each AI prediction and decision. However, existing AI tools often lack full traceability and are described as “black box AI,” making it difficult for clinicians to understand their decision-making processes and identify errors.[36,37]
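The distinction between a "black box" and an explainable prediction can be made concrete with a toy example. The sketch below shows per-prediction explainability for a simple linear risk score: alongside the output probability, it reports each feature's additive contribution to the score. All feature names, weights, and patient values are hypothetical illustrations, not any deployed clinical model:

```python
# Toy sketch of per-prediction explainability for a linear risk score.
# Weights, features, and patient values are invented for illustration only.
import math

WEIGHTS = {"age": 0.03, "smoker": 0.8, "bmi": 0.05}  # hypothetical coefficients
BIAS = -3.0

def predict_with_explanation(patient: dict) -> tuple[float, dict]:
    """Return a risk probability together with each feature's additive
    contribution to the underlying score, so a clinician can inspect
    what drove the model's output rather than facing a black box."""
    contributions = {f: WEIGHTS[f] * patient[f] for f in WEIGHTS}
    score = BIAS + sum(contributions.values())
    prob = 1 / (1 + math.exp(-score))  # logistic link maps score to probability
    return prob, contributions

prob, why = predict_with_explanation({"age": 60, "smoker": 1, "bmi": 28})
# `why` shows, e.g., that age contributed 1.8 and smoking 0.8 to the score,
# whereas a black-box model would return only `prob`.
```

Deep learning models in clinical imaging are far more complex than this linear sketch, which is precisely why post hoc explanation methods and full traceability documentation are demanded for "black box AI".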
Other issues surrounding the use of AI in healthcare are deeply intertwined with concerns about informed consent, data privacy, and cybersecurity. Informed consent, a fundamental aspect of patient autonomy and ethical healthcare practice, has become increasingly complex in the digital age, where AI algorithms and opaque decision-making processes may limit patients’ understanding and control over their health data.[38,39]
Opaque AI algorithms and complicated consent forms hinder patient autonomy and shared decision-making, making it difficult for patients to comprehend how their data is used and to opt out of sharing it. This complexity is exemplified by the case of the Royal Free NHS Trust’s unauthorized transfer of patient records to DeepMind in 2016, which led to a breach of data protection laws and raised questions about the erosion of privacy rights in the pursuit of innovation.[40]
Moreover, the use of AI in healthcare introduces risks of data security breaches, as demonstrated by incidents such as the Cense AI data breach in 2020, which exposed sensitive patient information to the public. Data repurposing, or “function creep,” poses another concern, as health-related data may be used for non-healthcare purposes, compromising patient privacy and autonomy.[41-43] AI tools in healthcare are vulnerable to cyberattacks, with potentially dire consequences for patient safety. Incidents such as the cyberattack on Düsseldorf University Hospital in 2020, which resulted in a patient’s death, underscore the urgent need for robust cybersecurity measures to protect against such threats.[44] Similarly, ransomware attacks on healthcare systems, such as the Elekta attack in 2021, can disrupt patient care and compromise sensitive medical data.[45] Even personal medical devices controlled by AI, such as insulin pumps, are susceptible to hacking, raising concerns about the safety and integrity of AI-driven healthcare technologies.[44,45]
Moving forward
Given the complexity of integrating AI into patient care, a strategic and nuanced approach to its implementation is essential. Policymakers, healthcare providers, and researchers must collaborate to develop clear guidelines that maximize AI’s potential benefits while addressing its ethical, safety, and operational challenges. A key concern is AI hallucination (inaccurate or fabricated AI-generated output), which can have serious consequences for patient care if left unchecked.[46] This accentuates the need for healthcare professionals to fully understand the risks posed by inaccurate information and to exercise critical judgment when applying AI in clinical contexts. Establishing a robust regulatory framework is essential, encompassing strong data protection measures, transparency, accountability, and inclusivity for all patient populations, alongside raising awareness of privacy risks.[47] The use of open-access AI tools heightens these risks, as the vulnerability of healthcare data to breaches increases substantially.[48] Continuous research is essential to refine AI technologies, enhance cybersecurity, and ensure their ethical deployment in healthcare. Investing in AI education for healthcare professionals is crucial for seamlessly integrating these technologies into clinical practice, enhancing their role alongside human expertise.
CONCLUSION
AI offers significant potential to enhance patient care, but it also introduces ethical and safety challenges. Balancing these benefits and risks requires transparent, evidence-based, and ethically guided implementation. Additionally, sustained evaluation and multidisciplinary collaboration are essential to ensure that patient welfare remains central to AI integration in healthcare.
Author contributions:
TAH and HAH conceptualized the study. All authors equally contributed to the writing and reviewing of the manuscript.
Ethical approval:
Institutional Review Board approval is not required.
Declaration of patient consent:
Patient consent was not required as no patients were involved in this study.
Conflicts of interest:
There are no conflicts of interest.
Use of artificial intelligence (AI)-assisted technology for manuscript preparation:
The authors confirm that there was no use of artificial intelligence (AI)-assisted technology for assisting in the writing or editing of the manuscript and no images were manipulated using AI.
Financial support and sponsorship: Nil.
References
- The second machine age: Work, progress, and prosperity in a time of brilliant technologies. United States: W. W. Norton and Company; 2014. p. 15-19.
- [Google Scholar]
- Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017;542:115-8.
- [CrossRef] [PubMed] [Google Scholar]
- Artificial intelligence in healthcare: Past, present and future. Stroke Vasc Neurol. 2017;2:230-43.
- [CrossRef] [PubMed] [Google Scholar]
- The ethics of AI in health care: A mapping review. Soc Sci Med. 2020;260:113172.
- [CrossRef] [PubMed] [Google Scholar]
- Chapter 9. Deep medicine: How artificial intelligence can make healthcare human again (1st ed). New York: Basic Books; 2019.
- [Google Scholar]
- Artificial intelligence in healthcare: Transforming the practice of medicine. Future Healthc J. 2021;8:e188-e94.
- [CrossRef] [PubMed] [Google Scholar]
- A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: A systematic review and meta-analysis. Lancet Digit Health. 2019;1:e271-97.
- [CrossRef] [PubMed] [Google Scholar]
- Artificial intelligence for medical diagnostics-existing and future AI technology! Diagnostics (Basel). 2023;13:688.
- [CrossRef] [PubMed] [Google Scholar]
- Ensuring patient and public involvement in the transition to AI-assisted mental health care: A systematic scoping review and agenda for design justice. Health Expect. 2021;24:1072-124.
- [CrossRef] [PubMed] [Google Scholar]
- AI in patient flow: Applications of artificial intelligence to improve patient flow in NHS acute mental health inpatient units. Heliyon. 2021;7:e06993.
- [CrossRef] [PubMed] [Google Scholar]
- A review of recent advances in brain tumor diagnosis based on AI-based classification. Diagnostics (Basel). 2023;13:3007.
- [CrossRef] [PubMed] [Google Scholar]
- The rise of artificial intelligence in healthcare applications. Artif Intell Healthc 2020:25-60.
- [CrossRef] [Google Scholar]
- World will lack 18 million health workers by 2030 without adequate investment, warns UN. BMJ. 2016;354:i5169.
- [CrossRef] [PubMed] [Google Scholar]
- Sleep and alertness in medical interns and residents: An observational study on the role of extended shifts. Sleep. 2017;40:zsx027.
- [CrossRef] [Google Scholar]
- Allocation of internal medicine resident time in a Swiss hospital: A time and motion study of day and evening shifts. Ann Intern Med. 2017;166:579-86.
- [CrossRef] [PubMed] [Google Scholar]
- Allocation of physician time in ambulatory practice: A time and motion study in 4 specialties. Ann Intern Med. 2016;165:753-60.
- [CrossRef] [PubMed] [Google Scholar]
- The impact of electronic health records on time efficiency of physicians and nurses: A systematic review. J Am Med Inform Assoc. 2005;12:505-16.
- [CrossRef] [PubMed] [Google Scholar]
- From ethical AI frameworks to tools: A review of approaches. AI Ethics. 2023;3:699-716.
- [CrossRef] [Google Scholar]
- Ethically aligned design: A vision for prioritizing human well-being with autonomous and intelligent systems. 2019. IEEE. Version 2.1 Available from: https://ethicsinaction.ieee.org. [Last accessed 2025 July 28]
- [Google Scholar]
- Ethics and governance of artificial intelligence for health: WHO guidance. 2021. World Health Organization. Available from: https://iris.who.int/server/api/core/bitstreams/f780d926-4ae3-42ce-a6d6-e898a5562621/content. [Last accessed 2025 July 28]
- [Google Scholar]
- Policy for device software functions and mobile medical applications. 2019. FDA. Available from: https://www.fda.gov/media/80958/download [Last accessed 2025 July 28]
- [Google Scholar]
- Regulation (EU) 2017/745 of the European parliament and of the council of 5 April 2017 on medical devices. Official J Eur Union L. 2017;117:1-175.
- [Google Scholar]
- Evidence standards framework for digital health technologies. 2019. National Health Service. Available from: https://www.nice.org.uk/corporate/ecd7/resources/evidence-standards-framework-for-digital-health-technologies-pdf-1124017457605. [Last accessed 2025 July 28]
- [Google Scholar]
- Ethical issues of artificial intelligence in medicine and healthcare. Iran J Public Health. 2021;50:i-v.
- [CrossRef] [Google Scholar]
- A review of the role of artificial intelligence in healthcare. J Pers Med. 2023;13:951.
- [CrossRef] [PubMed] [Google Scholar]
- Drawbacks of artificial intelligence and their potential solutions in the healthcare sector. Biomed Mater Devices. 2023;1:731-8.
- [CrossRef] [PubMed] [Google Scholar]
- Finnish hacker imprisoned for accessing thousands of psychotherapy records and demanding ransoms. 2024. Associated Press. Available from: https://apnews.com/article/finland/court/hacking/ransom/psychotherapy/center/b03183e5ca66a768d743f7e84b368829. [Last accessed 2025 July 28]
- [Google Scholar]
- The risks of AI in healthcare: Diminishing empathy and compassion. J Med Ethics. 2018;44:337-40.
- [Google Scholar]
- Revolutionizing healthcare: The role of artificial intelligence in clinical practice. BMC Med Educ. 2023;23:689.
- [CrossRef] [PubMed] [Google Scholar]
- The downsides of artificial intelligence in healthcare. Korean J Pain. 2024;37:87-8.
- [CrossRef] [PubMed] [Google Scholar]
- Adverse events in robotic surgery: A retrospective study of 14 years of FDA data. PLoS One. 2016;11:e0151470.
- [CrossRef] [PubMed] [Google Scholar]
- The role of humans in surgery automation. Exploring the influence of automation on human-robot interaction and responsibility in surgery innovation. Int J Soc Robotics. 2023;15:563-80.
- [CrossRef] [Google Scholar]
- Artificial intelligence: What is it? In: Proceedings of the 2020 6th international conference on computer technology applications. 2020. p. 22-5.
- [CrossRef] [Google Scholar]
- Transparency and reproducibility in artificial intelligence. Nature. 2020;586:E14-6.
- [CrossRef] [PubMed] [Google Scholar]
- International evaluation of an AI system for breast cancer screening. Nature. 2020;577:89-94.
- [CrossRef] [PubMed] [Google Scholar]
- Google's breast cancer-detecting AI is a black box. 2020. VentureBeat. Available from: https://venturebeat.com/2020/01/02/googles/breast-cancer-detecting-ai-is-a-black-box [Last accessed 2025 July 28]
- [Google Scholar]
- Explainability for artificial intelligence in healthcare: A multidisciplinary perspective. BMC Med Inform Decis Mak. 2020;20:310.
- [CrossRef] [PubMed] [Google Scholar]
- AI-assisted decision-making in healthcare: The application of an ethics framework for big data in health and research. Asian Bioethics Review. 2019;11:299-314.
- [CrossRef] [PubMed] [Google Scholar]
- Global considerations for informed consent with shared decision-making in the digital age. BMJ Evid Based Med. 2024;29:346-9.
- [CrossRef] [PubMed] [Google Scholar]
- Ethical and legal challenges of artificial intelligence-driven healthcare. In: Artificial intelligence in healthcare. United States: Academic Press; 2020. p. 295-336.
- [CrossRef] [Google Scholar]
- Protecting privacy and transforming COVID-19 case surveillance datasets for public use. Public Health Rep. 2021;136:554-61.
- [CrossRef] [PubMed] [Google Scholar]
- FAIR, ethical, and coordinated data sharing for COVID-19 response: A scoping review and cross-sectional survey of COVID-19 data sharing platforms and registries. Lancet Digit Health. 2023;5:e712-36.
- [CrossRef] [PubMed] [Google Scholar]
- Behavioral responses to a cyber attack in a hospital environment. Sci Rep. 2021;11:19352.
- [CrossRef] [PubMed] [Google Scholar]
- Ransomware attack on Elekta disrupts cancer care. 2021. Medscape. Available from: https://www.medscape.com/viewarticle/950801. [Last accessed 2025 July 28]
- [Google Scholar]
- Optimizing large language models in digestive disease: Strategies and challenges to improve clinical outcomes. Liver Int. 2024;44:2114-24.
- [CrossRef] [PubMed] [Google Scholar]
- Assessing ChatGPT's mastery of Bloom's taxonomy using psychosomatic medicine exam questions: Mixed-methods study. J Med Internet Res. 2024;26:e52113.
- [CrossRef] [PubMed] [Google Scholar]
- Protecting data privacy in the age of AI-enabled ophthalmology. Trans Vis Sci Tech. 2020;9:36.
- [CrossRef] [PubMed] [Google Scholar]

