The ethics of using artificial intelligence in medical research
Abstract
The integration of artificial intelligence (AI) technologies into medical research introduces significant ethical challenges that necessitate the strengthening of ethical frameworks. This review highlights privacy, bias, accountability, informed consent, and regulatory compliance as central concerns. AI systems, particularly in medical research, may compromise patient data privacy, perpetuate biases if trained on nondiverse datasets, and obscure accountability owing to their “black box” nature. Furthermore, the complexity of AI’s role may affect patients’ informed consent, as they may not fully grasp the extent of AI involvement in their care. Compliance with regulations such as the Health Insurance Portability and Accountability Act and the General Data Protection Regulation is essential, as is clarity about liability when AI systems err. This review advocates a balanced approach to AI autonomy in clinical decisions, the rigorous validation of AI systems, ongoing monitoring, and robust data governance. Engaging diverse stakeholders is crucial for aligning AI development with ethical norms and addressing practical clinical needs. Ultimately, the proactive management of AI’s ethical implications is vital to ensure that its integration into healthcare improves patient outcomes without compromising ethical integrity.
Introduction
As insights into the relationship between research and technology rapidly evolve, new nuances in ethical concerns have emerged [1]. Historically, ethics in medical research has primarily focused on the security and protection of human subjects. However, with the increasing use of advanced technologies in contemporary research, expanding our ethical considerations has become necessary [2]. This shift requires addressing additional dimensions to ensure enhanced ethical safeguards in human research.
Currently, the forefront of technology, particularly artificial intelligence (AI) and its specific applications such as ChatGPT (generative pre-trained transformer, OpenAI), is becoming increasingly relevant in discussions of ethical issues related to medical research [3,4]. These technologies present unique challenges that necessitate a re-evaluation of ethical frameworks to ensure they are adequately addressed. Furthermore, perceptions of ethical issues in medical research vary across cultural backgrounds and generations of researchers [5-8]. This cultural and generational divide shapes perspectives on medical ethics and, in turn, how ethical issues are handled in research practice.
However, researchers continue to face challenges in evaluating the performance of studies incorporating AI [9-11]. In this review, we examine articles that discuss the ethical issues arising in medical research involving such technologies. We aimed to reflect on these debates and propose potential resolutions (Fig. 1).
Methods
This review examines recent studies addressing ethical issues in medical research. Given the significant impact of these ethical considerations on the direction of medical research, we thoroughly analyzed the norms and strategies related to ethics accountability. Our methodology employed a holistic approach, encompassing all academic disciplines and levels, with a specific focus on sourcing evidence from medical institutions and hospitals. To identify relevant articles, we searched medical databases, including PubMed and Google Scholar, for keywords and themes including AI, ChatGPT, autonomy, privacy, confidentiality, accountability, fairness, regulatory compliance, informed consent, and liability. Our objective was to identify emerging trends, burgeoning areas of interest such as AI, and the evolving focus on ethics in medical research over time.
Ethical challenges in medical AI
1. Privacy and confidentiality issues
The use of AI and representative applications such as ChatGPT in medical research raises several ethical concerns that need to be addressed [12]. Key among these is the privacy and confidentiality of patient data [13-16]. Ensuring that the patient information used to train and operate AI models is handled securely and responsibly is crucial, especially when it involves sensitive health data [17]. This includes paying careful attention to how data are collected, stored, and shared.
2. Accountability issues
The opacity of AI models, particularly deep-learning systems, is also a concern [18]. These “black box” systems often do not provide easy insights into their decision-making processes, which can complicate clinical decision-making and accountability [19]. For AI to be used effectively in medicine, its processes must be transparent and explainable [20-22].
3. Bias and fairness issues
Another critical issue is bias and fairness [23-25]. AI systems can unintentionally perpetuate or even worsen existing biases if trained on nondiverse or nonrepresentative datasets. This could lead to biased treatment recommendations or diagnostic outcomes, disproportionately affecting marginalized groups.
4. Informed consent and patient autonomy issues
Informed consent and patient autonomy are further areas of concern [26,27]. Patients may not fully comprehend the extent of AI’s role in their diagnosis or treatment, potentially affecting their ability to make informed health-related decisions [28]. Additionally, as AI increasingly handles tasks traditionally performed by humans, medical professionals could become overly reliant on AI, potentially leading to the deskilling of the workforce [29,30]. This could diminish healthcare workers’ ability to make nuanced decisions without AI assistance [31,32].
5. Liability issues
Determining liability when AI systems err remains a complex challenge [33]. The question of who is responsible for AI-related misdiagnosis or treatment failure continues to spark debate [34,35]. The use of AI could require redefining standards of care and adjusting the legal definitions of negligence and malpractice [36]. Large language models (LLMs) might produce “hallucinations,” or unreliable outputs, that could mislead clinicians and lead to potential malpractice [37]. LLM-generated advice may be treated similarly to other forms of third-party medical advice, which have historically received mixed acceptance in establishing the legal standard of care.
6. Regulatory compliance issues
Finally, regulatory compliance is another significant issue [38,39]. AI tools must adhere to medical laws and regulations such as the Health Insurance Portability and Accountability Act (HIPAA) in the United States [40], which protects patient privacy, and the General Data Protection Regulation (GDPR) in Europe, which governs data protection [41,42]. However, research environments should be adapted to each nation’s unique medical infrastructure and level of industrial support [43].
Addressing these ethical issues is crucial for the responsible development and application of AI in healthcare to ensure that it enhances patient outcomes without compromising ethical standards or patient trust.
7. Cultural differences and perceptions of ethics in AI and medical research
Cultural differences profoundly influence how doctors, patients, and their families handle ethical dilemmas in medicine [44-46]. A comparative study of medical professionalism in China and the United States highlighted notable cultural distinctions and parallels between the two nations, examining Chinese practices such as family consent and familism alongside the contentious issue of patient autonomy [47]. Furthermore, Western and Asian cultures differ significantly in the emphasis they place on patient rights [48]. Navigating these cultural variances is essential for international collaboration in AI and medical research, ensuring that technologies are developed and utilized in a manner that respects diverse ethical standards and values.
Strategies for resolving ethical issues in AI use in medical research
First, determining the level of autonomy that AI should have in completing tasks is crucial, particularly in relation to the impact of those tasks on patient outcomes [49,50]. The extent of AI autonomy varies significantly depending on the severity and importance of these tasks. Thus, a consensus is needed on the extent to which AI autonomy will be granted in clinical decision-making processes [51]. Ultimately, the role of AI—whether as an assistive aid or an autonomous agent—must first be clearly defined [52]. For example, simpler and well-defined tasks such as administrative duties and data management are more suitable for AI automation [53]. By contrast, complex decision-making tasks that require human empathy and understanding should remain under human control [54,55].
To ensure the accuracy of the decisions made by AI, its performance must be rigorously validated [56]. Clinical trials to evaluate these processes should be conducted before AI systems are used in practice [57]. Once these systems are commercially available and implemented in clinical practice, a continuous monitoring process is crucial to ensure that the AI systems operate as intended [58,59]. Several rigorous methods have been employed to validate AI systems for healthcare applications and to ensure their efficacy and safety [60]. Trial validation involves deploying systems in their intended clinical environments to monitor real-time performance and user interactions. Simulation testing evaluates systems in virtual environments that mimic complex medical scenarios, providing a safe platform for assessing potential risks without endangering patients. Model-centered validation focuses on the AI model itself, using data-driven techniques to verify its predictive accuracy and reliability across diverse clinical datasets. Additionally, expert opinion plays a crucial role as healthcare professionals assess the system’s practicality and adherence to medical standards. These multifaceted validation approaches are critical for integrating reliable AI tools into medical practice and ultimately enhancing patient care and clinical outcomes.
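As an illustration of model-centered validation, the minimal sketch below evaluates a trained classifier separately on data from each external site rather than on a single pooled test set; the model, datasets, and column names are hypothetical, and scikit-learn metrics are assumed for discrimination (area under the curve) and calibration (Brier score).

```python
# Minimal sketch of model-centered validation: evaluate a trained binary classifier
# separately on each external clinical dataset instead of on a single pooled test set.
# `model`, the dataframes, and the column names are hypothetical placeholders.
import pandas as pd
from sklearn.metrics import roc_auc_score, brier_score_loss

def validate_across_sites(model, datasets, feature_cols, label_col="outcome"):
    """Report discrimination (AUC) and calibration (Brier score) for each site."""
    rows = []
    for site, df in datasets.items():
        y_true = df[label_col]
        y_prob = model.predict_proba(df[feature_cols])[:, 1]
        rows.append({
            "site": site,
            "n": len(df),
            "auc": roc_auc_score(y_true, y_prob),
            "brier": brier_score_loss(y_true, y_prob),
        })
    return pd.DataFrame(rows)

# Usage (data loading omitted):
# report = validate_across_sites(model, {"hospital_A": df_a, "hospital_B": df_b}, FEATURES)
```

A marked drop in discrimination or calibration at one site relative to the development data would flag the need for recalibration or retraining before clinical deployment.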
As mentioned previously, regulations and ethical guidelines are essential to ensure that AI tools comply with health privacy laws and meticulously protect patient data. The implementation of AI auditing processes should be guided by ethical standards that prioritize patient welfare and equity [61,62]. Healthcare providers must understand the decision-making processes of AI tools to make informed judgments regarding how AI-influenced outcomes affect patient care [63]. This includes ensuring that patients give informed consent regarding AI involvement in their care, the information it processes, and its influence on their treatment.
To implement these processes in practice, it is vital to revise current consent processes to explicitly inform patients about how AI is used in research and treatment and to detail both the benefits and risks. Consent must be informed and voluntary, with clear options for opting out [64]. Aligning these guidelines with international standards promotes global consistency [65]. Transparency and explainability should be emphasized to enable both clinicians and patients to understand AI decisions, thus building trust and facilitating informed clinical decision-making [21,66,67]. Openly publishing details on AI training processes, datasets, and performance metrics is essential.
Training healthcare professionals is necessary to ensure safe AI applications in patient care [68]. They must learn to use AI tools safely and accurately [69]. As AI is expected to take over many tasks currently performed by healthcare providers, the risk that human skills may diminish remains high. Consequently, health professionals should focus more on areas in which AI systems cannot make accurate final decisions because of the irreplaceable nature of certain tasks [70]. This necessitates clear protocols to define the depth and range of AI involvement in patient care. The entire process involves risks that can affect patient outcomes. Therefore, establishing backup systems for AI tools is crucial to prepare for failure scenarios and to keep the decision-making process traceable [71,72]. A trial-and-error period seems inevitable as AI continues to develop and improve. The role of AI must be adjusted based on real-world requirements and evolving ethical standards. AI systems must enhance healthcare services without compromising patient safety and prognosis.
Compliance with data protection regulations such as the GDPR and HIPAA is essential. Establishing comprehensive data governance frameworks is crucial for dictating the collection, storage, and use of patient data, thereby ensuring anonymity and safeguarding against breaches. Robust anonymization methods are vital for protecting patient information prior to its use in machine learning training. Furthermore, securing patient consent, maintaining data integrity, and implementing secure data management and storage protocols are imperative to adhere to relevant legal standards [73]. Various methods have been introduced to ensure that images remain useful for medical research and diagnosis while removing personally identifiable information to protect patient privacy, such as de-identifying facial features in magnetic resonance images, encrypting patient identifiers, and modifying personal data fields [74,75].
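As a minimal, hedged sketch of such field-level de-identification (the column names and salt handling are illustrative assumptions, not a complete HIPAA Safe Harbor or GDPR implementation), the example below drops direct identifiers, replaces patient IDs with salted one-way hashes, and coarsens birth dates and extreme ages. Note that the salt must be stored separately and securely, and that pseudonymized data may still be regarded as personal data under the GDPR.

```python
# Illustrative field-level de-identification before machine learning training.
# Column names ("patient_id", "name", "birth_date", "age", ...) are hypothetical;
# a real pipeline must satisfy the applicable HIPAA/GDPR de-identification rules.
import hashlib
import pandas as pd

DIRECT_IDENTIFIERS = ["name", "address", "phone", "email"]

def pseudonymize_id(patient_id, salt):
    """Replace a patient ID with a salted one-way hash (pseudonymization, not full anonymization)."""
    return hashlib.sha256((salt + str(patient_id)).encode("utf-8")).hexdigest()[:16]

def deidentify(df, salt):
    out = df.drop(columns=[c for c in DIRECT_IDENTIFIERS if c in df.columns])
    out["patient_id"] = out["patient_id"].map(lambda pid: pseudonymize_id(pid, salt))
    # Coarsen exact birth dates to the year and cap extreme ages to limit re-identification risk.
    out["birth_year"] = pd.to_datetime(out.pop("birth_date")).dt.year
    out.loc[out["age"] > 89, "age"] = 90
    return out
```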
To reduce bias, AI models should be trained on diverse datasets representing various demographics, and regular audits of AI systems for biases and discrepancies in performance across different groups are necessary to ensure broad applicability and fairness [76]. Addressing bias is also necessary for AI systems to deliver trustworthy and fair results, which requires a continuous effort to identify and mitigate potential biases so that the systems remain accurate and equitable [77]. To evaluate fairness, metrics such as a “fairness score” and a “bias index” have been introduced, and fairness certification has been suggested as crucial for the broader acceptance and trustworthiness of AI systems [78].
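The cited fairness score and bias index are specific to that proposal; purely as a generic illustration, the sketch below (with hypothetical column names) computes per-group positive-prediction and true-positive rates and reports simple disparity gaps that a routine bias audit could track across demographic groups.

```python
# Illustrative bias audit: compare model behavior across demographic groups.
# The gap metrics below (demographic parity difference, true-positive-rate gap) are
# generic fairness measures, not the specific "fairness score"/"bias index" cited above.
import pandas as pd

def audit_by_group(df, group_col, y_true="label", y_pred="prediction"):
    """Return per-group positive-prediction and true-positive rates, plus the largest gaps."""
    stats = []
    for group, sub in df.groupby(group_col):
        positives = sub[sub[y_true] == 1]
        stats.append({
            group_col: group,
            "n": len(sub),
            "positive_rate": (sub[y_pred] == 1).mean(),
            "tpr": (positives[y_pred] == 1).mean() if len(positives) else float("nan"),
        })
    report = pd.DataFrame(stats)
    gaps = {
        "demographic_parity_diff": report["positive_rate"].max() - report["positive_rate"].min(),
        "equal_opportunity_gap": report["tpr"].max() - report["tpr"].min(),
    }
    return report, gaps

# Usage: report, gaps = audit_by_group(predictions_df, group_col="ethnicity")
# Large gaps would prompt retraining on more representative data or threshold adjustment.
```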
Clearly defining accountability for AI decisions in clinical settings is essential. This task encompasses AI developers, healthcare providers, and the organizations that deploy AI solutions. AI tools must embody transparency to facilitate proper regulation and meet societal demands for accountability, particularly when unexpected outcomes arise [79]. Legislative initiatives, such as the Algorithmic Accountability Act of 2022 in the United States and the European Union AI Act, propose comprehensive measures; these frameworks mandate that developers extensively evaluate the impact of their AI systems, including potential biases and discriminatory outcomes [80]. There remains a pressing need for precise definitions of AI-related risk and accountability, along with standardized risk governance and management strategies that are applicable across the sector. Five characteristics are essential for AI risk-management methodologies: balance, extendibility, representation, transparency, and long-term orientation. These attributes help ensure that AI systems are accountable, sustainable, and ethically aligned with clinical needs and societal expectations [81].
Encouraging collaboration between technologists, ethicists, healthcare providers, and patients is crucial for a holistic approach to AI development and implementation [16,67,82]. Such collaboration leads to better-designed AI systems that respect ethical norms and are more effective in clinical settings. Although no singular global legal framework specifically governing the use of AI in healthcare exists [83], adopting these strategies can help address the ethical risks associated with deploying AI technologies, such as ChatGPT, in medical research. By implementing these measures, we can enhance the benefits of AI while reducing its potential harm.
Cross-cultural training provides researchers and professionals with workshops, seminars, and programs that promote cultural competence and ethical sensitivity [84,85]. Global ethical standards should be developed to honor local values while upholding international norms. Collaborative international research encourages partnerships to enhance mutual understanding and integrate diverse ethical perspectives [86]. Community engagement involves public consultations and advisory panels to gain local insights, helping researchers grasp cultural nuances [87,88]. Transparent communication ensures information about AI and medical research is clear and culturally appropriate. Ethics reviews should include cultural experts to tackle potential cultural and ethical issues. Finally, adaptive technology design enables the customization of AI systems and research protocols to various cultural settings, supporting multiple languages and flexible interfaces [89].
Engaging a diverse array of stakeholders in the development of policies and guidelines for AI and medical research is crucial. Including ethicists, technologists, patients, and practitioners from varied backgrounds ensures that these guidelines are comprehensive and that AI systems are capable of nuanced decision-making. This inclusive approach is vital for crafting policies that address the multifaceted impacts of AI technologies across different segments of society [90,91]. Moreover, the implementation of robust feedback mechanisms is essential for the ongoing refinement of AI systems. Such mechanisms enable stakeholders to report on how AI applications affect their lives, providing critical insights that can lead to enhancements in both functionality and ethical alignment [92]. Participatory design plays a pivotal role in AI development by involving end users and patients directly in the design and testing phases. This strategy results in innovations that are not only user-friendly but also closely aligned with the diverse needs and values of various user groups [93].
Conclusion
The integration of AI technologies like ChatGPT into medical research offers substantial transformative potential but also poses significant ethical challenges. These include concerns related to privacy, bias, accountability, informed consent, regulatory compliance, and liability. To responsibly deploy AI in healthcare, it is crucial to establish a clear ethical framework that defines AI’s role in clinical decision-making, ensuring it enhances transparency and patient trust.
This entails rigorous validation through clinical trials, ongoing post-implementation monitoring, and the creation of comprehensive data governance frameworks that prioritize privacy and security. Moreover, developing diverse datasets is essential to reduce bias and promote equitable healthcare outcomes. Engaging a wide range of stakeholders, including technologists, ethicists, healthcare providers, and patients, will help ensure that AI systems are ethically aligned and meet actual clinical needs.
Ultimately, by maintaining high ethical standards and fostering a collaborative development environment, AI can be leveraged to make significant advances in healthcare that are both innovative and ethically responsible.
The future of AI in healthcare is poised to significantly enhance clinical decision-making, patient care, and operational efficiency, but it requires careful management to address potential ethical, regulatory, and practical challenges. Overall, the prospective view is one of cautious optimism, advocating a balance between leveraging the potential benefits of AI in medical research and diligently addressing the ethical and practical challenges that accompany such technology.
Notes
Conflicts of interest
Hyunyong Hwang is an editorial board member of the journal but was not involved in the peer reviewer selection, evaluation, or decision process of this article. No other potential conflicts of interest relevant to this article were reported.
Funding
None.
Author contributions
Conceptualization: SY, SSL, HH. Data curation: SY, SSL, HH. Formal analysis: SY, SSL, HH. Methodology: SY, SSL, HH. Project administration: HH. Supervision: HH. Validation: HH. Visualization: SY, SSL, HH. Writing - original draft: SY, SSL. Writing - review & editing: HH.