The ethics of using artificial intelligence in medical research
Shinae Yu, Sang-Shin Lee, Hyunyong Hwang
Received September 7, 2024 Accepted November 4, 2024 Published online December 6, 2024
DOI: https://doi.org/10.7180/kmj.24.140
[Epub ahead of print]
Abstract
The integration of artificial intelligence (AI) technologies into medical research introduces significant ethical challenges that necessitate the strengthening of ethical frameworks. This review highlights the issues of privacy, bias, accountability, informed consent, and regulatory compliance as central concerns. AI systems, particularly in medical research, may compromise patient data privacy, perpetuate biases if they are trained on nondiverse datasets, and obscure accountability owing to their “black box” nature. Furthermore, the complexity of the role of AI may affect patients’ informed consent, as they may not fully grasp the extent of AI involvement in their care. Compliance with regulations such as the Health Insurance Portability and Accountability Act and General Data Protection Regulation is essential, as these frameworks address liability in cases of AI errors. This review advocates a balanced approach to AI autonomy in clinical decisions, the rigorous validation of AI systems, ongoing monitoring, and robust data governance. Engaging diverse stakeholders is crucial for aligning AI development with ethical norms and addressing practical clinical needs. Ultimately, the proactive management of AI’s ethical implications is vital to ensure that its integration into healthcare improves patient outcomes without compromising ethical integrity.