Presentation Title: “AI diagnosis and informed consent”
Abstract:
Some research has shown that artificial intelligence (AI) can outperform doctors in diagnosing diseases such as cancer and skin disorders. However, the actual use of AI for diagnosis in clinical settings remains limited in Japan, and it will take more time before it becomes widespread.
In such a transitional period, an ethical question arises as to whether appropriate informed consent requires doctors to refer to AI diagnosis as an alternative option. This question becomes especially crucial when doctors are reluctant to disclose information about AI diagnosis for personal reasons. In this presentation, I will consider the issue through a hypothetical case in which a doctor working at a university hospital does not inform a patient of the alternative option of AI diagnosis for personal reasons, even though it is recognized that AI can outperform human doctors in that field. If the doctor overlooks a disease that human doctors often miss but AI would not, and the patient dies or becomes critically ill as a result of the doctor's diagnosis, should the doctor be accused of medical malpractice?
The use of AI in diagnosis can potentially change the content of appropriate informed consent. I claim that doctors may not have a legal obligation to disclose AI diagnosis as an alternative option unless it becomes prevalent to a certain degree in clinical settings. However, doctors do have a moral obligation to inform patients that AI can outperform human doctors if research has shown this to be the case in the relevant medical field.
There are legal and moral levels in informed consent: there is a certain level of information that doctors are legally obligated to provide regarding diagnosis, and there is a further level of explanation that doctors are morally obligated to provide in order to empower patients' autonomy. The separation of these two levels of informed consent corresponds to the concepts of negative and positive liberty. The concept of negative liberty, which allows patients to exercise autonomy free from authority, aligns with the legal level of informed consent, while that of positive liberty, which allows patients to have control over their pursuit of a good life, aligns with the moral level of informed consent.
A patient's choice to be diagnosed either by a human or by an AI touches on personal convictions surrounding medical care. The Pew Research Center found that while a larger share of U.S. participants believe that AI can reduce medical mistakes, 60% of participants expressed discomfort with the use of AI in diagnosis. This research illustrates that what people value in medical care is not solely the accuracy of diagnosis.
In this sense, having information about AI diagnosis is crucial for patients' positive liberty if AI performs better than human doctors. For this reason, I will suggest that doctors have a moral obligation to disclose the alternative option of AI diagnosis as a precondition for obtaining informed consent for treatment.