The launch of ChatGPT in November 2022 triggered two main reactions regarding AI. On the one hand, various claims were made about its disruptive nature in fields such as science, technology, the military, society, economics and medicine. Some observers even consider it a threat to the very existence of the human species, since it is expected that by 2040-2050 AI will have surpassed human intelligence in most areas. There are also concerns that AI-based machines could replace humans in many sectors of the economy, leading to increased unemployment and its correlates: civil unrest, poverty and other social problems.
On the other hand, some voices have drawn attention to the geopolitical implications of the deployment of AI. Governments could, and some already do, use it for national and international surveillance, thereby undermining many of the principles of democratic societies. In such a context, it would be easy to succumb to moral panic without clearly establishing the nature of the threat, and to end up opting for a moratorium or excessive regulation that would prove counterproductive in the long run. Certain types of AI-based technology may indeed pose challenges to the coexistence of humans and machines, especially with regard to the moral and legal status of humanoid robots and the pervasiveness of the technology in our personal and social lives. However, before establishing any restrictions, we must carefully evaluate the potential benefits of such technology (particularly in fields such as medicine) and encourage its responsible application.
It is indisputable that AI technology is transforming healthcare. It will improve biomedical research in areas such as genomics, skin cancer and diabetic retinopathy diagnostics, and the discovery of new drugs, to name just a few examples. Some of the most promising applications of AI include discovering new cures by quickly and efficiently analyzing large data sets, as well as identifying trends and searching for anomalies (predictive analytics: computers use algorithms to learn from data sets – machine learning – instead of following detailed instructions dictated by a human programmer).
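To make the idea of predictive analytics concrete, here is a minimal sketch in Python using scikit-learn. Everything in it is a hypothetical illustration on synthetic data, not any real clinical system: the model infers what typical records look like from the data itself and flags outliers, rather than applying thresholds written by a programmer.

```python
# Minimal sketch of "learning from data" versus hand-coded rules.
# All data is synthetic; the clinical framing is purely illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic "patient records": two lab measurements per record.
typical = rng.normal(loc=[5.0, 100.0], scale=[0.5, 10.0], size=(500, 2))
unusual = rng.normal(loc=[9.0, 160.0], scale=[0.5, 10.0], size=(10, 2))
records = np.vstack([typical, unusual])

# The detector learns the shape of "typical" records from the data itself,
# instead of following detailed instructions dictated by a programmer.
detector = IsolationForest(contamination=0.02, random_state=0)
flags = detector.fit_predict(records)  # -1 = anomaly, 1 = typical

print(f"Flagged {(flags == -1).sum()} of {len(records)} records for review")
```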
AI technology could also make existing treatments more efficient and effective (diagnostic accuracy) and enable new indications for existing medicines. It could streamline care delivery by supporting faster, more accurate treatment decisions and by reducing administrative costs through the collection of patient data and the creation of medical records. Finally, it could have a broad impact on global healthcare delivery: for example, it would allow doctors to connect to a system that uses AI technology to diagnose and treat patients remotely.
Another important aspect to consider when addressing the transformative nature of AI technology in medicine is how it will redefine the relationship between doctor and patient. According to a study by the Dartmouth-Hitchcock health system, the American Medical Association (AMA), Sharp End Advisory and the Australian Institute for Health Innovation, doctors spend about 27% of their total time on direct face-to-face clinical work and 49.2% on administrative work and electronic medical records (Sinsky et al., 2016). The advent of AI could help doctors spend more time with patients and make healthcare more personal, albeit through more technology.
Some AI technologies have already been tested. For example, Watson (now called Merative), an AI developed by IBM, has shown that it can make treatment recommendations that match those of human experts in 99% of cases, and it has identified treatment options that doctors had missed in 30% of cases (Lohr, 2016). In addition, Watson performed tasks such as integrating and aggregating data and evaluating patients' risk of developing a specific disease or requiring high-cost treatment. Improving medical diagnosis by applying AI to clinical practice is another promising field under intense research. For example, machine learning has been used to optimize the diagnosis of a condition that presents with dizziness (benign paroxysmal positional vertigo), and AI is expected to make the diagnosis of this condition more efficient and accurate.
As mentioned, machine learning uses algorithms to evaluate massive data sets, build models, evaluate them, and provide feedback, in some cases without human supervision. This type of technology uses mathematical algorithms to build statistical models that can improve with experience. The challenge in this process is that an AI must be trained to improve its sensitivity and precision, thereby optimizing the predictive value of its diagnoses and treatment recommendations. However, biases based on unfair or inaccurate predictors may be introduced or learned, and may affect the care given to the patient. Such biases raise questions about the application of AI technology.
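Before turning to those questions, it may help to make the training-and-evaluation loop just described concrete. The following minimal, hypothetical sketch (Python with scikit-learn, entirely synthetic data, no real clinical model) trains a binary "diagnosis" classifier and reports the sensitivity and precision mentioned above:

```python
# Hypothetical sketch: train a binary "diagnosis" classifier on synthetic
# data, then measure the sensitivity (recall) and precision discussed above.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a labeled clinical data set (1 = condition present).
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
pred = model.predict(X_test)

# Sensitivity: the share of true cases the model catches.
# Precision: the share of positive calls that are actually cases.
print(f"sensitivity = {recall_score(y_test, pred):.2f}")
print(f"precision   = {precision_score(y_test, pred):.2f}")
```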
First, there is the potential for measurement bias, which could lead to misinterpretation of the data set and raises the question of who is responsible if a misdiagnosis causes economic loss, physical and psychological harm or even death. Second, there is also the possibility of ethical bias: data analysis can be manipulated for profiling or stigmatization. For example, certain values and standards could be built into an algorithm to identify patients at risk of requiring high-cost treatment or with the potential to develop specific health conditions.
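One simple way such biases can be surfaced, sketched below with synthetic data and a hypothetical "group" attribute standing in for any demographic or coverage variable, is to compare a model's error rates across patient subgroups:

```python
# Hypothetical sketch: audit a trained model for bias by comparing its error
# rates across patient subgroups. The data is synthetic and the "group"
# attribute is a placeholder, not a real demographic variable.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=1)
group = np.random.default_rng(1).integers(0, 2, size=len(y))  # subgroup label

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=1)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pred = model.predict(X_te)

# A large gap between subgroup error rates is a red flag worth investigating
# before the model is allowed anywhere near patient care.
for g in (0, 1):
    mask = g_te == g
    print(f"group {g}: error rate = {(pred[mask] != y_te[mask]).mean():.2f}")
```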
In addition to bias, the use of AI in medicine raises important ethical questions about the extent to which AI technology should be independent of human oversight and control, and about how harms and risks are assessed and integrated into an algorithm. There are also concerns about how to obtain informed consent from patients whose clinical care depends on AI technology for diagnosis and for determining optimal treatment options. Last but not least, we must continue to evaluate the psychological repercussions on patients who learn that a machine has made decisions affecting their care, and clarify who is responsible in the event of failure: the manufacturer, the insurer or the provider.
We are entering a brave new world of healthcare, as medicine is already heavily committed to integrating AI into its practice. However, the emerging paradigm promotes more technology, and despite claims that doctors will be able to dedicate more time to their patients once AI takes over administrative tasks and other routine work, it is not clear how the important humanistic dimensions of clinical practice can be improved, or even preserved. In other words, if AI technology processes health information without human interference or makes final decisions about treatment options, how should doctors and patients perceive it: as a consultant, as an oracle, as an assistant? The potential use of autonomous AI could change many aspects of the doctor-patient relationship and further reduce the role of the doctor to that of technician and service provider (or, as some say, vending machine). Furthermore, patients could become distrustful of AI compared with human intelligence when their healthcare is at stake, creating an obstacle to good clinical practice and undermining the trust necessary for good healthcare.
Likewise, we must consider the main ethical challenges related to clinical practice. Doctors have a duty to protect patients during the implementation of any major change in healthcare, including the imminent introduction of AI technology. This raises issues related to the pursuit of the patient's good and the obligation to avoid harm by ensuring that risks are minimized, especially in the case of autonomous AI or robots. Additionally, there is an ethical imperative of truthfulness about AI technology and the data generated by machine learning, as well as about potential biases built into algorithms. All of this leads to what is often called the black box problem: the lack of transparency about the process that produces the results. Only the input and the output offer information; the intermediate process remains opaque. The issue of access to and financing of these technologies also represents a major challenge: in most cases, only those who already have access to the best healthcare will benefit from them.
Finally, a more fundamental question is how AI technologies will affect humans and what the nature of our relationship with machines (assistance robots, companion robots and so on) will be. As we continue down this path, it will be important to preserve and protect our humanity in healthcare as AI technologies are increasingly used. We have to resist the impulse to seek largely technological solutions to the clinical, sociopolitical and existential questions of human existence. Rehumanizing healthcare requires protecting our humanity and questioning the current techno-scientific framework as a lens for interpreting and understanding the human condition.
It is unlikely that we will be able to stop technological progress, but perhaps we have reached a crucial point in the human adventure. Feeding our technological appetite could be detrimental to human flourishing in the absence of a human-centered approach to AI. Therefore, technological solutions driven by AI to address problems in the social and clinical context must follow ethical imperatives in four key areas:
1) delimiting the nature of AI technology and ensuring transparency about its risks and benefits (information and transparency);
2) engaging the public to reach a consensus with stakeholders on the ethical, social, geopolitical and political consequences (participation and consensus);
3) promoting the responsible development and application of AI technology in healthcare (accountability); and
4) applying a human-centered approach to AI that promotes respect for human identity, human interaction, and the overall advancement of human goals (humanity).
Modernity has questioned traditional (anthropological) frameworks and expressed skepticism about their usefulness, when not simply dismissing them as problematic. But these frameworks are necessary for human beings to give their lives meaning and purpose. Their rejection has created the current anthropological identity crisis concerning the ontological status of human beings (that is: what is the human? What is the status of the human in the world?). If we do not clearly define and understand what our humanity consists of, delimiting the scope of the application of AI becomes an arduous task. Medicine continually faces external limitations due to sociopolitical and techno-scientific factors, as well as philosophical premises that redraw the limits of clinical practice. The complexities of that practice, the structure of the healthcare system, and the way decisions are made in the clinical context cannot be captured by algorithms. There is no doubt that AI technology offers new opportunities to improve the quality of care and new treatment options. But some elements of the human condition, such as physical pain, psychological suffering and spiritual anguish, will remain out of its reach. AI will profoundly impact healthcare in its development, practice and delivery, and this impact is likely to pose a challenge to our humanity. Our task is to find the right balance between responsible acceptance of AI and the moral discernment needed to contain our technological bulimia.
Fabrice Jotterand is a professor at the Center for Bioethics and Medical Humanities at the Medical College of Wisconsin and at the Institute for Biomedical Ethics at the University of Basel.