Imagine 2027, a time when AI is <a href="https://www.thenationalnews.com/health/2024/10/11/g7-countries-plan-to-use-ai-to-full-potential-in-health-care/" target="_blank">expected to outperform doctors</a>. A patient walks into a clinic. The doctor takes their history, performs an examination and orders tests, then feeds the data into an AI system that produces a diagnosis and a treatment plan. The doctor uses both to treat the patient.

Now return to 2024. A doctor reviews the diagnosis and plan from the AI system before treating the patient, because the AI-generated output is not yet as accurate or reliable as the human output. In 2027, will a doctor still review the <a href="https://www.thenationalnews.com/news/uae/2024/10/10/teens-manage-severe-juvenile-arthritis-in-uae-heat-as-doctors-turn-to-ai-for-quicker-treatment/" target="_blank">AI-generated diagnosis</a> and management plan before treatment? Would human intervention be unethical if it might worsen the AI output? A doctor’s fundamental ethical obligation is “first, do no harm”.

With that in mind, it is worth considering how AI performs each of a doctor’s five key roles. First and foremost, doctors diagnose – and recent evidence shows that AI’s diagnostic effectiveness is improving. A 2023 Harvard Medical School study demonstrated a vision-language model that outperformed doctors on challenging medical cases. A study this year at the University of California, Los Angeles found that, when diagnosing prostate cancer, AI was 17 per cent more accurate than doctors. These are just a few examples of AI’s <a href="https://www.thenationalnews.com/health/2024/10/23/britains-nhs-to-trial-superhuman-ai-system-that-predicts-early-death-risk/" target="_blank">improving diagnostic success</a>.

In 2027, will we view using AI to diagnose a disease the same way we view using a calculator to subtract? When that day comes, what will the role of a doctor be?
Perhaps we should see this future as an opportunity to use this new “calculator” to become better doctors.

Next, doctors are carers. In the 1800s, Dr Edward Livingston Trudeau described their mission as: “To cure sometimes, to relieve often, to comfort always.” Today, patients are more likely to see doctors’ mission as: “To cure and relieve often, to comfort always.” Can technology help doctors “comfort always”? The answer depends on whether technology is capable of sentience and sapience, and on whether humans can accept care from amoral machines. Scientists continue to develop AI with self-awareness, but their success has so far been limited. There are precedents for using robots for companionship, comfort and cognitive support. But what is the doctor’s role in these scenarios? How should educators prepare medical students? The hope is that AI will enhance the value of the <a href="https://www.thenationalnews.com/news/us/2024/12/10/us-health-insurers-accused-of-using-ai-to-deny-claims-in-bulk/" target="_blank">humanistic aspects of medicine</a>. AI systems assist with the implementation of evidence-based treatments that increase safety and effectiveness. For example, AI systems can inform doctors in real time when a patient’s treatment is no longer effective, freeing those doctors to encourage patients to, for instance, make healthy-eating decisions.

Beyond individual patients, doctors advocate for the health of entire communities. During pandemics, doctors use readily accessible data to mobilise populations and increase vaccination uptake. Their intervention supports the integrity and authenticity of AI-generated data, as well as the detection and prevention of bias.

Doctors are also educators, and they <a href="https://www.thenationalnews.com/opinion/comment/2023/10/09/the-gulf-region-could-benefit-from-more-technology-in-the-classroom/" target="_blank">educate medical students</a> by sharing their skills and knowledge.
But skills are constantly changing, and knowledge is increasingly interdisciplinary, extending into non-medical fields. Few of today’s educators are expert in every medical specialty, let alone in disciplines outside medicine. AI systems can be valuable learning companions that support multidisciplinary learning. With competing search engines and an information explosion, students will need to analyse complex data quickly. Doctor-educators must shift from being professors to “pathfinders”, showing students how to find the path to the optimum medical and ethical answer. That way, doctors will remain central to transforming aptitudes and mindsets to <a href="https://www.thenationalnews.com/opinion/comment/2024/03/06/how-we-can-keep-human-agency-at-the-heart-of-ai/" target="_blank">avoid the dehumanisation</a> of care and education.

Another role of doctors is to generate medical improvements from patient data. As AI collects large datasets and subjects them to powerful algorithms, information is gathered, synthesised and disseminated almost instantly. Even with AI as a knowledge-generation catalyst, doctors will remain essential to research and data collection, and most of them will increasingly take on the researcher role in the AI context. It will be vital, then, for developers to mitigate the risk of humans tampering with AI-generated data. They must also be aware of the ethical principles that guide doctors – beneficence, non-maleficence, autonomy and confidentiality – which are especially relevant to the use of generative AI.

Humans cannot compete with AI’s capacity and speed. This means AI is likely to replace humans in jobs faster than humans can transition between jobs. This is not a moment to fear. Instead, doctor-educators must embrace this freedom to discover opportunities, and students must learn not for the sake of doing but for the sake of understanding.
While guided by data, doctors in the AI world will solve complex problems by asking, not answering. Harnessing AI, doctors will have the freedom to ask effective questions and the time to identify the best answers, scientifically and morally. When asking these questions, the values of equity, social justice and human rights will guide them, and they will become effective scientists, data guardians and vigilant protectors of humanity. There is no need to fear 2027.

<i>Chen Zhi Xiong is assistant dean in the Yong Loo Lin School of Medicine at the National University of Singapore</i>

<i>Tikki Pangestu is a visiting professor in the Yong Loo Lin School of Medicine at the National University of Singapore</i>