Few technologies have stirred debate quite as much as artificial intelligence. Much of the uncertainty surrounding AI stems from the flawed idea that it will somehow lead to human beings losing control of our work, lives, society or even our humanity. This misconception has grown out of a misunderstanding of how most AI systems work, and out of confusion around Artificial General Intelligence – the kind of AI superintelligence that remains theoretical but has inspired many science-fiction films.

The real question we should be asking is this: is it appropriate to compare artificial intelligence with human intelligence? The two are fundamentally different, each designed to excel at specific tasks. AI strives to surpass human capabilities in areas such as content creation and question answering, but the path it takes to fulfil these tasks differs from human cognitive processes. While humans learn from small amounts of data, use multiple senses and operate with remarkable energy efficiency, AI relies on substantial computational resources and vast datasets to absorb, categorise and transform information into machine-friendly representations.

Over the past 60 years, AI has evolved into a foundational discipline influencing every facet of science and life. It is akin to a future version of mathematics, endowed with the ability to automate operations, operate devices and solve complex problems. The journey of AI has been marked by waves of transformation, each adapting to the available theories, technologies and the evolving problems we aim to solve. Modern AI integrates mathematical principles with data-driven empiricism, exemplified by the foundation models that power GPTs and similar innovations. This evolution, fuelled by an unprecedented blend of data, computational power and algorithmic innovation, empowers AI to address challenges in ways unfamiliar to human logic.
For instance, ChatGPT demonstrates broad, general-purpose capabilities, solving problems and creating content previously regarded as the preserve of human expertise.

Consider, for a minute, why people prefer to see a doctor or physician who is mature and can draw on decades of experience, during which she or he has encountered thousands of patients. Almost by default, this gives patients greater confidence that this particular doctor can extrapolate from her or his experience and apply it to their specific needs. The best doctor in the world is still only one person, but AI has the power to become the ultimate assistant for healthcare practitioners. It can analyse vast troves of anonymised data, from healthcare records to medical scans, and learn to diagnose illnesses and conditions far more rapidly than a single human being. By analysing data from millions of cases, AI can detect patterns and provide healthcare professionals with new insights. It can even suggest what might be causing a patient's symptoms when those symptoms do not make sense, even to an experienced doctor. The heavy lifting that AI performs as it digests and analyses data is built on the work of human medical professionals, which means its insights have been gleaned from the learning, dedication and wisdom of our fellow human beings.

Health care is just one example of the way in which AI builds on human learning to improve and transform processes, but the same principle applies across all sectors, from manufacturing to agriculture, and logistics to education. To take one of these examples – education – anonymised data can yield nuanced insights into the effectiveness of various teaching techniques and resources, by analysing how students have performed in tests after using them. This approach can help schools to make more informed decisions about how they implement their curricula.
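To make the education example concrete, the kind of analysis described above – comparing test performance across teaching resources – can be sketched in a few lines of Python. The records, resource names and scores below are entirely hypothetical illustrations, not real data:

```python
from statistics import mean

# Hypothetical anonymised records of (teaching_resource, test_score).
# All names and numbers here are invented for illustration.
records = [
    ("video_lessons", 72), ("video_lessons", 68), ("video_lessons", 80),
    ("workbook", 65), ("workbook", 71),
    ("interactive_app", 78), ("interactive_app", 84), ("interactive_app", 75),
]

# Group scores by the resource the students used.
by_resource = {}
for resource, score in records:
    by_resource.setdefault(resource, []).append(score)

# Average test score per resource, highest first.
averages = {resource: mean(scores) for resource, scores in by_resource.items()}
for resource, avg in sorted(averages.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{resource}: {avg:.1f}")
```

A real system would of course control for class size, prior attainment and other confounders before drawing conclusions; this sketch only shows the basic grouping-and-comparison step.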
But AI also has the power to recognise and assist the individual. For example, it can help to diagnose conditions such as dyslexia, autism and attention deficit hyperactivity disorder, which can have a severe effect on a child’s education but often go undiagnosed.

Fundamentally, AI is rooted in human innovation. Its inception traces back to the 1950s, notably to the Dartmouth Summer Research Project on Artificial Intelligence, led by John McCarthy, then an assistant professor of mathematics at Dartmouth College. This initiative aimed to explore the concepts that form the basis of modern AI. The origins of AI can be traced even further back, however, to advancements such as Boolean algebra and Charles Babbage’s vision of a mechanical computer, both in the 19th century. AI’s algorithms have been developed by people, drawing on a mathematical tradition stretching back centuries, even millennia. It distils the knowledge and wisdom of millions of people for the good of humanity – and the whole process will continue to be overseen by people, through the continued development of effective guardrails and regulation, a commitment to transparency, and ongoing public discourse about the direction and use of AI.

Today, we are privileged to live in a time when computing power and communications networks have become powerful and fast enough to support AI. But there have been many false dawns in AI, and it is now time to ensure that the technology reaches its true potential. AI, at its best, will be a sublime tool that gives us unprecedented access to the very best of humanity.