Even before the high-profile arrival of chatbots and image generators, AI had already quietly embedded itself in everyday life. It can recognise your face to unlock your phone, translate foreign texts while you travel, and help you navigate through traffic and roadworks. It can even pick movies for you at the end of the day.

But the chatbot revolution has been accompanied by ominous warnings that compare AI’s growing utility to existential threats such as nuclear armageddon or natural disasters. Online influencers have invoked the spectre of an omniscient AI with abstract – often absurd – claims. These have been amplified by some big names in academia and business who have lent their authority to the doom-laden outcry, fuelling public fear and anxiety rather than embracing the rational analysis and rigorous evidence that an educated society deserves. The voices of real researchers and innovators at the cutting edge of today’s science risk going unheard or being drowned out.

A closer look at actual existential threats lays bare the exaggerations surrounding AI’s alleged dangers. The indelible scars left by nuclear weapons at Hiroshima and Nagasaki, the ravages of pandemics like Covid-19 and the melting glaciers caused by climate change are stark reminders of real and present danger. The dystopian portrayal of AI owes more to sensationalism than scientific substance. Unlike the immediate cataclysm of nuclear weaponry or the relentless assault of climate change, AI’s purported threat dwells firmly in the realm of science fiction. HAL 9000, Skynet and Ultron are all familiar villains: supposedly artificial intelligences that turn on their creators. The reality of AI, and the practical problems we try to solve as research scientists, is very different. The term “AI” itself covers a vast array of scientific domains, technological innovations, artefacts and human engagements, and it is laden with misinterpretation and misuse as discussions veer off course.
Misleading predictions of future threats are based on scientifically unsound extrapolations from a few years’ growth curve of AI models. No technological growth curve ticks up indefinitely. Growth is bounded by physical laws, energy constraints and the limits of the prevailing paradigm, as we have seen in genetically modified crop yields, transistor density in semiconductor chips, and the FLOPS (a measure of raw computing performance) of supercomputers. There is no evidence that current software, hardware or mathematics will propel us towards artificial general intelligence (AGI) and beyond without major paradigm disruptions. The risks of transformer-enabled AI (the main methodology behind chatbots such as ChatGPT) pale in comparison with the potential of gene editing, which acts directly on living organisms.

There are fundamental holes in the AI doom-mongers’ reasoning and conclusions, evidenced by the astonishingly large jumps required to establish and justify their theory. Imagine someone invented a bicycle and, through exercise and training, was quickly able to pedal it to higher and higher speeds. With an electric motor and lighter materials the bike goes faster still. Would we believe that the bike could be ridden until it flies? It is not difficult to see the absurdity of such reasoning, but this is exactly the current AI narrative. AI becomes encyclopaedic through Generative Pre-trained Transformers. Next, AI leaps to become AGI. Then it becomes an artificial superintelligence, or ASI, complete with emotional intelligence, consciousness and self-reproduction. And then, in another big jump, AI turns against humans and, without deterrence, is able to extinguish humanity – using sci-fi methods such as causing vegetation to emit poisonous gas, or figuring out a way to deplete the energy of the Sun, according to some recent scenarios presented at an Oxford Union debate.
Each of these jumps requires utterly groundbreaking advances in science and technology, which are likely impossible, and many of the assumptions behind them are logically unjustified. Yet these stories risk capturing the public imagination. These AI sceptics – whether intentionally or not – are ignoring the obligation of scientific proof and panicking the public and governments, as we saw at the recent AI Safety Summit held at Bletchley Park in the UK. The regulation being pushed is not intended to prevent ludicrous existential risks. It is designed to undermine the open-source AI community that poses a threat to the profits of big tech. Over-regulation that inflates the cost of AI development benefits only a small number of wealthy parties.

Ironically, the existential-threat scenario ignores human agency. It was not technology but flawed human management systems that lay behind disasters like Chernobyl and the tragedy of the Challenger space shuttle explosion. Unlike the physical sciences, which engage with the real world, AI’s realm is predominantly digital. Any AI interaction involves many more steps of human agency – and opportunities for checks and controls – than any technology that experiments directly on the physical world, as physics, chemistry and biology do.

AI doomerist rhetoric hides the fundamental and transformational benefits to society and civilisation that come with scientific advances and technological revolutions, and it does little to inspire and incentivise the public to understand and leverage science. History is full of examples where technology has served as a catalyst for human advancement rather than a harbinger of doom. Tools like the compass, the book and the computer have taken us on real and intellectual voyages from the deepest oceans to the edge of the universe. The existential-threat narrative hinges on AI transcending human intelligence, a notion bereft of any clear metric.
Many inventions – like microscopes and calculators – already surpass human capabilities, yet they have been greeted with excitement, not fears of extinction. In reality, artificial intelligence is ushering in a 21st-century “renAIssance”, fundamentally changing how we gain knowledge and solve problems. Unlike the original Renaissance, which led to the Age of Enlightenment and was defined by a rational, foundational approach to science, this era is taking us to an Age of Empowerment.

The historical Renaissance was enabled by the technology of printing and the market of publishing, allowing the rapid diffusion of knowledge through Europe and beyond. Early science gave this knowledge structure through “knowing how to think”. Figures like Newton and Leibniz championed and defined this rationalism, and they and their contemporaries set the stage for a methodical science rooted in first principles. For centuries, the science they created moved forward by forming hypotheses, unravelling core ideas and validating theories through logic and methodical experimentation.

Modern AI is now reshaping this classical problem-solving approach. Today, the amalgamation of vast datasets, advanced infrastructure, complex algorithms and computational power heralds a new age of discovery that goes far beyond traditional human logic. It promises a science characterised by radical empiricism and AI-guided insights.

Today’s renAIssance goes beyond the “how” to delve into the “why”. It arms individuals not merely with knowledge but with the tools for real-world problem-solving, marking a shift towards a practical approach. AI unveils a spectrum of possibilities in fields like biology, genomics, climate science and autonomous technology. The hallmark of this era is the resurgence of empiricism, fuelled by AI’s data-processing prowess, enabling automated knowledge distillation, organisation, reasoning and hypothesis testing, and offering insights from identified patterns.
It opens the way for alternative methodologies of scientific exploration: extremely high-throughput digital content generation, complex simulation-based prediction and large-scale strategic optimisation, at a magnitude and speed far exceeding what traditional first-principles methods and causal reasoning can handle. This means unprecedented real opportunities for humans to tackle previously impossible challenges such as climate change, cancer and personalised medicine.

This modern renAIssance fosters continuous learning and adaptation, moving society from an insistence on understanding everything before acting towards a culture of exploration, understanding and ethical application. This mindset resonates with past empirical methodologies, advocating a humble approach to gaining knowledge and solving problems. Like Prometheus stealing fire for humanity, AI has emerged as a potent yet not fully grasped tool to propel our civilisation forward. We need the humility, the courage – and the freedom – to take this tool and use it.