At the recent Global Future Councils meeting, the UAE’s artificial intelligence minister issued a stark warning: <a href="https://www.thenationalnews.com/future/technology/2024/10/14/gitex-ai-uae/" target="_blank">without proper safeguards</a>, AI could spiral out of control. “We do not have time to afford to wait for this to get out of hand,” <a href="https://www.thenationalnews.com/future/2024/10/16/ai-will-get-out-of-hand-without-boundaries-uae-minister-warns/" target="_blank">Omar Al Olama said</a> at the World Economic Forum event.

He called for <a href="https://www.thenationalnews.com/future/technology/2024/10/11/uaes-new-ai-foreign-policy-aims-to-prevent-misuse-of-technology/" target="_blank">proactive regulation</a> to prevent the repetition of past mistakes, noting that governments are only now addressing the fallout from social media more than two decades after its rise.

Mr Al Olama’s comments have thrust the <a href="https://www.thenationalnews.com/future/technology/2024/10/11/uaes-new-ai-foreign-policy-aims-to-prevent-misuse-of-technology/" target="_blank">risks of AI into the spotlight</a>. We are approaching a critical juncture at which AI could act beyond human control, bringing significant harm to business and society.

This is why I launched the AI Safety Clock in September: to ignite a necessary conversation about the risks and opportunities posed by AI. I wish to raise awareness rather than alarm. Currently, the clock rests at 29 minutes to midnight, signalling that while catastrophe is not imminent, the risks are far from distant.

The implications for businesses touch every aspect of operations, strategy and ethics. As these technologies grow more autonomous and sophisticated, companies must consider not only the efficiency gains and competitive advantages they offer, but also the long-term risks. Uncontrolled AI systems could disrupt entire industries, either by displacing jobs or by making critical, unregulated decisions that affect everything from supply chains to consumer trust.

Moreover, businesses that fail to adopt responsible AI governance risk regulatory backlash, reputational damage or legal liability. I believe that organisations should be investing in ethical AI frameworks and collaborating with regulators to ensure that innovation does not come at the cost of social stability.

The risks are complex and wide-ranging, rooted in the possibility that AI systems could one day surpass human intelligence across multiple domains and make decisions independently. This is no longer the realm of science fiction, according to Elon Musk. “My guess is that we’ll have AI that is smarter than any one human probably around the end of next year,” the business mogul, who runs Tesla, X and SpaceX, said recently. Others, such as OpenAI’s chief executive Sam Altman and Meta’s Yann LeCun, believe it will take longer, perhaps up to a decade.

The most visible and alarming dangers are tied to the possibility of AI gaining control over physical infrastructure. AI systems integrated into military technology or power grids could pose a major threat if they make unsupervised decisions about critical resources such as nuclear arsenals or energy networks.

Beyond those physical dangers lies the more subtle, yet equally concerning, risk of economic manipulation and mass surveillance. As these technologies become more integrated into financial systems, there is the potential for AI to interfere with global markets or political processes.
The growing use of AI in social media and financial transactions raises the spectre of technology being used to destabilise economies or influence elections, issues that have already surfaced in recent years, such as the Cambridge Analytica scandal during the 2016 US presidential race.

Another major concern is the impact of AI on employment. While automation has been displacing jobs for years, the advent of generative AI that churns out content in seconds could accelerate this trend. The World Economic Forum’s <i>Future of Jobs Report 2023</i> predicts that technologies such as AI could eliminate 83 million jobs by 2027 while creating 69 million new roles, a net loss of 14 million jobs. This poses a serious risk to social stability.

The spread of misinformation through deepfakes and AI-generated content is yet another clear and present danger. Already, we are witnessing the growing use of AI to create convincing yet false media that can influence public opinion.

Regulation, or the lack thereof, will be important in determining how close we come to the tipping point at which AI systems move beyond human control. While technology drives us forward, regulation has the potential to slow down the clock. Today, global AI regulation remains fragmented and inconsistent. The recent veto of an AI safety bill in California highlights the tension between innovation and control. Without a unified regulatory framework, especially among major global players such as the US, Europe and China, AI development could continue at a dangerous pace.

International collaboration is needed to ensure that safety measures keep up. Governments need to work together to create an international framework for AI governance, similar to the bodies that oversee nuclear or chemical weapons. Regulatory frameworks should be designed to manage the risks without stifling innovation. One important safeguard will be a kill switch that allows humans to shut down an AI system if it begins to operate in an uncontrolled or dangerous way.

Corporations, too, have a responsibility to manage these risks. Technology companies developing AI systems, such as OpenAI and Google, need to prioritise safety and ethical considerations from the outset. This means integrating responsible practices into every stage of the development process. Internal governance structures should also include teams focused on assessing potential risks.

In the broader AI research community, there is no consensus on how close we are to developing uncontrolled AI, with some experts suggesting it could happen within years and others arguing it may never happen. However, the lack of certainty is itself a reason to act now. Governments, corporations and researchers should collaborate to ensure that as AI grows more powerful, it remains under human control.

The AI Safety Clock serves as a stark reminder that while we may not be on the brink of disaster, the time to act is now.

<i>Michael Wade is the Tonomus professor of strategy and digital at IMD and director of the Tonomus Centre for Digital and AI Transformation</i>