UK Prime Minister <a href="https://www.thenationalnews.com/world/uk-news/2023/11/01/rishi-sunak-to-lead-talks-at-first-ai-safety-summit/" target="_blank">Rishi Sunak</a> sided with catastrophist visions of artificial intelligence on Thursday, saying regulatory steps had to be taken to ensure the technology did not go rogue and cause “pandemics and nuclear war”.

At the <a href="https://www.thenationalnews.com/opinion/comment/2023/10/30/why-we-should-take-sunaks-ai-summit-seriously/" target="_blank">AI safety summit at Bletchley Park</a> outside London, industry leaders and government officials are weighing the '<a href="https://www.thenationalnews.com/business/technology/2023/10/31/ai-uk-how-rishi-sunaks-safety-summit-is-powering-technology/" target="_blank">Terminator' risk that machines take over from humans</a> against the significant benefits of progress.

Mr Sunak met UN Secretary General Antonio Guterres, EU Commission President Ursula von der Leyen, Italian Prime Minister Giorgia Meloni and US Vice President Kamala Harris after the launch of a 28-country declaration on future oversight of “frontier systems”.

The UK leader announced that leading AI companies and nations including the US and Singapore had agreed to partner with the UK's AI Safety Institute in operating a hub for testing the safety of new AI models before they are released.

"This partnership is based around a series of principles which set out the responsibilities we share," he said. "The point I would make is that what we can't do is expect companies to mark their own homework. I don't think people would expect that in other walks of life. It's incumbent on governments to keep their citizens safe and protected. And that's the approach we take to everything else. That's the approach we'll take here. That's why we've invested significantly in our AI Safety Institute."
Speaking as the summit ended, he outlined his vision for the country's global AI future alongside his warnings about society-wide dangers from the adoption of the technology. "We will work together on testing the safety of new AI models before they are released," he said.

At the outset of the meeting, he had warned that the scale of the risk crossed borders and sectors, potentially including life-threatening meltdowns. “There’s debate about this topic. People in the industry themselves don’t agree and we can’t be certain," he said. “But there is a case that it may pose a risk on a scale like pandemics and nuclear war and that’s why, as leaders, we have a responsibility to act to take the steps to protect people, and that’s exactly what we’re doing.”

At the historic code-breaking base Bletchley Park on Wednesday, Elon Musk said AI was “one of the biggest threats” humanity faces, adding that it was “not clear to me if we can control such a thing” when, for the first time, humans faced “something that is going to be far more intelligent than us”. “It's one of the existential risks that we face and it is potentially the most pressing one if you look at the timescale and rate of advancement,” he said.

A “<i>Terminator</i> scenario” – a reference to the Arnold Schwarzenegger film in which machines take over the world – was also discussed by UK Science Secretary Michelle Donelan. “That is one potential area where it could lead but there are several stages before that,” she said. Ms Donelan said the government had a responsibility to manage the potential risks, but added that AI offered “humongous benefits”.

Meta Platforms chief AI scientist Yann LeCun pointed to DeepMind co-founder Demis Hassabis, who is influential in Downing Street, as a promoter of doomster views of the technology's risks. There are fears that the most advanced parts of the industry are keen on “regulatory capture” of emerging government policy.
Ciaran Martin, the former head of the UK’s National Cyber Security Centre, said on Twitter on Thursday that the undertone of the summit was a “genuine debate between those who take a potentially catastrophic view of AI and those who take the view that it’s a series of individual, sometimes-serious problems”.

Government officials said a follow-up meeting would be hosted by South Korea in six months and a full-scale annual meeting would take place in France in a year. For many of those gathered, the most immediate risks posed by AI are misinformation, disinformation and deepfakes.

The US and the UK have set up AI safety institutes to draw up standards for testing AI models for public use, while Mr Sunak has also proposed a global expert panel on AI, similar to the UN climate change panel. Mr Sunak is also scheduled to discuss AI with Mr Musk after the summit ends, in a conversation that will be streamed on X, formerly Twitter.

One participant in the talks, Tino Cuellar, president of the Carnegie Endowment for International Peace, said the launch of the institutes raised hopes for a network of regulation and an overall framework operating along the lines of UN-led co-operation on combating climate change. “The reality is a lot of the conversations we're having here are underpinned by some degree of consensus around what the state of science is, but that's a really tricky subject, as we've seen with climate change,” he said. “There'll ideally be a network so these institutes can share information and do research on problems that range from the more sensitive national security questions to the broader safety questions that affect the entire world.

“I sense real enthusiasm for the idea of generating a panel of scientists to work on a semi-regular or a regular report on the state of AI progress.”

A YouGov poll of the UK public found low confidence that governments could rein in AI: 42 per cent of respondents said they had not very much confidence and 29 per cent said they had none at all.