Fears that technology will outpace humanity, with apocalyptic results, have long troubled us. Now scientists aim to ease your anxieties.
What keeps you awake at night? Rising sea levels or desertification? Overpopulation or the rise of artificial intelligence?
Whatever it is, you can go back to counting sheep now, because a new think tank at Cambridge University in the UK is planning to sweat the big stuff so that you will not have to.
The Centre for the Study of Existential Risk (CSER) is recruiting postdoctoral talent to pull together some of the best minds from various disciplines to think the unthinkable about threats to life on Earth.
Although the CSER will be “looking only at the more extreme scenarios”, says Dr Sean O’Heigeartaigh, point man for the new centre, the aim is not so much to have us all running for the hills clutching survivalist manuals, but rather to save us from the unnecessary trip – either by busting myths or by heading off real threats at the pass.
“We want to look at concerns that have been raised and try to divide up what should actually deserve some rigorous attention from concerns that are either simply science fiction, or that are very much overblown,” he says.
While something like “a pandemic outbreak is always a potent threat” – he gives the example of a global influenza virus – Dr O’Heigeartaigh says other concerns are overblown.
Dr O’Heigeartaigh, who holds a doctorate in genome evolution, says: “In my personal view, concerns over, for example, GM foods are massively out of proportion to any potential risk to human health.”
But even when the centre does identify something we ought to be losing sleep over, “we will be trying as far as possible not to be scaremongering, even when there is something where work should be done”, he says.
It will surprise no one who has had to sit in a traffic jam listening to the alarmist views of a cabbie that the idea for the CSER was born in the back of a taxi.
In 2011, Huw Price found himself sharing a taxi in Copenhagen with a man “who thought his chance of dying in an artificial intelligence-related accident was as high as that of heart disease or cancer”, as he recalled in an article for The New York Times.
Had that man been the driver, the Bertrand Russell professor of philosophy at Cambridge University might have been inclined to nod sympathetically, inject the occasional “uh-huh” into the monologue and look forward to the journey’s end. But this was no cab driver. This was Jaan Tallinn, the wealthy Estonian theoretical physicist and computer programmer who co-created Skype.
The “artificial intelligence-related accident” Mr Tallinn had in mind did not involve stepping out in front of a lorry while chatting to Siri on his iPhone (though that would do it, of course).
His fear was the “singularity” – that predicted moment in the development of computers when technology outsmarts us, with disastrous consequences for human existence. “I knew of the suggestion that AI might be dangerous, of course,” wrote Dr Price, “that once machine intelligence reaches a certain point it could take over its own process of improvement so that we humans would soon be left behind.
“But I’d never met anyone who regarded it as such a pressing cause for concern, let alone anyone with their feet so firmly on the ground in the software business.”
Being a philosopher, Dr Price got to thinking about other potential catastrophic risks to our species caused by the fact that we have, in essence, become too smart for our own good.
Someone, he considered, needed to be taking a cold, hard look at technology and where it might be leading us.
Natural events, such as asteroid impacts and extreme volcanic events, could wipe us out, he wrote. “But in comparison with possible technological risks, these natural risks are comparatively well studied and, arguably, relatively minor.”
The cab ride was the catalyst for the CSER and, back in Cambridge, Dr Price set about recruiting a third person for what would become the organisation’s founding trio – the Cambridge cosmologist Martin Rees, a former president of the Royal Society.
A clue to Lord Rees’s position on matters apocalyptic can be found in his 2004 book Our Final Century, a round-up of everything that could wipe us out in the near future, from asteroids and diseases to nanobots and the Large Hadron Collider.
Dr O’Heigeartaigh has been working on the project in Cambridge for the past year, “raising funds and establishing research networks”. The next job is hiring a multidisciplinary research team, and the CSER will start interviewing postdoctoral candidates next month.
“Our goal,” reads the job advert, “is to bring together some of the best minds from academia, industry and the policy world to tackle the challenges of ensuring that powerful new technologies are safe and beneficial.”
One of the first tasks, says Dr O’Heigeartaigh, will be “horizon scanning, to give us a sense of what risks might be flying under the radar or what might be coming a little further ahead in time”.
It will also be necessary to figure out a methodology for the new science of worrying.
“We will be looking at how we evaluate extreme technological risk – questions like risk-benefit analysis, how much value we place on future development versus present.”
A third project will examine what is really meant by “responsible innovation in science and technology: how, when we are working with very powerful technologies, we develop them in the interests of everybody”.
To achieve this, the centre plans to work with all stakeholders in new technologies – “the people developing them, the policymakers who seek to understand them, the academics who have insights on the various broader impacts, such as the societal impacts, and the public”.
Artificial intelligence will not be the only threat considered by the centre, though its concerns in that area might seem more mundane than the robots-will-rise-up-and-kill-us-all premise best articulated by the Terminator film franchise.
On the one hand, says Dr O’Heigeartaigh, are “the long-term concerns that people like Professor Stephen Hawking and Elon Musk have raised, that relate to a level of development of AI that we’re still nowhere near”.
But meanwhile, there are also “a number of near-term societal impacts that deserve attention now, such as the impact AI is going to have on employment and economics, privacy issues, potential system security issues and issues of liability and accountability”.
Issues such as whom to sue, in other words, when you’re run over by a driverless car.
CSER will rely on input from a team of special advisers, who will flag concerns they think the centre ought to tackle. Among them are professors of philosophy, quantum physics, zoology, computer science, bioethics, law and biotechnology.
The big names tossing ideas CSER’s way include Prof Hawking, the former Lucasian professor of mathematics at Cambridge, and Mr Musk, the co-founder of PayPal and boss of SpaceX, the orbital rocket company.
Dr O’Heigeartaigh says the CSER will not be “raising a flag and saying technology is dangerous”. He says: “Technology is very likely to be necessary for making life possible on a planet facing problems such as how to feed nine billion people.
“That said, we have to acknowledge that any important technology has a potential downside. But we’re not here to whip people into a frenzy about it, but to work alongside those who are developing those technologies to make sure that doesn’t happen.”
In addition to AI, areas identified for closer examination include advances in biotechnology, the potentially catastrophic impact of biodiversity loss and “extreme tail climate change”.
We should think of CSER, says Dr O’Heigeartaigh, “as an insurance policy for a society developing more and more powerful technologies”.
“Most of the concerns we will look at are quite low-probability, but if the potential impact is big enough they deserve somebody to figure out what can be done to mitigate the impact of those worst-case scenarios.
“I sometimes joke that if we do our job correctly, you’ll never know we did anything because what we will have done is reduce the possibility of something from 5 per cent to 0.005 per cent, or something like that.”
Barely worth losing sleep over.