Any smartphone owner or Google user is already intimately connected with artificial intelligence, but knowing what that means is a different matter. AI’s ubiquity has not yet translated into a corresponding understanding of what this revolutionary technology is and how it works, according to a pioneer in the industry.

"I think the challenge for us is it's both everywhere and it's kind of receding into the background and people are not necessarily aware," Sir Nigel Shadbolt, one of the UK's pre-eminent computer scientists, tells <em>The National</em> from his home in Oxford.

“AI is a totally pervasive technology. It literally has become a new utility. We don't recognise it that way but the supercomputers we carry around in our pockets - our mobile phones - are running all sorts of AI-inspired and directly AI-implemented algorithms to recognise your voice or recognise a face in a photo you've just taken and label it, or when it's reaching back into the cloud services to decide what to recommend to you, or how to route you efficiently to your next meeting. These things are all running."

The professor of computer science at Oxford University likens our relationship with AI to our relationship with electricity: we are highly dependent on it without fully understanding the complex engineering feats behind a power grid.

Mainstream AI combines datasets with algorithms, or rules, to find predictive patterns in the data provided. To the purist, AI is a machine or algorithm that can perform tasks which would ordinarily require human intelligence. AI is used for geographical navigation, Google searches, video gaming and inventory management. Perhaps most universally, it powers the “recommender systems” of social media platforms, on-demand video streaming services and online shopping platforms, tailoring content and suggestions to users according to their historical preferences. The more information these systems gather, the better their machine-learned predictions become.
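The recommender systems described above can be sketched in a few lines of code. The following is a purely illustrative example, using made-up viewing histories and a simple co-occurrence score; it is not the algorithm of any real platform, which would involve far larger datasets and more sophisticated models:

```python
from collections import Counter

# Hypothetical viewing histories: user -> items they have watched.
histories = {
    "ana":   ["drama1", "drama2", "scifi1"],
    "ben":   ["drama1", "drama2", "comedy1"],
    "chloe": ["scifi1", "scifi2"],
}

def recommend(user, histories, top_n=2):
    """Suggest unseen items, scored by how much the user's history
    overlaps with the histories of the people who watched them."""
    seen = set(histories[user])
    scores = Counter()
    for other, items in histories.items():
        if other == user:
            continue
        overlap = seen.intersection(items)
        if not overlap:
            continue  # no shared taste, no signal
        for item in items:
            if item not in seen:
                scores[item] += len(overlap)  # weight by shared taste
    return [item for item, _ in scores.most_common(top_n)]

print(recommend("ana", histories))  # -> ['comedy1', 'scifi2']
```

The toy example also shows why more data helps: each new viewing history adds overlap evidence, sharpening the scores, which is the sense in which gathering more information improves the predictions.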
“There is a duty for us to explain fundamentally what the basic principles are and what the issues are from the point of view of safety, of fairness, of equity, of availability of access. These have a moral dimension to them,” says Sir Nigel.

For many people, artificial intelligence conjures up images of <a href="https://www.thenationalnews.com/world/europe/eu-s-plan-to-control-artificial-intelligence-will-have-positive-global-impact-1.1209579">robotic humanoids or complex technology used by big tech giants to influence us.</a> While this may be accurate in part, fundamental misperceptions are widespread.

"I sometimes reflect on the fact we might be moving back to almost an animistic culture where we imagine there's kind of a magic in our devices we don't need to worry about," Sir Nigel tells <em>The National.</em>

He has worked alongside Sir Tim Berners-Lee, inventor of the worldwide web, since 2009. In 2012, the duo went on to set up the <a href="https://theodi.org/about-the-odi/contact-us/">Open Data Institute</a>, which works with companies and governments to build a transparent, trustworthy data ecosystem.

“Data is kind of an infrastructure just like your roads and your power grid but you can't see it. It's invisible in a certain sense, but you know it's important and building that kind of infrastructure is hugely important,” says Sir Nigel, who was knighted in 2013 for his services to science and engineering.

Since the ODI was established, many national governments, regional authorities and public and private companies have gone on to publish their data online. In some countries, such as France, the commitment to open public data is now enshrined in law.

The pandemic naturally pushed the importance of data to the fore: from the UK government's dashboard of hospital admission rates to its test-and-trace system, gathering and sharing information was paramount in combating the virus.
With AI exerting such pervasive influence on our lives, Sir Nigel says there is a growing renaissance of interest in the field of ethics and AI. Civil rights groups have called for facial recognition software to be banned over fears that it encroaches on privacy through mass surveillance and reinforces racial discrimination. There are also concerns that these complex learning models can be fooled.

Earlier this year, a <a href="https://www.schwarzmancentre.ox.ac.uk/ethicsinai">new Institute for Ethics in AI was created at Oxford University with Sir Nigel as its chair</a>. He says the institute's aim is to examine the fairness and transparency of the many uses of AI so that they "empower and not oppress us".

“The algorithms and the data at scale can be really transformational. But, on the other hand, we need to reflect on the fact that there'll be two questions we've been talking about - about just how is that data used, and is it a fair representation and has the population consented?”

Co-author of <em>The Digital Ape: How to Live (in Peace) with Smart Machines</em>, Sir Nigel says it is an ongoing conversation between scientists, technologists and engineers on the one hand and legislators and ethicists on the other.

"Because these things, at the end of the day, express our values, what we think are important to seek to preserve in the societies we build," he points out.

The Facebook-Cambridge Analytica scandal and the numerous online data breaches at other companies have undoubtedly increased public awareness of the perils of handing over personal information. Yet a recent study by Penn State University researchers in the US suggests that users become more willing to share information when an AI offers them help, or asks them for it.
Nevertheless, fears about AI extend beyond its access to personal data to forecasts of what a truly intelligent machine might be capable of. Scientists at the Center for Humans and Machines at the Max Planck Institute for Human Development in Berlin recently concluded that human control of a super-intelligent AI would be impossible.

AI has been developing steadily since the Second World War and Alan Turing's code-breaking machines. It took a major leap forward in 1996, when world chess champion Garry Kasparov, facing the IBM supercomputer Deep Blue, said he could “smell a new kind of intelligence across the table”. Kasparov's defeat in the 1997 rematch is often identified as a symbolic turning point in AI catching up with human intelligence.

Eighteen years later came another exponential advance, when AlphaGo became the first computer program to defeat a professional human player at Go, the complex and challenging 3,000-year-old Chinese game.

The pandemic has accelerated the adoption of AI across sectors, particularly in healthcare, pushing it closer to being a necessity. In England, AI systems were used to screen patients’ lung scans for Covid-19 and to sift through the hundreds of research papers being published on the new virus.

“AI received a battlefield promotion as the crisis forced the pace of innovation and adoption,” said David Egan, a senior analyst at Columbia Threadneedle Investments, at a recent forum on investor opportunities in the field. “Companies that are more open to adopting AI are likely to do better and the benefit to those companies will compound at an exponential rate each year.”

Having surveyed the field for decades, Sir Nigel thinks now is the time to seize this "great opportunity" while also taking stock of the "bigger questions".
“Technical development has to go hand in hand with an appreciation of our values, why we're doing this, what kind of society we want to build, where we want decision making to reside, where the value of all this insight actually ends up landing.”