Artificial Intelligence has moved to the top of the business agenda as companies race to understand the benefits of AI and to leverage advances like generative AI and big data analytics in their operations. Alongside the hype, there is a rising tide of concern about the technology's potential negative impact. Some fears surrounding AI have been around for some time, ranging from concerns that AI will leave humans without jobs to far-fetched ideas about killer robots. As awareness of AI's capabilities has grown, so has the understanding of just how this fundamental technology could inadvertently cause harm as well as good. The technology industry, and governments, are realising that there is a genuine risk that unsupervised or unaccountable AI solutions could create issues in many different areas.

Recently there has been a surge of government activity as policymakers move to legislate against these potential harms. At the end of October 2023, US President Joe Biden issued an executive order on AI, which seeks to develop ‘guardrails’ around the technology. The president said: “To realise the promise of AI and avoid the risk, we need to govern this technology.” The executive order was followed a few days later by the signing of the UK’s Bletchley Declaration by 28 countries, including the UAE. This declaration is another drive for global coordination on AI regulation, stating that “for the good of all, AI should be designed, developed, deployed, and used, in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible.” Additionally, the UAE published its own AI Ethics Principles and Guidelines in December 2022, and many other nations are taking a similarly proactive approach to AI legislation. It is also worth noting that both the US executive order and the Bletchley Declaration recognise the potential value that AI can bring.
But in the rush to protect human rights, safety, privacy and so on from AI, we need to be careful that legislation does not stifle innovation. Making laws to govern technology can be challenging. The fast pace of technological change – and AI is moving exceptionally quickly – makes it difficult for politicians to create policies and regulations that do not quickly become outdated or ineffective. Regulation also needs the flexibility to keep up with advances in the technology, or it can become a hindrance to growth.

For cutting-edge technology such as AI, over-regulation could have a serious impact in several ways. The biggest risk is hindering innovation. Technology thrives when developers, start-ups and companies are able to experiment, to try new ideas and new solutions. Researchers and developers need the freedom to innovate – within reason – and overly strict regulations might discourage organisations from experimenting with AI technology. Excessive regulation can shift the focus to compliance, and companies may feel dissuaded from launching innovative projects that are likely to hit a regulatory barrier. Over-regulation might also affect investment in the sector and make it more difficult for AI start-ups and smaller players to operate, reducing competition and innovation in the market overall. In terms of real-world adoption, companies may find that compliance with AI regulation is too expensive or too complex, slowing the rate of AI adoption, especially in fields like healthcare or education where there is potentially so much to gain.

As with everything, getting the regulation of AI right will be a matter of balance between innovation and protection. The Bletchley Declaration recognises the “importance of a pro-innovation and proportionate governance and regulatory approach” to AI, which is a good indicator that governments understand the delicate balance required to create effective AI regulation.
The technology sector, along with stakeholders from other industries and academia, needs to support government efforts to create flexible and adaptive regulations that will ensure a responsible, dynamic and innovative AI industry for the years ahead.