A wave of new regulations aimed at managing <a href="https://www.thenationalnews.com/future/technology/2024/07/12/rise-of-the-robot-artificial-intelligence-sparks-explosive-progress-in-humanoid-machines/" target="_blank">the impact of artificial intelligence </a>is about to hit the corporate world across the globe. This month, <a href="https://www.thenationalnews.com/business/technology/2023/12/09/what-is-the-eus-new-ai-act-and-how-will-it-affect-the-industry/" target="_blank">the EU's AI Act </a>– a landmark piece of legislation to regulate the fast-evolving technology – has come into force across the 27-member bloc.

The EU is leading the way, but it is not alone in the charge. A global regulatory push will unfold over the next two years, introducing a variety of new rules and requirements in many countries. The first of these – the EU's Act – is regarded as the world's most restrictive of the forthcoming rules and could set a global precedent.

Multinational businesses are well versed in handling international regulations, but the integration of AI into products and services – everything from search engines to advanced <a href="https://www.thenationalnews.com/opinion/comment/2024/06/13/whats-the-big-fuss-about-apple-integrating-chatgpt/" target="_blank">generative AI tools like ChatGPT</a> – brings a new level of complexity. AI regulations span at least 10 policy areas, including data protection and intellectual property, and they vary from one jurisdiction to another, making the management of AI compliance one of the biggest challenges for the corporate world.

Given the still-evolving nature of AI regulations, companies must approach compliance even more carefully than usual as they try to maximise the benefits of AI while avoiding the risks of non-compliance. The benefits include enhanced efficiency and innovation; the risks include legal penalties and reputational damage to the business.

The St Gallen Endowment's Digital Policy Alert – a public repository of policy changes affecting the digital economy – provides valuable insight into these new and sometimes landmark AI rules. In the past year, the alert system has reported more than 440 new regulatory developments affecting AI, a surge driven by the rapid advances in the technology since the launch of ChatGPT in November 2022.

The US, the EU and the UK are the most active jurisdictions, with 173, 94 and 52 regulatory actions respectively. These cover policy areas tracked by the Digital Policy Alert, including design and testing standards, consumer protection, and data governance rules that require companies to ensure their AI training data sets do not misuse personal data. Other areas include competition and intellectual property, along with content moderation rules that aim to prevent AI systems from generating harmful content and require companies to have mechanisms in place for its removal.

Though various jurisdictions around the world are experimenting with AI regulation independently, effective compliance across the globe requires tighter co-ordination. Currently, only a few AI rules are well known and understood by business leaders, including the now-in-force EU Act. America's executive order on AI – signed by President Joe Biden in October 2023 – has received a mixed response from the corporate world. The big concern is that the executive order lacks the legislative enforcement power of the EU's Act.
China, meanwhile, has enacted three specific AI regulations addressing generative AI, deepfakes and recommendation algorithms. Countries such as Argentina, Brazil, Canada and South Korea are also expected to introduce their own AI laws soon, adding to the global compliance burden on multinational companies that sell their products and services across borders.

The red tape across countries and policy areas adds to the complexity for companies facing an expanding array of regulatory actions. A company operating AI-powered products and services across Asia, for instance, must comply with China's data regulations, India's consumer protection laws and Australia's online safety rules. Navigating this patchwork of national rules is already challenging, and the number of policy areas relevant to AI only amplifies the complexity. This is a new challenge, even for multinational corporations experienced in dealing with established international rules.

Different markets have their own emerging regulations across various digital policy areas, and enforcement agencies are applying them rigorously. Last year, California suspended Cruise's permit for autonomous vehicle deployment due to non-compliance with quality standards, dealing a blow to the self-driving-car subsidiary of General Motors.

Enforcement primarily targets AI model developers such as Microsoft-backed OpenAI, rather than companies integrating AI into their operations. When ChatGPT was launched, it sparked numerous investigations by data protection authorities worldwide. Competition authorities are scrutinising partnerships between AI providers to ensure fair practices, and US regulators are investigating potential political bias in Google's Gemini AI system. Such probes can lead to significant penalties for AI-driven platforms found to have broken the rules, as the GM-owned Cruise example illustrates.

So, how should executives at such companies think about AI compliance strategies? For one, they must integrate AI compliance risk into their strategic planning. Navigating the AI regulatory maze will also require vast resources that are, in many cases, already dedicated to managing existing data governance rules; a reallocation is now needed.

Compliance teams must also understand the diverse challenges posed by varying rules across numerous regions and policy areas. This requires a broader focus beyond data protection to include all relevant policy areas of AI compliance. Operational teams, meanwhile, must adapt to the specific requirements of each market to ensure technical compliance. As AI regulations continue to evolve and spread to new jurisdictions, so too will the compliance challenges.

<i>Tommaso Giardini is the associate director of the St Gallen Endowment's Digital Policy Alert, a strategic partner of the International Institute for Management Development in Lausanne, Switzerland</i>