Regulatory harmonisation – the practice through which tech regulators align policies and procedures across markets – has been a trend since the end of the Second World War. It is heralded as a tool that enhances trade, ensures product safety, fosters innovation and even increases mutual dependence, thereby promoting world peace. The EU is an evolving example of what can be achieved through harmonisation. It also lays bare the limits of this practice, for it is no longer clear, even as regulations multiply worldwide, that harmonisation is always desirable, or indeed realistic. In fact, some of the biggest names in tech argue that technological progress should be paused, and countries are now imposing restrictions on one another’s innovations. The US, for example, prohibits semiconductor chipmakers from selling advanced chips to China, and Italy, among other countries, has blocked access to ChatGPT.
We live in a world shaped by rising nationalism and widening inequalities, which presses us to address a critical question: how can we build a digital world that is safe and beneficial for all? Is it enough to call for, say, China and the US to adopt the EU’s rules on digital services and artificial intelligence, while China and the EU adopt American financial regulations? We don’t think so. In fact, we argue that calls for regulatory harmonisation to “tackle collectively” the risks posed by technology are misguided unless the goals of the intended regulation, and the values that are key to its successful implementation, are examined before such calls are made, or at least simultaneously with them. Any continued push for harmonisation without agreement on goals and values will prove counterproductive and risky.
It is this debate about goals and values that needs to take place regarding global technology. Without it, we face dire consequences from failing to get global technology regulations right. It is no surprise that the World Economic Forum’s 2023 global risk report warns that technology will “exacerbate inequalities” and that cybersecurity threats will “remain a constant concern” for the future. Meanwhile, the United Nations Human Rights Office reports that new technologies – specifically spyware, surveillance technology, biometrics and AI – “are being misused worldwide to restrict and violate human rights”. Indeed, leading tech figures such as Elon Musk and former Google chief executive Eric Schmidt are now convinced that humanity’s survival is at stake if we do not effectively govern technological progress.
The second risk of harmonising regulations pertains to implementation. International regulators often work together to craft similar guidelines and technical requirements, yet not all jurisdictions achieve the same desired outcomes. As we can easily imagine, organisations will lobby for terms that serve their own interests. Enforcement and implementation also tend to be uneven across regions, countries, and even among regions within the same country.
In this respect, Switzerland offers an example of effective regulation. The Swiss government delegates most regulatory authority to the cantons. At the local level, goals and values are more easily shared and understood, so people are less likely to violate or circumvent laws. Things work well because regulation is decentralised and adapted to the culture of each region within an overall federal framework.
Conversely, lawbreakers rationalise their actions by accusing regulators of lacking an understanding of their goals or their ways of working. Take the financial sector. Prudential regulation aims to ensure the stability of both financial institutions and the economy by mandating control mechanisms for risk management at a macro level. Yet some bankers repeatedly come up with creative ways to increase their financial gains – personal or corporate – while concealing risks. The global financial crisis and, more recently, the Silicon Valley Bank collapse and the demise of Credit Suisse are examples of how well-intended regulations can fail. They also reflect the gap between the spirit of laws and their impact on different actors, each of whom is driven by the pursuit of their own goals and values.
There is another, perhaps bigger, problem with aligning regulations: laws can be copied, but cultural contexts cannot, and the copy leaves the spirit behind. Different cultural contexts will affect how laws are implemented and enforced. There is a further risk in how this problem could play out on the global stage: nations may adopt the regulations of others to spur trade and investment, only to drop those rules once they have sufficient size and clout. If that happens, legal harmonisation will have created a new and fragile global power balance, with unpredictable consequences. Some are outright frightening, including the weaponisation of AI systems as Trojan horses.
To mitigate these problems and ensure that regulations are effective across diverse markets, we must foster trust and commitment in these markets and across the regions where they operate. Agreement on the values and goals that will drive laws and regulations, as well as their implementation, is thus key. We should never lose sight of the fact that regulations are only mechanisms or instruments; it is only logical, then, to start by discussing and agreeing on the ends. If people understand and believe in the intended aims of regulations, and in the values that underpin them and will be called upon in their implementation, compliance will be far more likely, and trust in the regulation and in the regulators will grow commensurately. And, by reciprocity, regulators will trust the people more. This principle holds true across the board, for all governance actors, whether governments, multilateral organisations or companies.
Goals set clear perimeters for what regulations are meant to achieve, and their clarity is fundamental to effective governance. For example, the EU’s Digital Services Act aims to protect online users from disinformation and harmful or illegal content by increasing oversight of online platforms, while also fostering innovation for greater effectiveness. These goals are not country- or region-specific; hence it should not surprise us that all EU countries adopted the Act, a remarkable feat for the bloc.
Values capture the main underlying drivers of behaviour, both of the regulators and the regulated. Alignment of values with goals is essential if the goals are to be achieved. For technology, values may range from privacy and freedom of expression to innovation and safety. The OECD AI Principles are a good example.
A century ago, the philosopher Bertrand Russell extolled what he saw as Chinese virtues: respect for both individual dignity and public opinion, a love of science and education, and an aptitude for patience and compromise.
Russell, with remarkable foresight, cautioned the West against expecting China to bend to its will – advice that is eerily relevant today in the context of global co-operation on regulating tech. “If intercourse between western nations and China is to be fruitful, we must cease to regard ourselves as missionaries of a superior civilisation.” This, of course, applies to all regions and cultures; everyone would be wise to take Russell’s warning seriously. There are not, and should not be, any “missionaries” today. There is only a collective mission we should all align with: to ensure the safety and well-being of the world, and to protect it from systemic risks – whether climate, geopolitical or technological.
As Schmidt argues in a recent commentary on technology and geopolitics in <i>Foreign Affairs</i>, we are locked in a global competition not just amongst nations, but also among systems. “At stake is nothing less than the future of free societies, open markets, democratic government, and the broader world order,” he writes. Of course, one should replace the values Schmidt promotes with ones we all collectively aspire to and agree upon, while making sure our common values also serve our common mission on this planet: arguably, safety and sustainability need to be part of that.
Schmidt’s comments reflect a unilateralism that, in our view, is ill-suited to dealing with the threat posed by AI. Yet the world, led by China and the US, is pursuing this road. Instead, these countries – perhaps facilitated by the EU – ought to engage each other in agreeing on the shared goals and values that are the basis for countering what is arguably the existential threat to humankind second only to climate change.
The answer to averting a tech-driven Armageddon, for us, is neither the pause in technological innovation that some call for, nor regulatory harmonisation achieved in isolation. Instead, alignment of and commitment to global goals and values will be the paramount drivers of co-operation and effective regulatory implementation.
The United Nations was formed after the Second World War towards this end. The growing divergence of goals and values among UN members today poses a grave risk to the organisation’s mission: it has become a forum for states to fuel nationalism and further their own national or regional goals. The macro goal of the UN should be to save the planet, as it started to do with the UN Global Compact. It now needs to move forward by inducing more ambitious action on both climate change and the challenge posed by AI.
Hopefully it will not take a major tech-driven crisis for us to truly start aligning our goals and values. We should do so proactively, by establishing – for a start – a new tech-specific global organisation or UN agency where such alignment can emerge and be built. It will not only make the world a safer place, but also help ensure our survival.
<i>Theodoros Evgeniou is a Professor of Decision Sciences and Technology Management at Insead</i>
<i>Ludo Van der Heyden is the Insead Chaired Professor of Corporate Governance and Emeritus Professor of Technology and Operations Management</i>
A version of this article was first published in Insead Knowledge