The European Commission has put human autonomy and accountability at the heart of new guidelines intended to regulate artificial intelligence and build public trust in the technology. The EU initiative sets out seven essential requirements for AI and follows a global debate on whether companies should prioritise ethical concerns over business interests. Brussels hopes its guidelines will quell concerns among EU citizens about the technology while giving European companies a competitive edge in the industry that will boost global exports.

“We do not want to stop innovation, but the added value of the EU approach is that we are making it a people-focused process. People are in charge,” said Mariya Gabriel, EU commissioner for the digital economy.

The guidelines are designed to ensure, among other things, that algorithms do not discriminate on the grounds of age, gender or race. However, EU officials have said repeatedly that there are no plans to move beyond non-binding guidelines and issue legislation on AI.

A draft of the guidelines was published in December last year by an independent expert group, which took into account more than 500 comments received through the European AI Alliance, a forum where companies, public administrations and organisations can engage in discussions with the experts drafting the guidelines.

“The ethical dimension of AI is not a luxury feature or an add-on. It is only with trust that our society can fully benefit from technologies,” said Andrus Ansip, the Commission’s digital chief.

The guidelines aim to ensure the following seven requirements are met:

- Human agency and oversight
- Technical robustness and safety
- Privacy and data governance
- Transparency
- Diversity, non-discrimination and fairness
- Societal and environmental well-being
- Accountability

The Commission will launch a pilot phase this summer involving a wide range of stakeholders. Early next year, the expert group will review the requirements and propose any next steps.

IBM Europe chairman Martin Jetter, who was part of the expert group, said the guidelines “set a global standard for efforts to advance AI that is ethical and responsible”.