Social media companies that fail to protect users from harm would face bans and heavy fines under new proposals announced by the UK government. The policies were launched on Monday in a long-awaited white paper on online harms and would also make company directors personally accountable for the behaviour of users. A 12-week consultation on the proposals will now take place.

The announcement comes less than four weeks after the Christchurch massacre in New Zealand, in which the terrorist live-streamed footage of the mosque shootings on Facebook for 17 minutes. The company said the artificial intelligence it uses to monitor content failed to flag the video.

Under the UK government’s proposals, companies would be sanctioned if they failed to stop child abuse or to prevent users from viewing or sharing extremist content. The government intends to impose a “short, predetermined time frame” in which offensive posts must be removed. It also proposes a new statutory duty of care on social media companies, and the creation of an independent regulator.

Prime Minister Theresa May said self-regulation among social media companies was at an end. “Online companies must start taking responsibility for their platforms, and help restore public trust in this technology,” she said.

Home Secretary Sajid Javid and Culture Secretary Jeremy Wright said tougher measures were necessary because self-regulation had proved ineffective. “Voluntary actions from the industry to tackle online harms have not been applied consistently or gone far enough. Tech can be an incredible force for good and we want the sector to be part of the solution in protecting their users. However, those that fail to do this will face tough action,” they said in a joint statement.

Last week, the UK’s security minister, Ben Wallace, said technology companies needed to do more to combat the rise of extremism online. He said the threat from far-right extremists was almost on a par with ISIS, with neo-Nazis using the same social media methods as other extremist groups to recruit supporters.

Sara Khan, of the UK’s Commission for Countering Extremism, said more needed to be done to tackle terror online. “Social media has been a game-changer for extremists. Tackling harmful but legal extremist content online must remain high on the agenda for a duty of care and a new regulator,” she said.

“Social media has allowed extremists to connect, organise and promote their propaganda and disinformation at a frightening rate.

“Despite repeated requests for self-regulation, the government has now had to step in to force companies to take action. We also need better conversations online, a positive use of algorithms, and a proactive and effective response to ensure extremists don’t exploit advertising on social media to further their cause.”

Last week, the Australian government passed new legislation imposing large fines and prison sentences on social media executives who fail to rapidly remove “abhorrent violent material” from their platforms.

The British white paper outlines the government’s proposals to tackle abuse and extremism, and the sanctions it will use to enforce them. Social media companies have faced criticism from the families of young people who took their own lives after viewing inappropriate content. Reports of child abuse online have also risen globally, from 110,000 complaints in 2004 to 18.4 million last year.
“For too long social networks have failed to prioritise children’s safety and left them exposed to grooming, abuse, and harmful content,” Peter Wanless, chief executive of UK children’s charity the NSPCC, said. “It’s high time they were forced to act through this legally binding duty to protect children, backed up with hefty punishments if they fail to do so.”