Britain on Tuesday will announce a crackdown on technology companies that fail to protect people from exposure to illegal content such as that relating to child sexual abuse, terrorism or suicide.

Companies failing to protect people face fines of up to 10 per cent of turnover, or up to £18 million ($24m), whichever is higher. They may also have their sites blocked, and the government will have the power to impose sanctions on senior management.

Digital Secretary Oliver Dowden and Home Secretary Priti Patel are set to announce the government’s final decisions on the laws on Tuesday.

The regulations will apply to any company in the world hosting user-generated content online that is accessible to people in the UK, or enabling them to interact privately or publicly with others online. That includes social media, video-sharing and instant-messaging platforms, online forums, dating apps, commercial pornography websites, online marketplaces, peer-to-peer services, consumer cloud-storage sites and video games that allow interaction. Search engines will also be subject to the new regulations.

The legislation will include protections for freedom of expression and pluralism online, allowing people to take part in society and engage in robust debate. But the new laws will not affect articles and comments sections on news websites, and there will be additional measures to protect free speech.

Tech platforms will need to work harder to protect children from being exposed to harmful content or activity such as grooming, bullying and pornography.

The most popular social-media sites, with the largest audiences and high-risk features, will need to set and enforce clear terms and conditions that explicitly state how they will handle content that is legal but could cause significant physical or psychological harm to adults. This includes dangerous disinformation about coronavirus vaccines, for example.

“We are giving internet users the protection they deserve and are working with companies to tackle some of the abuses happening on the web,” Ms Patel said. “We will not allow child sexual abuse, terrorist material and other harmful content to fester on online platforms. Tech companies must put public safety first or face the consequences.”

The government plans to bring the laws forward in an Online Safety Bill next year.

Tech companies will be placed in different categories, depending on how large and high risk they are considered to be. A small group of companies with the largest online presences and high-risk features, likely to include Facebook, TikTok, Instagram and Twitter, will be in Category 1. These companies will need to assess the risk of legal content or activity on their services that carries “a reasonably foreseeable risk of causing significant physical or psychological harm to adults”. They will then need to make clear in their terms and conditions what type of “legal but harmful” content is acceptable on their platforms, and enforce this transparently and consistently.

All companies will need mechanisms so people can easily report harmful content or activity, while also being able to appeal against content being taken down. Category 1 companies will also be required to publish transparency reports about the steps they are taking to tackle online harm.
Examples of Category 2 services include platforms that host dating services or pornography, and private-messaging apps. Fewer than 3 per cent of UK businesses will fall within the scope of the legislation, and the vast majority of those will be Category 2 services.