- Executives at Facebook, Google, and Snapchat might be held personally responsible if their firms don’t delete harmful content from their services, according to UK government plans seen by The Guardian.
- A government policy paper, to be published Monday, is expected to introduce a much stricter regulatory regime for tech firms.
- The UK will establish a new regulator that will have the power to fine tech firms billions if they don’t obey a new mandatory code of conduct.
- Australia is also mulling big fines and even threatening jail time in cases where big tech firms don’t tackle violent material on their sites.
Executives at Facebook, YouTube, Snapchat, and other big tech companies may be held personally responsible for harmful content on their services, according to UK government plans seen by The Guardian.
Details are scant, but the report suggests individual executives at the major tech firms will be held personally liable if their companies don’t delete content relating to terrorism, child abuse, self-harm, and suicide.
It is not clear exactly what this personal liability will entail, but the idea of criminal convictions has been floated. “We will consider all possible options for penalties,” Jeremy Wright, the UK’s culture secretary, told the BBC in February.
The British government is due to publish a policy paper on Monday, which is expected to radically toughen up how tech is regulated in the UK. Business Insider understands that the government’s plans are still in draft and will not be finalised until this weekend.
The UK’s digital minister Margot James told Business Insider in late February that the government would introduce a new tech regulator, which would have the power to impose massive fines on the likes of Facebook and Google if they don’t rid their platforms of harmful content.
James said the fines could be up to 4% of a company’s global turnover, meaning they could run into the billions of dollars in the most severe cases.
According to The Guardian, the government will initially ask the existing media regulator Ofcom to police the tech firms. Eventually, it will create an independent regulator, to be funded by a levy on tech firms.
The plans will cover not just social media platforms like Facebook, but online messaging services and even file-hosting sites.
The UK government paper comes as governments around the world grapple with how to deal with the proliferation of hate speech and other harmful content online. The internet has historically been seen as beyond regulation but certain shifts have emboldened governments to act.
One is that power has coalesced around a few dominant, publicly traded American companies, namely Facebook, Google, Amazon, Twitter, and Snapchat, which can be brought to heel.
The second is mounting concern over issues such as the proliferation of hate content online and the way young people use the internet. Facebook and other firms came under pressure after the Christchurch mosque shootings were livestreamed on Facebook and disseminated across other platforms. And the family of Molly Russell, a British teenager who killed herself, said she did so partly because of self-harm content she had seen on Instagram.
Australia’s government is also mulling strict new rules that could mean jail time for tech executives at firms that don’t delete “abhorrent violent material”, according to CNET. Those laws were likewise developed in the wake of the Christchurch shootings. The suspected shooter, Brenton Tarrant, is an Australian man.
A spokesman for the UK’s Department for Digital, Culture, Media, and Sport said in a statement: “We will shortly publish a White Paper which will set out the responsibilities of online platforms, how these responsibilities should be met and what would happen if they are not. We have heard calls for an internet regulator and to place a statutory ‘duty of care’ on platforms, and have seriously considered all options.”