Ethics watchdog: Facebook, Google, and Twitter should be prosecuted for illegal content

  • Facebook, Google, Twitter, and other web firms are likely to face new sanctions in the UK over hosting abusive content on their platforms.
  • The government’s ethics committee has recommended that prime minister Theresa May sue the web giants if they fail to delete abusive posts in time.
  • The committee also thinks Facebook, Google, Twitter, and other social media firms should be “liable” for what they publish – a major shift.
  • Facebook said the report showed there was a “fundamental problem” of abuse, which was “hard to tackle.”

Prime minister Theresa May has been advised to sue Google, Facebook, Twitter, and other tech firms that fail to delete abusive content, in a new report from the government’s ethics watchdog.

The term “abusive” could mean racist, sexist, terrorist, and other hateful posts.

In a major shift, the Committee for Standards in Public Life argued that web giants should be “liable” for the content they publish. Google, Facebook, and Twitter currently argue that they are just platforms and not responsible for the content hosted on their sites the way normal publishers are.

Here are the three most important recommendations from the report:

  • The government should speed up legislation to make tech firms liable for illegal online content.
  • Tech firms should make fast, consistent decisions about taking down posts that intimidate politicians.
  • The government should create a new law which makes intimidating MPs, candidates, and campaigners an offence.

The report is part of an inquiry into intimidation of MPs during elections, much of which takes place on social media.

It argues that, under EU rules, social media firms are not currently required to proactively search for illegal content. The UK could change that once it leaves the European Union.

Importantly, the report doesn’t claim social media companies “should be considered fully to be the publishers of the content on their sites.” If Twitter, Google, and Facebook were reclassified as publishers, they would be liable for infringing content, and open to harsher regulation.

Chair Lord Bew said: “The increasing scale and intensity of this issue demands a serious response. We are not alone in believing that more must be done to combat online behaviour in particular and we have been persuaded that the time has come for the government to legislate to shift the liability for illegal content online towards social media companies, and to consult on the introduction of a new electoral offence.”

Here’s what Facebook said in response to the report:

“We want Parliamentarians and election candidates to feel safe on Facebook. During the 2017 UK General Election we offered advice and training to over 5000 candidates on how to report abusive content and keep their accounts secure. We also provided a dedicated rapid response channel so that we could respond to their concerns quickly, and tried to educate as many candidates on this useful tool once the snap election was called.

“This report shows there is a fundamental problem of disrespect and abuse in our politics, which is as hard to tackle online as it is offline. We agree with Lord Bew’s report that this is a shared challenge which everyone needs to do more to address. We’re making significant investments in hiring more people who understand the issues around candidate safety and continue to work with organisations such as The Fawcett Society and Reclaim The Internet to improve how we tackle this kind of abuse.”

Twitter’s policy chief Nick Pickles said:

“Abuse and harassment – no matter the victim – have no place on Twitter. As the report notes, our team uses technology to proactively find abusive content and provides users with a single report that they can email to the police.

“We’ve also introduced a suite of new tools for users to control their entire Twitter experience. We’re now taking action on ten times the number of accounts every day compared to the same time last year and using new technology to limit account functionality or place suspensions on thousands more abusive accounts. We’re also launching more updates both this week and next that are aimed at continuing to expand the scope of our safety efforts across the platform.

“We remain committed to playing our part in the electoral process and working with political parties to support candidates, as well as working with the police and parliamentary authorities to facilitate their vital work.”

Google has not responded to a request for comment.

May has already proposed sterner rules for web firms, particularly when it comes to taking down terrorist content.

The committee report follows aggressive investigations by the press into the kind of content Twitter, Facebook, and Google are willing to put up with. A Tuesday report in The Sun found “hundreds” of Twitter users use the site to talk about their sexual urges towards children.

The Sun examined accounts that describe themselves as “non-offending” and “anti-contact” – terms used by self-confessed paedophiles who say they have never harmed children. The newspaper found some of these accounts talking about children, though it did not appear to find any examples of illegal or explicit content. A Twitter source told the newspaper the information had been handed to the police, and that police had not requested further data.

Elsewhere, the BBC found Amazon-owned gaming site Twitch was hosting sexualised content, while YouTube has been under fire for several weeks over inappropriate content aimed at children.
