Facebook is preparing a global livestream crackdown on its 2.3 billion users, with some to be blocked from broadcasting live video, ahead of a meeting with the Australian government as the US-based tech giant continues to grapple with the fallout of the Christchurch massacre.
Sources close to the social media platform said users who broke the site’s rules, such as by spreading hate speech, would be stopped from going live under the changes.
The new restrictions will likely be introduced before a government-led taskforce meeting later this month and ahead of the $500 billion business’ first quarter earnings call.
Facebook has previously met with the government, other social media companies and internet providers in a series of meetings to work through the online aftermath of the Christchurch terrorist attack.
The move comes a month after a terrorist attack in New Zealand was livestreamed by the alleged gunman on Facebook. It took about an hour for the video to be removed, and the footage was then re-uploaded millions of times across a range of social media platforms, all of which struggled to stop its spread.
Since the attack, there have been widespread calls for limits on livestreaming and Facebook has been criticised by global authorities for not making changes sooner.
The Australian government has been among those taking a tough stance against Facebook, pushing through new legislation threatening billions of dollars in fines and prison sentences for executives should websites not act fast enough to remove terrorist and objectionable footage from their platforms.
The government also organised a series of meetings with Facebook, Google, Twitter and telecommunications companies to come up with a plan to handle future terrorist events involving content shared online. The next meeting is on April 24. At earlier meetings, the government suggested imposing a delay ahead of users going live.
The upcoming clampdown will be similar to what Facebook chief operating officer Sheryl Sandberg said the company was “exploring” in an open letter published in the NZ Herald last month, sources said.
Users who break the platform’s rules on hate speech in particular are likely to be penalised under the changes and restricted from going live.
In the past, the social media giant has come under fire for allowing advertisers to target users with an interest in extremist ideologies like “fascism” on its platform and for enabling the spread of hate speech and misinformation. Facebook recently banned any content that could be considered white nationalist or white separatist.
Google tightened livestreaming rules on its video platform YouTube earlier this month, reintroducing a requirement for users to have 1000 subscribers before they can go live from a mobile device. When mobile livestreaming was introduced on YouTube in 2017, users needed 10,000 subscribers.
Channels must be verified and have no restrictions in the past 90 days to use the feature, with livestreaming disabled for those who breach copyright or break content rules. Users with fewer than 1000 subscribers can still go live using other devices, such as a webcam.
Sources close to the search giant said the decision was not solely due to the Christchurch incident and came after a broader review.
A YouTube spokeswoman said the platform “regularly re-evaluate[s] our eligibility requirements and decided to raise the threshold for streaming”.
Twitter has not made any announcements about changing its livestream options.
This article was originally published by the Sydney Morning Herald.