- As countries at the G20 unite to urge social media platforms to stop extreme content spreading online, Australia has released nine recommendations it believes will actually achieve that goal.
- From developing new technology to stop material surfacing, to tinkering with existing algorithms and running test scenarios, the recommendations cover every part of social media.
- Prime Minister Scott Morrison has announced he will share the new report with G20 countries to inform their own national strategies.
Since March’s Christchurch massacre was streamed to thousands on social media platforms, there’s been plenty of talk about enforcing stricter standards online.
“(There is) a growing consensus internationally that the internet should not be a forum or tool for the proliferation of harmful content, and more needs to be done — particularly by the larger and well-resourced digital platforms — to make the internet a safer place,” the report said.
Australia’s Prime Minister Scott Morrison believes his strategy could help guide other countries. He suggested to reporters at the G20 that he would share the report around.
The report outlines recommendations to help combat terrorist and extreme violent material online, and follows a strikingly similar nine-point plan devised by social media giants, including Facebook and Google, in May.
While Facebook welcomed the report, it pointed out that it already has a significant head start on the government’s plan and has undertaken several reviews of its own.
“In May, we announced restrictions to who can use Facebook Live and met with world leaders in Paris to sign the New Zealand Government’s Christchurch Call to Action,” a Facebook spokesperson told Business Insider Australia.
“We also co-developed a nine-point industry plan in partnership with Microsoft, Twitter, Google and Amazon, which outlines the concrete steps we’re taking to address the abuse of technology to spread terrorist content.”
Google echoed its commitment to cleaning up its online platforms.
“Over the last few years we have invested heavily in human review teams and smart technology that helps us quickly detect, review, and remove this type of content. We are committed to leading the way in developing new technologies and standards for identifying and removing terrorist content,” a spokesperson said.
Although the Australian government’s plan mirrors that of the social media giants, there is one important difference: the social media giants expressed a willingness to research how online hate leads to discrimination and bigotry in the offline sphere, a strategy which is conspicuous in its absence from the government’s plan.
Here’s what the government has proposed:
1. Develop the tech to stop it ever going up
Preventing the material from ever reaching the platform is evidently the easiest way to deal with the problem, the taskforce concluded.
To do just that, it proposes that platforms develop the capability to automatically stop extreme content from being broadcast, and share the details of blocked content to help others identify it.
Information about the material should be recorded and shared in a database of URLs, which users will be prevented from sharing.
Interestingly, Facebook and others should also be pressured to review their algorithms so that users are not shown extreme content that does get uploaded, and “positive alternatives or counter-narratives” should be promoted instead.
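The shared database idea resembles the hash-sharing approach the industry already uses: rather than exchanging the material itself, platforms exchange fingerprints of known files. Here is a minimal sketch of that mechanism; the blocklist contents, names and audience for such a database are assumptions for illustration, not details from the report.

```python
import hashlib

# Hypothetical shared blocklist: SHA-256 digests of files already flagged
# as extremist material, as might be distributed via an industry database.
BLOCKED_HASHES = {
    # SHA-256 digest of the bytes b"test", standing in for a flagged file
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(data: bytes) -> str:
    """Return the SHA-256 hex digest used as the file's fingerprint."""
    return hashlib.sha256(data).hexdigest()

def should_block_upload(data: bytes) -> bool:
    """Reject an upload whose fingerprint appears in the shared blocklist."""
    return fingerprint(data) in BLOCKED_HASHES
```

Sharing digests rather than the files themselves lets platforms recognise exact re-uploads without redistributing the material, though it does not by itself catch edited or re-encoded copies.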
2. Put moderators to work
For material that does make it online, social media platforms need the ability to quickly review it and remove what’s illegitimate.
The taskforce believes that should be prioritised alongside the removal of other abhorrent material like child abuse.
To do so, users need to be able to easily report anything they see.
3. Crackdown on livestreaming
Given it was at the heart of the Christchurch massacre, the livestreaming of video content remains a top priority.
To address this, the taskforce recommends that accounts be active for at least 24 hours before they can livestream, that user validation be made more stringent, and that audience sizes be limited.
Beyond that, streamers should be rated by the content they upload and accounts should be more closely monitored.
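Two of these restrictions are simple enough to check mechanically. The sketch below applies the 24-hour account-age rule and an audience cap; the specific cap value and function names are assumptions for illustration, as the report does not specify them.

```python
from datetime import datetime, timedelta

MIN_ACCOUNT_AGE = timedelta(hours=24)  # accounts must be at least a day old
MAX_AUDIENCE = 1000                    # hypothetical audience-size cap

def may_livestream(created_at: datetime, requested_audience: int,
                   now: datetime) -> bool:
    """Apply the two checkable restrictions: account age and audience size."""
    old_enough = (now - created_at) >= MIN_ACCOUNT_AGE
    return old_enough and requested_audience <= MAX_AUDIENCE
```

The remaining recommendations, such as rating streamers by the content they upload, would need moderation signals that cannot be reduced to a single rule like this.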
4. Get the government to ‘test’ tech
In order to get it right, the government is considering practice scenarios.
A ‘testing event’ will be held in 2019/2020, in which a simulated incident will take place and the tech platforms’ responses will be analysed.
Government bodies will also consult with social media platforms where possible about what they are seeing in terrorist circles and what to be aware of.
5. Block extreme content
The government will continue blocking domains that try to broadcast footage from Christchurch, and will add domains relating to new material to the blocked list.
6. Create an emergency response
A Crisis Coordination Centre should work 24/7 to notify relevant agencies of any terrorist or extremely violent activity so as to stop material circulating.
7. Measure progress
Social media platforms and government agencies need to be assessed and marked regularly in order to keep improving.
The recommendations note that platforms will also be required to reveal at least twice a year how much content is being blocked.
8. Make social media giants overhaul their systems
Despite concentrated pressure to remove extreme material, the report recognises that sometimes there will be legitimate reasons to share it, for example when it’s of public interest or in order to condemn it.
Platforms will be expected to distinguish between the two and avoid blocking legitimate users and content.
9. Maintain a strong international agency
In order to achieve all of this, the taskforce calls for a strong Global Internet Forum to Counter Terrorism (GIFCT).
The GIFCT was set up to create international online standards to combat the rise of terrorism-related material.
Australia now wants it to continue its work, particularly in the area of developing and sharing machine learning technologies.