The eSafety office has raised concerns about the role of terrorist and extremist material in ‘online radicalisation.’
Australia’s eSafety Commissioner has called for more regulation due to concerns about the spread of “terrorist and extremist” material on social media.
This came after Australia’s terrorism threat level was raised from “possible” to “probable” amid online radicalisation and tension in the Middle East.
This includes the misuse of live-streaming, algorithms, and recommender systems.
“The tech companies that provide these services have a responsibility to ensure that these features and their services cannot be exploited to perpetrate such harm, which goes to the heart of eSafety’s safety by design principles and mandatory industry codes and standards.”
Social Media Can ‘Spill Over Into Real Life’
Recent riots on the streets of the UK have demonstrated how online material can incite real-life conflict and harm, eSafety said.
Terrorist attacks in Christchurch, New Zealand, Halle, Germany, and Buffalo, New York, were noted as examples of social media being “exploited” by violent extremists.
eSafety revealed it was still receiving reports that perpetrator-produced material from these attacks, including Christchurch, had been shared on “mainstream” social media platforms.
“In March, we sent transparency notices under Australia’s Online Safety Act to platforms—Google, Meta, Twitter/X, WhatsApp, Telegram, and Reddit—to find out what they are doing to protect Australians from terrorist and violent extremist material and activity,” eSafety said. “We will publish appropriate findings in due course.”
However, eSafety said it remained unclear what commitments the tech companies were fulfilling, despite transparency being a major pillar of the Global Internet Forum to Counter Terrorism and the Christchurch Call.
“It is of great concern that we do not know the answer to a number of fundamental questions about the systems, processes and resources that these tech behemoths have in place to keep Australians safe,” the commissioner said.
The eSafety office said none of these big companies had provided this information via a voluntary framework, arguing this is why regulation is needed.
“This shows why regulation, and mandatory notices, are needed to truly understand the true scope of challenges, and opportunities to make these companies more accountable for the content and conduct they are amplifying on their platforms,” eSafety said.
Political Leaders Respond to Terrorism Threat Online
Meanwhile, Prime Minister Anthony Albanese weighed in on the threat of extremist content online during a press conference on August 6.
He noted the various ways the government is helping individuals disengage from violent extremism.
“We’ve brought forward the review of the Online Safety Act, which is important, and we’re embarking on that national conversation about age limits for social media. We also are engaged in our national intervention program.”
Of the raised threat level, he said: “It’s based upon the view that it is more than a 50 percent chance there will be either a terrorist act or a planned act in the coming year.”
“We know there have been eight events in recent times that have either been conducted or have been attempted or planned.”
In an interview with Sky News on August 6, Opposition Leader Peter Dutton encouraged the Australian community to report unusual behaviour to ASIO or the Federal Police.
In response to host Laura Jayes’ concerns about “conspiracy theorists and anti-vaxxers who are finding support in dark corners of the internet,” Dutton acknowledged the significant mental health impacts resulting from strict COVID-19 lockdown measures.
Global ‘Free Speech Threat’
Separately, eSafety Commissioner Julie Inman Grant attracted international media attention after issuing a global takedown order to Elon Musk’s X Corporation. In June, she withdrew the Federal Court action against the tech giant.
However, the Institute of Public Affairs described her as a “global free speech threat” amid the battle with X.