Facebook’s snap decision to pull news off its platform shows it could do the same with hate speech, medical misinformation and fake news, leading cyber experts have said.
The dramatic decision to block Australian users from sharing or viewing news content on the platform was a protest against the government’s proposed regulation that would see Facebook pay some publishers for content.
The move has enraged critics, including Prime Minister Scott Morrison, who denounced the action as “arrogant”.
“Facebook’s actions to unfriend Australia today, cutting off essential information services on health and emergency services, were as arrogant as they were disappointing,” he said on Friday.
But the snap shutdown also raised another question – if Facebook can do it to news pages, why can’t it do the same to pages that peddle misinformation and hate speech?
“This is showing us they can shut down sites or sources of content readily,” said Dr Zac Rogers from the JBC Digital Technology, Security and Governance.
“There’s no reason why they couldn’t shut down sources of misinformation readily as well.”
Dr Rogers said the mechanics were complex, but Facebook could have either used an automated program to shut down the news pages or removed them by hand.
“There’s complexity, but it shows us how powerful these gatekeepers are,” he said.
Australian Facebook groups sharing fake news about COVID vaccine development, including spurious claims that the treatments have not been adequately tested or properly authorised, were all active on the site on Friday afternoon. So, too, were peddlers of political conspiracies accusing political leaders of being friends with pedophiles, plus groups pushing the “white genocide” theory.
Profiting off hate
There is no escaping the fact that Facebook profits off hate speech, Dr Rogers said.
“We know extremism and things that push the boundaries create a lot of clicks,” he said.
The content that generates the most clicks helps Facebook mine data from its users, which in turn it sells on.
So the posts that create outrage, or the most engagement, are the most valuable to the company, Dr Rogers said.
“Facebook profits from selling user data into the ad tech market. It is the company’s biggest revenue generator,” he said.
“If you want to connect the dots, hate speech is a bigger data generator than more moderate content.”
In an open letter in July last year, Facebook’s Nick Clegg, VP of Global Affairs and Communications, said the company did not “benefit from hate”.
“Billions of people use Facebook and Instagram because they have good experiences — they don’t want to see hateful content, our advertisers don’t want to see it, and we don’t want to see it. There is no incentive for us to do anything but remove it,” he wrote.
But the company and its moderation standards have come under increasing fire.
In one infamous case, the platform removed the iconic Vietnam-era war photo of a naked girl fleeing a napalm attack, yet decided posts denying the Holocaust could stay up – until late last year.
Chairman of the Anti-Defamation Commission Dr Dvir Abramovich has long campaigned against Facebook allowing hate speech on its platform.
“For too long, Facebook has allowed the most base impulses that lurk at the fringes of our society to find a large audience and to enter into the bloodstream of too many people,” he said.
“Bigots do not deserve a microphone.”
He said Facebook had taken positive steps to curb some violent groups but more needed to be done to make sure the site was safe for everyone.
“As long as corporations keep pouring money into advertising on Facebook without holding the company to account for its leadership failures on this issue, we will all be less safe.”