Facebook is preparing to tackle interference campaigns aimed at misinforming and manipulating voters ahead of the EU’s elections in May.
Last year the company acknowledged that social media could have a damaging impact on democracy and admitted it was “too slow to recognise” Russian attempts to interfere in the US presidential election.
To protect its users from similar interference in the upcoming EU elections, Facebook has announced a range of new fact-checking features.
These features won’t prevent people from posting false stories, but aim to stop them from going viral and provide users with enough information to identify them.
Facebook has 21 fact-checking partners covering stories in 14 European languages, all of them organisations belonging to the International Fact-Checking Network, run by the Poynter Institute journalism school in the US.
When these groups rate a story as false it will be shown lower in the news feed, which “reduces the spread of the story and the number of people who see it”, the company argued.
“In our experience, once a story is rated as false, we’ve been able to reduce its distribution by 80%,” claimed the company’s product manager Antonia Woodford.
“Pages and domains that repeatedly share false news will also see their distribution reduced and their ability to monetise and advertise removed. This helps curb the spread of financially motivated false news.”
An expanded context button will also show more details about the source of an article, revealing if pages have historically shared misinformation, and identifying where publishers link to their editorial standards and corrections policy.
The importance of tackling fake news, or propaganda, was stressed following Donald Trump’s surprise victory in the 2016 US presidential election.
Facebook founder Mark Zuckerberg appeared before senators last April after the company admitted as many as 87m users had their data used by Cambridge Analytica, a firm working on the Trump election campaign.
At the time, the 33-year-old billionaire took the blame for the massive data breach, and said that the company had failed to understand its platform could be used for harm as well as good.
In a study examining Brexit-related activity on Twitter, researchers from the cyber security firm F-Secure identified fake accounts that had been attempting to influence both sides of the debate.
However, the firm found that astroturfing – the practice of faking grassroots support for a cause or subject – was “far more prominent in Leave conversations” than on Remain’s side.
In February 2018, the US filed charges against 13 employees of a Russian troll factory and accused them of attempting to interfere with the 2016 presidential election.
The indictment found that the troll factory had been conducting disinformation operations across all of the major social media platforms, including YouTube, Facebook, Instagram and Twitter.
While the indictment remains the most detailed legal document regarding fake accounts on social media, little information is publicly available about platforms other than Twitter, as they do not give researchers access to their data.