After Cambridge Analytica, Do You Trust Facebook to Police the News?

Published April 4th, 2018 - 12:46 GMT
Do you trust Facebook? /Al Bawaba

By Eleanor Beevor

  • Facebook is desperate to regain public trust
  • The social media website is also trying to identify and take down fake accounts
  • Misinformation can have extremely dangerous consequences
  • RT is a prime example of the ethical minefield that the fact-checkers will face



In the midst of a catastrophic fall in its share price after a series of scandals, Facebook is desperate to regain public trust. Not only is it trying to assure users that they have seen the last of companies such as Cambridge Analytica lifting vast quantities of their personal data; it is also trying to combat fears about misinformation spread on the network with a new fact-checking initiative. At the end of March, Facebook announced a trial that will expand its partnerships with third-party fact-checkers and broaden those checkers’ remit to include photos and videos as well as news articles.


Fact-checking process begins

The process begins in France, in partnership with the news agency AFP, and will soon be scaled up to other countries. According to a company blog post announcing the move, Facebook already has “fact-checking partners” in six countries, and recently rolled out new initiatives in Mexico and Italy. While it is not entirely clear what will happen to content found to be false, the approach is presumably along the lines of existing systems, which purport to combine flagging an article as untrustworthy with attempting to provide the user with more context. Facebook is also trying to identify and take down fake accounts, or accounts with artificially inflated followings, through “advances in machine learning”.

This latest phase, starting in France, is the first to include the fact-checking of photos and videos, prompted by a number of controversial fakes that have caused online uproar. One recent example included a viral photo of the Parkland shooting survivor Emma Gonzalez, doctored to show her ripping up the United States Constitution.

The risks these hoaxes present also extend to elections. The US mid-term elections are approaching, and Facebook has yet to recover from the hit it has taken from the investigations into Russian interference in the 2016 elections. Now they desperately need to convince the public, and the government, that the company has a solid strategy to cope with false news.


Misinformation and false claims

Misinformation can have extremely dangerous consequences, and there is a pressing need to challenge false claims. But many of Facebook’s past attempts to selectively publicise news on grounds of credibility have not gone well. For instance, an attempt to “flag” articles seen as untrustworthy actually increased some users’ faith in those stories. Later, when Facebook created the “trending” bar, through which news was selected for promotion by company staff, it came under fire after Gizmodo revealed that those staff had deliberately omitted news of interest to conservative users.

In that light, outsourcing the responsibility for flagging false content to third-party fact-checkers might seem like a good idea, or even the least-worst option in a difficult set of circumstances. Media outlets are able to act as fact-checkers for Facebook if they go through an accreditation process. The network has tried to enlist conservative as well as liberal publications to counter allegations of bias, so long as they follow a fact-checking code of conduct. Specialist fact-checking organisations have also signed up as partners. 


However, the process remains fraught by a serious lack of transparency. Rand Waltzman, a Senior Information Scientist at the RAND Corporation, told Al Bawaba:

“There is, in fact, no way to tell how successful Facebook has been monitoring and acting on what you called false reports. The reason is simply that all of Facebook’s actions are completely hidden. While they make various claims, they do not provide any means to verify them. They are not willing to discuss in any detail whatever techniques they do or do not use and how they do or do not apply them to anybody outside of Facebook.”

It is not yet clear whether scaling up the third-party fact-check system will improve the transparency around why some information is flagged as false. Yet even if it were to do so, fact-checkers themselves do not have a neutral image in certain ideological circles. This problem will be exacerbated if the bulk of the work is left to large-scale media outlets.

For instance, when the Kremlin-funded news outlet RT (previously Russia Today) discussed Facebook fact-checkers, it published a piece stating: “If an article is factually correct, but contains the kind of truth Facebook doesn’t want you to see, don’t expect to see it in your news feed any time soon.”

RT is a prime example of the ethical minefield that the fact-checkers will face. RT’s content has been described by the media researcher Emily Bell as “a mix of fact-based stories, strong opinion pieces and montages supporting Vladimir Putin”, but it is propagandist in character. It thus falls into an awkward place for fact-checkers. One need not constantly invent falsehoods to create a selective and ideologically driven representation of the facts. All media outlets do this to some extent, but fact-checkers will have to ask themselves where the line must be drawn.

And this brings us to another critical problem with Facebook’s supposed solution to fake news. The very notion of fact-checking is premised on a political setting in which the checkers are free to operate in the service of truth rather than government. Unfortunately, where governments are prepared to clamp down on any information that threatens their narrative, truth will not save that content. Myanmar is a stark example. While numerous activists documenting human rights abuses against the Rohingya Muslim population have had their Facebook posts removed, hate speech against the Rohingya has been allowed to proliferate on the platform, despite Facebook’s general guidelines prohibiting it.


Paying For the Product

Rand Waltzman added: “There is an old saying: ‘If you are not paying for the product, then you are the product.’ People should expect no less. Facebook is not a public interest organization. It is a business. As a business, it is required to obey the laws of any country where it operates. If it wants to operate in a country where free speech is limited then it is required to censor content according to the laws of that country. If they don’t, they will be punished. If they don’t agree with the law, they should not be operating in such places.”

Facebook and other social media sites have exacerbated a mess of misinformation. Even with external help, the clean-up operation represents a minefield of difficult choices. But the question is whether Facebook, or the governments above it, should bear the responsibility for that operation. And if neither will, it may be up to the platform’s users (or rather its “products”) to either expect less from Facebook, or to walk away.

© 2000 - 2021 Al Bawaba
