Facebook's "trust" rankings seek to hurt fake news. They could end up helping it.


In its second major update of 2018, Facebook announced that it will rely on user surveys to rate news sources as "broadly trusted," in an effort to preserve objectivity and avoid the role of arbitrating what news and information people can see.

In a post last Friday, Facebook CEO Mark Zuckerberg wrote about the announcement. He acknowledged the power social media has in spreading information, but lamented that there is "too much sensationalism, misinformation and polarization" in the world today. In order to democratize the process, these surveys will seek to find which sources a representative sample of users rates as trustworthy, informative, and relevant to their local community.

Facebook has taken sharp criticism for its role in allowing fake news and hate speech to spread on its platform. Zuckerberg has rejected the idea that the fake news shared on Facebook affected the 2016 election. But the company is hesitant to fully accept that though its form and ideals may be "platform only," it functions as the world's biggest international media company, and is therefore burdened with the kind of responsibility that such power and influence demand.

Though Facebook’s intent behind this move is positive, it’s unclear how it would decrease the amount of fake news the platform hosts. But even more so, this could have the exact opposite effect it aims for.

Ranking trust has some potential to combat divisiveness, but what happens when a critical mass of users decides that the partisan coverage they consume is trustworthy, not because it is based on facts, but because it reflects their world-view?

Using surveys on trustworthiness would give users the opportunity to bolster the rating of content sources that reflect their own political biases, and to diminish sources that don't. It would also give an advantage to those with the technical resources and capital to organize like-minded users to game these surveys, while offering nothing to those who make their content trustworthy in the truer spirit of the word (i.e., by filtering out bias and aiming for empiricism).

For example, if I were a member of a group of users who feel that social justice warriors (SJWs) are ruining video games, I'd now have both the incentive and an extra tool to rate publications that agree with me as trustworthy. I'd also have an easier way to galvanize and coordinate others to rate sites such as Feminist Frequency as untrustworthy. In this example, no matter which side you lean toward, there are at least discernible "sides" to the debate. But what if I'm a foreign actor trying to sway the outcome of U.S. elections? Through fake or duplicate accounts and computational propaganda, these surveys give those invested in spreading propaganda a new way to do it.
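Facebook hasn't published how these survey responses will be aggregated, but a minimal sketch in Python shows why any naive aggregation is vulnerable to this kind of coordination. The ratings, account counts, and the simple-average scoring here are all invented for illustration, not a description of Facebook's actual methodology:

```python
# Illustrative sketch only: a naive, survey-based trust score and how a
# coordinated bloc of accounts can move it. All numbers are invented;
# Facebook has not disclosed its real aggregation method.

from statistics import mean

def trust_score(ratings):
    """Naive trust score: the plain average of user ratings on a 0-1 scale."""
    return mean(ratings)

# A hypothetical outlet rated by a small organic sample of users.
organic_ratings = [0.8, 0.7, 0.9, 0.6, 0.8]
print(f"Organic score: {trust_score(organic_ratings):.2f}")  # 0.76

# A coordinated bloc (fake or duplicate accounts) floods the survey
# with zero-trust ratings for the same outlet.
brigade_ratings = [0.0] * 50
print(f"After brigading: {trust_score(organic_ratings + brigade_ratings):.2f}")  # 0.07

# The same tactic works in reverse: the bloc can inflate a hyper-partisan
# outlet's score by rating it 1.0 en masse.
```

A real system would presumably weight, sample, or de-duplicate respondents, but the underlying incentive problem remains: whoever controls more accounts moves the average.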

The fake news phenomenon has many complex facets, but it is largely an issue of user engagement. Hyper-partisan audiences already exist, so this system of trustworthiness ratings would give them the opportunity to ratify their preferred sources. This can be framed as objective or techno-democratic in a purist sense, but it can also be deeply problematic. The idea ostensibly gives more arbitration power to the "like" and "share" buttons, yet it does nothing to break the feedback loops that fake news thrives in.

There are valid concerns in balancing the free speech rights of social media users with combating the disinformation streaming down our newsfeeds. The platform is in a paradoxical position: whatever it does or does not do, millions of its users will be furious, decrying its inaction, bias, or censorship. Facebook also has over 2 billion users, so effectively screening trillions of posts is extremely difficult. But at some point, the company must come to terms with its duty as a moderator. Social media has become an integral part of democratic society, but companies like Facebook can no longer guarantee that their platforms don't have harmful effects.

The utopian ethos of the tech world drives innovation and creative zeal, but it can also cause the industry to grossly underestimate the dystopian impulses of our time. When Facebook was first created, its founders sought to help people interact and connect with the globe in positive ways. We can surmise they didn't imagine Russian-linked accounts buying thousands of ads, or posting fake Black Lives Matter meet-ups in Baltimore or dueling rallies at an Islamic center in Houston. All social media platforms need to adapt to the realities of our digital culture, and accept their responsibility in shaping those realities. Facebook just happens to be the most important of these platforms.

In reality, most tools have the capacity to do both good and harm, never just one or the other. So before introducing any new tool, Facebook needs to think very critically and carefully about the potential negative impact it can have, regardless of positive intent. And hopefully the public will learn more about the methodology behind these surveys.

Allowing users to arbitrate the trustworthiness of news gives them a tool to fight the fake news eroding our democratic norms. But it also gives bad-faith and dishonest actors the chance to reify a climate of "alternative facts" and discredit the sources that fact-check their misinformation.

Joshua Adams is a writer and journalist from Chicago. UVA & USC. Taught media and communication at DePaul & Salem State. Twitter: @journojoshua
