Facebook Wants You to Think False News Efforts Are Working. Is It Really True?
Facebook recently announced that three independent studies show the company’s efforts to fight the spread of false news on its site may be working. The three studies — conducted respectively by New York University and Stanford University researchers, the University of Michigan, and French fact-checking organization Les Décodeurs — each found that the volume of false news on Facebook has decreased. Some also found that engagement with the false news content still present on the site had gone down.
Despite this announcement, surveys conducted by multiple organizations to gauge whether users were noticing less spam on the social network suggest that misinformation has not been eradicated just yet. False news on Facebook still exists.
Here’s what each study found, and how it compares to what users report seeing on their News Feeds.
Three Studies of Facebook’s Fight Against False News
“Trends in the Diffusion of Misinformation on Social Media”
The first study, conducted by New York University’s Hunt Allcott, observed the amount of engagement on Facebook and Twitter with content from 570 publishers that had been labeled as “false news” by earlier studies and reports. While the study cites where it obtained this list of 570 sites, it doesn’t actually indicate which sites they are.
The team then used content sharing and tracking platform BuzzSumo to measure how much engagement — shares, comments, and such reactions as Likes — was received by all stories published by these sites between January 2015 and July 2018 on Facebook and Twitter.
The results: Following November 2016, interactions with this content fell by over 50% on Facebook.
Since then, Facebook has widely publicized its fight against the spread of such misinformation — which includes false news — and points to this study as evidence of that fight’s success.
“Iffy Quotient: A Platform Health Metric for Misinformation”
The second study, conducted by researchers at the University of Michigan, relied on a measure of false news engagement referred to as the “Iffy Quotient” — which takes into account how much content from sites known for publishing misinformation is “amplified” on social media.
The metric counts “sites that have frequently published misinformation and hoaxes in the past,” as identified by such fact-checking bodies as Media Bias/Fact Check and Open Sources.
This study relied largely on a site that tracks the most popular links shared on social media, as well as the engagement — again, shares, comments, and such reactions as Likes — received by each link.
The results, according to the study’s authors, aligned with those of the first study, showing “a long-term decline in Facebook’s Iffy Quotient since March 2017.”
“False Information Circulates Less and Less on Facebook”
Finally, a study conducted by Les Décodeurs concluded that Facebook engagement with content from publishers classified as “unreliable or dubious sites” has decreased by half in France since 2015.
What Do Users Report Seeing on Facebook?
While the above three studies point to the possible success of Facebook’s efforts to curb the spread of false news and misinformation, the group of users we surveyed might not yet be seeing the impact of Facebook’s anti-spam measures.
In a recent survey, users across the globe were asked: In the past six months, have you noticed more or less spam on your Facebook News Feed?
Over half of respondents reported seeing more spam in their News Feeds over the past six months, up from the 47% who reported the same in July 2018.
These combined findings raise a question: If independent research, which Facebook says it did not fund, points to such success in its efforts to curb the spread of false news, why does a growing number of users report seeing more of it in the News Feed?
There could be a number of explanations, one being heightened awareness. Since first discovering that its platform was weaponized for a coordinated misinformation campaign leading up to the 2016 U.S. presidential election, Facebook has been more forthcoming about further evidence it finds of bad actors misusing its site for similar purposes.
In fact, the Wall Street Journal has recently reported that — according to its sources — the bad actors behind a September data attack that scraped the personal information of 30 million Facebook users were “spammers that present[ed] themselves as a digital marketing company … looking to make money through deceptive advertising.”
With such stories continuing to make headlines, it could be that Facebook users are more attuned and sensitive to the possible misleading or spammy nature of the content they see in their News Feeds, causing them to report seeing more misinformation.
In the end, the picture is mixed: while independent research suggests Facebook’s efforts against false news may be working, user perceptions indicate that misinformation on the platform is far from eradicated.