Many people who share fake news online do so because they aren’t paying close attention to what they’re sharing, according to a new study. The research found that simply prompting people to think about the accuracy of their news content helps curtail the spread of falsehoods.
“When deciding what to share on social media, people are often distracted from considering the accuracy of the content,” the authors, from the Hill/Levene Schools of Business at the University of Regina and the Sloan School of Management at the Massachusetts Institute of Technology (MIT), wrote in the new paper published in Nature.
While the spread of inaccurate or false information and conspiracy theories is nothing new — the fossil fuel industry’s climate-denying disinformation campaign dates back decades — the study’s findings undercut the notion that there is a widespread desire among the public to actively spread disinformation. Rather, it adds further evidence showing how social media allows fake news to spread rapidly — and how to slow it down.
Online disinformation seemed to hit a fever pitch in the past year, with the spread of the violent QAnon conspiracy, Covid denial, 2020 election conspiracies, and pro-insurrection voices all intermingling and cross-pollinating.
But instead of the malign actors involved in creating disinformation — such as the Koch-backed network of think tanks, charities and politicians seeking to undermine climate science, or, more recently, coordinated social media campaigns and troll farms, sometimes backed by government intelligence agencies, aimed at undermining elections around the world — the new Nature study looks at the much larger set of everyday social media users who share this type of misinformation online, often unwittingly, or at least not with malicious intent. The results offer some reasons for hope, as well as some tools to fight disinformation.
The study surveyed thousands of U.S. Twitter and Facebook users. It found that most people do not wish to spread fake news and, in fact, rate accuracy as an important principle. When asked what motivates sharing, participants rated accuracy higher than other factors, such as whether a piece of news was interesting, funny, or politically aligned with their beliefs. Moreover, most people are fairly good at distinguishing accurate news from false news. Nor do most people share inaccurate news for hyperpartisan reasons.
Instead, what the researchers found was that many people spread fake news without thinking much about whether the information is accurate. It’s a problem of inattention, made worse by social media, which pushes people to sift through news rapidly and superficially.
“This means that when thinking about the rise of misinformation online, the issue is not so much a shift in people’s attitudes about truth, but rather a more subtle shift in attention to truth,” two of the study’s authors, David Rand of MIT and Gordon Pennycook of the University of Regina, wrote in Scientific American, summarizing their findings (emphasis in original).
The researchers conducted several experiments to parse out contributing factors. In one experiment, participants were given a mix of true and false news stories, and one group of participants was asked to decide whether the headlines were accurate and another group was asked whether they would share them on social media.
Interestingly, the participants assessing accuracy did a reasonably good job of distinguishing accurate stories from fake ones. In one experiment, participants rated true stories as accurate more often than false stories by a margin of 55 percentage points.
But the group weighing whether or not to share a story chose to share fake stories at a much higher rate than when they were only asked to judge accuracy. Looking only at false headlines, 50 percent more were shared than were rated as accurate.
In other words, when asked about accuracy, people were good at spotting accurate versus fake stories. But when asked about sharing, people chose to share more stories, even fake ones. And they chose to share stories that fit their political views at a much higher rate (by 19 percentage points) than stories that went against their political beliefs.
That would seem to suggest an ideological or partisan motivation. But the authors conducted another experiment, with over 5,000 participants on Twitter who had previously shared news from Breitbart and Infowars, two sites professional fact-checkers have rated as highly untrustworthy. The authors sent a private Twitter message to the participants and asked them to judge whether or not a single non-political headline was accurate.
The researchers then monitored the participants’ subsequent sharing behavior and found a significant improvement in sharing choices; in the 24 hours after the prompt, participants shared relatively more news from reliable outlets such as CNN and relatively less from sources of inaccurate information like Infowars.
The authors surmise that simply redirecting attention towards the concept of accuracy helped cut down on sharing of false information. “[W]e find that the single accuracy message made users more discerning in their subsequent sharing decisions,” they wrote in their study. “Relative to baseline, the accuracy message increased the average quality of the news sources shared.”
The researchers replicated these experiments with Covid-19 information and found a similar dynamic.
The study shows that there is a disconnect between what people share and what they consider to be accurate, suggesting that people share content that they themselves do not necessarily believe.
These studies help us see past the illusion that everyday citizens on the other side must be either stupid or evil- instead, we are often simply distracted from accuracy when online. Another implication of our results is that widely-RTed claims are not necessarily widely BELIEVED
— David G. Rand (@DG_Rand) March 17, 2021
Individuals scroll quickly through a social media news feed, which tends to be mixed with accurate and inaccurate information, along with emotionally engaging content. And crucially, the authors wrote, they are provided with “instantaneous and quantified social feedback on sharing.” The quest for retweets and likes, in other words, “may discourage people from reflecting on accuracy.”
Rather than a wholesale rejection of truth, people lazily pass on inaccurate information because that tends to be what is rewarded on social media.
The good news was that even small interventions — the prompt asking whether or not headlines were accurate — redirected people away from a tendency to share false information. This suggests that social media platforms could, perhaps, periodically survey people on the accuracy of selected headlines in an effort to subtly remind users to consider accuracy, the authors say.
Twitter has recently been taking steps to slow the spread of misinformation. Last year, the platform introduced a feature that reminds people to read an article before retweeting it, which it says has shown promising results. The platform also began tagging misleading tweets with disclaimers.
The new study’s authors concede that the research is limited to the sharing of political news among people in the United States. They note that follow-up research could examine the impact of subtle accuracy nudges on coordinated disinformation campaigns, such as climate denial or election fraud claims, which are backed by groups actively working to promote a falsehood.
In a recent analysis, DeSmog found that dozens of prominent climate deniers supported the January 6 insurrection in Washington D.C. They spread debunked claims about election fraud and in some cases supported political violence. Content from this type of campaign was then likely shared by many more people who, as the Nature study illustrates, may have passed it along without taking time to think about its accuracy.
Experts have identified tools and methods for protecting against malicious disinformation campaigns, such as “prebunking,” which involves learning about the tactics and tricks of bad actors before you are exposed to them. However, such campaigns of weaponized disinformation are potentially more challenging to combat than one-off fake news stories.