What if the scourge of false news on the Internet is not the result of Russian operatives or partisan zealots or computer-controlled bots? What if the main problem is us?
People are the principal culprits, according to a new study examining the flow of stories on Twitter. And people, the study’s authors also say, prefer false news.
As a result, false news travels faster, farther and deeper through the social network than true news.
The researchers, from the Massachusetts Institute of Technology, found that those patterns applied to every subject they studied, not only politics and urban legends, but also business, science and technology.
False claims were 70 percent more likely than the truth to be shared on Twitter. True stories were rarely retweeted by more than 1,000 people, but the top 1 percent of false stories were routinely shared by 1,000 to 100,000 people. And it took true stories about six times as long as false ones to reach 1,500 people.
Bots can accelerate the spread of false stories. But the MIT researchers, using software to identify and weed out bots, found that with or without the bots, the results were essentially the same.
“It’s sort of disheartening at first to realize how much we humans are responsible,” said Sinan Aral, a professor at the MIT Sloan School of Management and an author of the study. “It’s not really the robots that are to blame.”
Here are other findings:
Twitter history: The research, published Thursday in the journal Science, examined true and false news stories posted on Twitter from the social network’s founding in 2006 through 2017. The study’s authors tracked 126,000 stories tweeted by roughly 3 million people more than 4.5 million times. “News” and “stories” were defined broadly — as claims of fact — regardless of the source. And the study explicitly avoided the term “fake news,” which, the authors write, has become “irredeemably polarized in our current political and media climate.”
The stories were classified as true or false, using information from six independent fact-checking organizations including Snopes, PolitiFact and FactCheck.org. To ensure that their analysis held up in general — not just on claims that drew the attention of fact-checking groups — the researchers enlisted students to annotate as true or false more than 13,000 other stories that circulated on Twitter. Again, a tilt toward falsehood was clear.
The way information flows online — and, occasionally, spreads rapidly like a virus — has been studied for decades. There have also been smaller studies examining how true and false news and rumors propagate across social networks. But experts in network analysis said the MIT study was larger in scale and well designed.
“The comprehensiveness is important here, spanning the entire history of Twitter,” said Jon Kleinberg, a computer scientist at Cornell University. “And this study shines a spotlight on the open question of the success of false information online.”
Novelty wins: The MIT researchers pointed to factors that contribute to the appeal of false news. Applying standard text-analysis tools, they found that false claims were significantly more novel than true ones — maybe not a surprise, since falsehoods are made up.
The study’s authors also explored the emotions evoked by false and true stories. The goal, said Soroush Vosoughi, a postdoctoral researcher at the MIT Media Lab and the study’s lead author, was to find clues about what is “in the nature of humans that makes them like to share false news.”
The study analyzed the sentiment expressed by users in replies to claims posted on Twitter. As a measurement tool, the researchers used a system created by Canada’s National Research Council that associates English words with eight emotions. False claims elicited replies expressing greater surprise and disgust. True news inspired more anticipation, sadness and joy, depending on the nature of the stories.
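The measurement approach described above — associating the words in a reply with a fixed set of emotions — can be sketched in a few lines. The tiny lexicon here is a made-up illustration, not the real National Research Council lexicon, which maps thousands of English words to eight basic emotions (anger, anticipation, disgust, fear, joy, sadness, surprise and trust):

```python
# Minimal sketch of lexicon-based emotion scoring: count the words in a
# reply that the lexicon associates with each emotion. The LEXICON below
# is a hypothetical excerpt for illustration only.
from collections import Counter

LEXICON = {
    "shocking": {"surprise"},
    "unbelievable": {"surprise"},
    "gross": {"disgust"},
    "wonderful": {"joy", "trust"},
    "hope": {"anticipation", "joy"},
}

def emotion_profile(text):
    """Tally the emotions associated with words appearing in `text`."""
    counts = Counter()
    for word in text.lower().split():
        # Strip trailing punctuation before looking the word up.
        for emotion in LEXICON.get(word.strip(".,!?"), ()):
            counts[emotion] += 1
    return counts

reply = "Unbelievable! This is shocking and gross."
print(dict(emotion_profile(reply)))
```

Aggregating such profiles across replies to false versus true claims is how, in broad strokes, a study can compare the emotions the two kinds of stories elicit.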
One true, one false: The researchers provided an example of two business stories, and how much more time it took the true one to reach 200 retweets. The example also shows the judgment calls made by fact-checking organizations.
•In 2014, fashion chain Zara introduced children’s pajamas with horizontal stripes and a gold star. The company said the design was inspired by what a cowboy sheriff would wear. But Twitter users posted messages saying the pajamas resembled Nazi concentration camp uniforms. Snopes: True. Time to reach 200 retweets: 7.3 hours.
•In 2016, a website republished a portion of a satirical article about how the Chick-fil-A restaurant chain had decided to begin a “We don’t like blacks either” marketing campaign to stir up controversy and boost sales. It came after the company’s president did say he opposed same-sex marriage. Snopes: False. Time to 200 retweets: 4.2 hours.
What can be done? The MIT researchers said that understanding how false news spreads is a first step toward curbing it. They concluded that human behavior plays a large role in explaining the phenomenon and mentioned possible interventions, such as better labeling, to alter behavior.
For all the concern about false news, there is little certainty about its influence on people’s beliefs and actions. A recent study of the browsing histories of thousands of U.S. adults in the months before the 2016 election found that false news accounted for only a small portion of the total news people consumed. “We have to be very careful about making the inference that fake news has a big impact,” said Duncan Watts, a principal researcher at Microsoft Research.
Another author of the MIT study, Deb Roy, former chief media scientist at Twitter, is engaged in a project to improve the health of the information ecosystem. In fall 2016, Roy, an associate professor at the MIT Media Lab, became a founder and the chairman of Cortico, a nonprofit that is developing tools to measure public conversations online to gauge attributes such as shared attention, variety of opinion and receptivity. The idea is that improving the ability to measure such attributes would lead to better decision-making that would counteract misinformation.
Roy acknowledged the challenge of trying not only to alter individual behavior but also to enlist the support of big Internet services like Facebook, Google, YouTube and Twitter, and of media companies.
“Polarization,” he said, “has turned out to be a great business model.”
Steve Lohr is a New York Times writer.