Fake news isn't just bad news: It's bad for the bottom line, too
USC Marshall research also suggests that Facebook users who help expose falsehoods should be compensated
Note to Mark Zuckerberg: Beware of misinformation.
Research by the USC Marshall School of Business makes the case that misinformation is a business risk for social media platforms, and proposes information-based interventions, such as warning labels, to curb the spread of fake news.
A recent paper suggests that although viral content is good for revenue (via increased viewership and the attendant increase in advertising), it ultimately endangers the bottom line. It was written by Kimon Drakopoulos, assistant professor of data sciences and operations at USC Marshall, and Ozan Candogan, assistant professor in operations management at the University of Chicago's Booth School of Business.
Our models show that engagement levels fall when users aren't warned of posts that contain misinformation.
Kimon Drakopoulos
"Our models show that engagement levels fall when users aren't warned of posts that contain misinformation," Drakopoulos said. "And they don't just fall; they fall to levels lower than when users are warned."
The researchers saw clicks fall by more than half when platforms did not display a fake news warning. "Failing to intervene can lead to an even greater drop in engagement," Drakopoulos said. "Once users realize they're getting fake news, whether they learn that from an external source or some other means, they lose trust in the site that conveyed it."
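The paper's model is game-theoretic, but the qualitative dynamic described above is easy to reproduce in a toy simulation. The sketch below is illustrative only; the misinformation rate, the discovery probability and the size of the trust penalty are invented parameters, not figures from the research.

```python
import random

def simulate(warnings_on, n_users=10_000, n_posts=50, seed=0):
    """Toy model: warned users skip flagged posts but keep trusting the
    platform; unwarned users click more at first, then disengage as they
    discover they were fed misinformation."""
    rng = random.Random(seed)
    clicks = 0
    for _ in range(n_users):
        trust = 1.0                          # probability of clicking a post
        for _ in range(n_posts):
            is_fake = rng.random() < 0.2     # assumed misinformation rate
            if warnings_on and is_fake:
                continue                     # user skips the labeled post
            if rng.random() < trust:
                clicks += 1
                # Without a label, the user may later learn the post was
                # fake (say, from an external source) and lose trust in
                # the platform that conveyed it.
                if is_fake and rng.random() < 0.7:
                    trust *= 0.3             # assumed trust penalty
    return clicks

print("clicks with warnings:   ", simulate(warnings_on=True))
print("clicks without warnings:", simulate(warnings_on=False))
```

Even under these made-up numbers, the unwarned platform ends the run with markedly fewer total clicks: short-term engagement on unflagged content is bought at the cost of the trust that sustains long-term engagement.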
Facebook has started placing an icon for more information on shared posts, which takes a user to the news item's original site, but leaves it to the user to decide whether it's a credible news source.
Why fake news spreads: Engagement and misinformation
Drakopoulos wants to use his findings to understand how to optimize fake news warnings.
"Look at how fake news progresses," he said. "It looks the same as a contagion, but it's different. We want to know why fake news spreads and how we can prevent it."
"We are at the frontier of what can be achieved via different mechanisms of engagement and misinformation," he said.
A key insight: Leverage network structure to contain the spread of fake news.
"Networks have multiplicative effects," Drakopoulos said. "Why not use a network intervention to exploit them?"
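As a rough sketch of what such a network intervention could look like, the simulation below runs a standard independent-cascade contagion on a synthetic hub-heavy graph and compares warning a few highly connected users against warning the same number of random users. The contagion model, the graph and every parameter are assumptions for illustration, not the paper's formulation, and the sketch relies on the networkx library.

```python
import random
import networkx as nx

def cascade(graph, warned, p_share=0.3, n_seeds=5, seed=0):
    """Independent-cascade spread: warned users may still see a fake
    story, but they never reshare it."""
    rng = random.Random(seed)
    infected = set(rng.sample(sorted(graph.nodes), n_seeds))
    frontier = set(infected)
    while frontier:
        nxt = set()
        for u in frontier:
            if u in warned:
                continue                      # a warning stops resharing
            for v in graph.neighbors(u):
                if v not in infected and rng.random() < p_share:
                    infected.add(v)
                    nxt.add(v)
        frontier = nxt
    return len(infected)

g = nx.barabasi_albert_graph(5_000, 3, seed=1)  # hub-heavy social graph
budget = 50                                     # users we can afford to warn
hubs = sorted(g.nodes, key=g.degree, reverse=True)[:budget]
rand = random.Random(2).sample(sorted(g.nodes), budget)

print("reach, no intervention:", cascade(g, warned=set()))
print("reach, random warnings:", cascade(g, warned=set(rand)))
print("reach, hub warnings:   ", cascade(g, warned=set(hubs)))
```

Because a handful of hubs carry a disproportionate share of the resharing, warning them typically suppresses the cascade far more than spending the same budget on random users; that asymmetry is the multiplicative effect the quote points to.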
Facebook incentives
Moving forward, platforms must focus on the incentive issues in the creation and monitoring of content, the researchers suggest. For example, Facebook could provide incentives, in the form of reputation scores, monetary rewards or other privileges, to users who report purveyors of fake news.
"That might make people think twice," Drakopoulos said. "Engagement might go down, but quality will go up, leading to a long-term healthy engagement recovery."
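The researchers propose the incentive idea rather than a specific scheme, so the following reputation-score sketch is purely hypothetical: confirmed reports earn points, a false report costs more than a confirmed one earns (so report-spamming never pays), and privileges unlock above an assumed threshold.

```python
from dataclasses import dataclass

@dataclass
class Reporter:
    """Hypothetical reputation score for a user who reports fake posts."""
    user_id: str
    score: float = 0.0

    def record_report(self, confirmed_fake: bool) -> None:
        # Asymmetric payoffs: a false report costs more than a confirmed
        # one earns, so indiscriminate reporting is a losing strategy.
        self.score += 1.0 if confirmed_fake else -2.0

    def privileges_unlocked(self) -> bool:
        return self.score >= 10.0             # assumed threshold

alice = Reporter("alice")
for verdict in (True, True, False, True):     # moderator verdicts
    alice.record_report(verdict)
print(alice.user_id, alice.score, alice.privileges_unlocked())
```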
The next step in his research agenda is a collaboration with Gad Allon, Jeffrey A. Keswin Professor of Operations, Information and Decisions at the Wharton School, and Vahideh Manshadi of the Yale School of Management. Together they are developing a behavioral experiment to demonstrate how users actually consume and internalize information, and which aspects of that behavior drive the current phenomena of political polarization and incomplete learning. The researchers expect to finish the experiment by the end of the year.
Too much information
"If the theoretical findings that initiated the project are true," Drakopoulos said, "too much information, surprisingly, leads to incomplete learning. The troublesome phenomena we see are the result of the abundance of information on social media."
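One standard illustration of how an abundance of public information can stall learning is the classic information-cascade model of Bikhchandani, Hirshleifer and Welch, sketched below as an analogy rather than as the authors' own model. Each user receives a noisy private signal about whether a story is true but also sees everyone else's public choices; once the public record tips far enough, rational users ignore their own signals and learning stops.

```python
import math
import random

def cascade_run(n_users=100, accuracy=0.7, seed=0):
    """Each user's public choice about a story that is in fact true;
    'accuracy' is the chance a private signal points the right way."""
    rng = random.Random(seed)
    step = math.log(accuracy / (1 - accuracy))  # evidence per signal
    log_odds = 0.0                              # public belief (log-odds)
    choices = []
    for _ in range(n_users):
        signal = rng.random() < accuracy        # True = correct signal
        if abs(log_odds) <= step:
            choice = signal                     # own signal still decisive
            log_odds += step if choice else -step
        else:
            choice = log_odds > 0               # herd: copy the crowd
        choices.append(choice)
    return choices

wrong = sum(not cascade_run(seed=s)[-1] for s in range(1_000))
print(f"runs ending on the wrong answer: {wrong / 1_000:.0%}")
```

Even though 100 independent signals are available in principle, a sizable minority of runs lock onto the wrong answer, because once a herd forms, later choices add no new information. That is the sense in which more information can yield less learning.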
The project, partially funded by USC Marshall's Institute for Outlier Research, will build on previous work Drakopoulos has done on the economic considerations of contagion intervention and social policy.
"Decisions are ultimately economic," he said.