Since the 2016 election, “fake news” has emerged as a major concern for technology platforms, political activists, and journalists. The prevalence of hoaxes, disinformation, bots, and sensational false content on social media has given rise to concerns about the spread of damaging conspiracy theories, citizens’ ability to access accurate political information, and the manipulation of mainstream media by extremist groups and ideologues. However, the term “fake news” has been heavily politicized: partisan actors use it to dismiss sources they disagree with or to call into question the credibility of particular outlets. It is an umbrella term encompassing a wide array of problematic information, and it is often invoked to criticize a variety of practices tied to the shift from broadcast to social news consumption, such as clickbait headlines, personalized news, and algorithmic visibility.
To researchers, the current hubbub over “fake news” raises a set of questions. What can we learn from media and communications histories to inform this moment? How do we examine problematic information as part of an overall media and technological landscape? How can academics and researchers help frame the problem in ways that will lead to effective solutions? This panel showcases empirical scholarship on problematic information, using qualitative, historical, and ethnographic methods to investigate the history, present, and future of so-called “fake news”—calling into question some of the assumptions made in both popular and scholarly discourse.
Specifically, this panel focuses on how institutions construct and contribute to the spread of fake news (papers 1 and 2) and how people make meaning of “fake news” in partisan environments (papers 3 and 4). Each paper draws on empirical evidence to investigate popular claims about online disinformation, engages cutting-edge interdisciplinary scholarship, and offers a sociotechnical examination of the current information landscape.
The first paper uses a historical approach to criticize the current push for media literacy interventions, drawing from a case study of “economic education” campaigns during the 20th century. The author argues that corporations and corporate-sponsored philanthropic organizations defined the bounds of illegitimate and legitimate media to favor their economic interests and push favorable ideological frameworks. In the current moment, when technology platforms like Google and Facebook are coming under criticism for their role in facilitating the spread of problematic information, focusing on “media literacy” as an individual effort may prevent the implementation of more structural solutions.
The second paper examines the use of bots by both journalists and partisan political actors in the run-up to the 2016 election. Drawing from interviews and participant observation with journalists and technologists, the author frames the journalism bot as an “information radiator”—a “communication prosthesis” for journalists often too busy to parse data or observe Twitter manually. The political bot, on the other hand, was used to push problematic information into mainstream discourse via social media. Both types of bots were instrumental in furthering “junk news” during the election cycle and in shaping the very concept of “fake news” so prevalent in the current moment. Thus, understanding how different actors leverage bots for multiple purposes is crucial to understanding the technologies underpinning the disinformation debate.
The third paper examines how mainstream conservatives make sense of and frame an array of partisan, mainstream, and “junk” news. Based on ethnographic interviews and participant observation in conservative communities, the author argues that conservatives, far from being cultural dupes, use close reading and research to evaluate and analyze a variety of news and information sources. These practices reinforce the idea that the mainstream media cannot be trusted, pushing conservatives further toward partisan news sources that often integrate extreme far-right beliefs and conspiracy theories.
Finally, the fourth paper analyzes theories of the media effects of “fake news,” based on an in-depth reading of current scholarship on “fake news” and partisan media consumption. The author argues that popular discourse frames “fake news” in terms reminiscent of the “magic bullet” theory of media effects popular during the 20th century, whereas it is better understood as a process of active audience engagement. Research suggesting that “fake news” is more prevalent in conservative communities is explained through an analysis of both the deep stories and the affect of partisan media. Given the shift to social platforms, the paper argues, sharing political information online functions as an identity-signaling mechanism.
Taken together, the four papers suggest that solutions to the “fake news” problem must account for the variety of actors, technologies, and belief systems involved. Rather than simply describing or critiquing the current moment, this panel hopes to offer guidance to technologists, policymakers, journalists, and activists who wish to curb the spread of disinformation online.