Thursday, October 11 • 11:00am - 12:30pm
Trolls, Leakers, Vigilantes

Luke Heemsbergen
This paper is concerned with mapping the socio-material ecosystems of online leaks projects that followed WikiLeaks. The period of WikiLeaks' initial popularisation and infamy (2006-2015) coincided with the emergence of over 90 lesser-known radical online disclosure projects designed to "Kill Secrets" (Greenberg 2012). We offer the first systematic study of this decentralised and widely disparate ecosystem, building a taxonomy of leaks sites (n = 94) from observable socio-technical vectors. Affordances tied to user and technical practice, vectors such as self-identified thematic focus (issue, region, etc.), and measures of publication efficacy for each site are open coded to discern patterns and clusters of practice.

Analysis then shifts to mapping the visible interrelationships between sites via social network analysis (SNA) for further insight into the ecology of leaks sites. Choosing taxonomy over typology signals that we observe material practice without predetermined ideal types or normative links to agonistic democratic theory. At a macro level, our findings suggest that an ecology of leaks sites blossomed and died, with only a handful of sites remaining online, or having ever actually functioned. Micro- to meso-level analysis of practices shows how leaks sites' socio-technical materiality helps shape both efficacy and normative goals, from which unique and sometimes agonistic normative governmental functions can be inferred. Discussion of the findings then critically assesses how digital leaks served (and severed) ties to already problematic equations of 'transparency' with democracy, from a frame of agonistic and algorithmic government practices (Heemsbergen, 2016; Ananny and Crawford, 2016), and suggests tentative paths forward.
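The SNA step described above can be illustrated with a minimal sketch: build a directed graph of visible links between leaks sites and rank sites by in-degree, a crude centrality measure. The site names and links here are invented for illustration; the abstract does not specify which metrics the authors compute.

```python
from collections import defaultdict

# Invented edge list of visible inter-site links: (source_site, linked_site).
links = [
    ("site_a", "wikileaks"),
    ("site_b", "wikileaks"),
    ("site_c", "wikileaks"),
    ("site_b", "site_a"),
    ("site_c", "site_a"),
]

# Count incoming links per site.
in_degree = defaultdict(int)
for src, dst in links:
    in_degree[dst] += 1

# Sites most linked-to within the ecosystem, most central first.
ranking = sorted(in_degree.items(), key=lambda kv: kv[1], reverse=True)
print(ranking)  # [('wikileaks', 3), ('site_a', 2)]
```

In practice a dedicated graph library would also yield clustering and community structure, but in-degree alone already surfaces the hub sites of the ecosystem.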

Daniel Trottier
Individuals rely on digital media to denounce and shame other individuals. This may be based on perceived offences, while often reproducing categorical forms of discrimination. Both the taking of offence and the response to it are expressed online by gathering and distributing information about targeted individuals. By seeking their own form of criminal justice, participants challenge the monopolisation of violence by the state. Yet digital vigilantism includes shaming and other forms of cultural violence that are not as clearly monopolised, or even regulated. Indeed, these may feed off state- or press-led initiatives to shame targets, or simply to gather information about them. Digital vigilantism remains a contested practice: terms of appropriate use are unclear, and public discourse may vary with the severity of the offence, the severity of the response, and the identities and affiliations of participants. Moreover, it overlaps conceptually with other phenomena, including online harassment and doxing. While these can be understood as distinct practices, they also comprise an arsenal of options for civic actors to utilise. This paper advances and seeks to implement a conceptually informed model of digital vigilantism, in recognition of its coordinated, moral and communicative components. Drawing upon preliminary case studies in countries including Russia, China, the United States, the United Kingdom and the Netherlands, as well as relevant literature on embodied vigilantism and concurrent forms of online coordination and harassment, it considers a range of recent cases in a global context in order to direct subsequent empirical analysis of how digital vigilantism is rendered meaningful.

Julia Rose DeCook
Reddit has been a hub for extremist movements for some time, particularly in the development and spread of Men's Rights activism that trickles over into alt-right ideologies. This study examined, through critical discourse analysis, the role of the subreddit sidebar in the enactment of a digital doxa where socialization processes and indoctrination into a group occur, and how the sidebar materials are a form of knowledge curation. Looking specifically at the men's rights affiliated movement r/TheRedPill, this study proposes that the sidebar is a spatiotemporally suspended space that enacts a new reality for its adherents. The sidebar functions not only as a space where the rules of the community are learned, but also as a user-generated archive in which the moderators curate readings representative of the group's shared ideology and construct a collective narrative. This collective narrative grants members a shared collective history, with the moderators serving as gatekeepers to the doxa and archive. The sidebar materials themselves are not merely a collection of readings and rules, but rather represent a knowledge production process unique to each subreddit, one that is omnipresent throughout the subreddit's community. The sidebar in subreddits like r/TheRedPill is thus more than a space where community guidelines are posted: it is where members become part of the group, and where a past, present, and futurity are enacted, giving the group a common foundation to build upon and fuelling its social movement.

Emma von Essen, Joakim Jansson
In this paper, we predict hateful content and quantify the causal link between anonymity and hateful content in online political discussions. First, we use supervised machine learning to build a prediction model of cyberhate in political discussions on a dominant Swedish Internet forum, Flashback. Second, we investigate how changes in anonymity affect the writing of hateful content.

We scrape text from the political discussions on Flashback and have a research assistant manually classify each post in a random subset of threads, recording, for example, whether it contains hateful or aggressive writing and towards whom the hate is directed. We use the classified data to fit a prediction model over the full set of threads, and then use the predicted hate labels to estimate the effect of changes in anonymity on cyberhate. An event suddenly changed anonymity on the discussion forum. The event affected only a certain type of user, creating a quasi-experiment with early-registered users as a treatment group and late-registered users as a control group.

Our model successfully predicts hateful content. Using these predictions in the quasi-experimental estimation, we find that, after the event threatened reduced anonymity, early users of the forum decreased their share of hateful content more than later-registered users did. We also show that this behavioural change combines individuals changing how they express themselves with individuals writing less or stopping entirely.
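The quasi-experimental comparison above follows a difference-in-differences logic: the change in the treatment group's share of hateful posts minus the change in the control group's share. A minimal sketch, with invented toy data standing in for the predicted hate labels (1 = predicted hateful); the group names and numbers are illustrative only, not the paper's results:

```python
def share_hateful(flags):
    """Share of posts flagged as hateful (each flag is 1 or 0)."""
    return sum(flags) / len(flags)

# Toy predicted-hate flags per (group, period). "early" users registered
# before the anonymity event (treatment); "late" users after (control).
data = {
    ("early", "before"): [1, 1, 0, 1, 0, 1, 0, 1, 0, 1],  # 60% hateful
    ("early", "after"):  [1, 0, 0, 0, 0, 1, 0, 0, 0, 1],  # 30% hateful
    ("late",  "before"): [1, 0, 0, 1, 0, 1, 0, 0, 0, 1],  # 40% hateful
    ("late",  "after"):  [1, 0, 0, 1, 0, 0, 0, 1, 0, 0],  # 30% hateful
}

# Difference-in-differences: change for treated minus change for control.
change_early = share_hateful(data[("early", "after")]) - share_hateful(data[("early", "before")])
change_late  = share_hateful(data[("late", "after")])  - share_hateful(data[("late", "before")])
did = change_early - change_late

print(f"early-user change: {change_early:+.2f}")  # -0.30
print(f"late-user change:  {change_late:+.2f}")   # -0.10
print(f"diff-in-diff:      {did:+.2f}")           # -0.20
```

A negative diff-in-diff, as in this toy example, would correspond to the paper's finding that the threat of reduced anonymity lowered hateful content among the affected (early-registered) users relative to the control group. The full study would additionally need standard errors and the parallel-trends assumption to hold.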

Thursday October 11, 2018 11:00am - 12:30pm EDT
Sheraton - Ballroom East