#AoIR2018 has ended
Friday, October 12 • 11:00am - 12:30pm
Agents, Actants and AI


WHO’S WHO IN SMART REPLY? HOW INSTITUTIONS FRAME THE RELATIONSHIP BETWEEN HUMANS AND AI IN IMPERSONAL INTERPERSONAL COMMUNICATION
Nathaniel Poor, Roei Davidson
Google, Facebook, and LinkedIn have recently integrated artificial intelligence-driven recommendation systems into their widely used communication services. These systems suggest replies that users can send when communicating with others. For example, Google provides “Smart Reply” in its Gmail mobile applications. Senders can click on a smart reply, modify it if they wish, or send it as is, as if they had typed it themselves, and receivers may be none the wiser. Such technologies recast the first part of Lasswell’s (1948) model of communication (the “who”), making interpersonal communication impersonal. A similar feature is also embedded in Facebook’s Messenger application as well as in LinkedIn.

For this work, we use Critical Discourse Analysis to examine how the institutional creators and their surrounding intermediaries (PR professionals and technology journalists) discuss Smart Reply and similar technologies, looking not only for what people mention but also for what is absent. To aid in considering how these technologies are framed, we draw on work that considers the relationship between humans and computers, and on recent science fiction. While institutional and journalistic discourses focus on what is present, these approaches allow us to consider socially relevant absences related to the consequences such recommendation technologies might have for human autonomy, well-being, and deliberation.

HEY ALEXA, WHO ARE YOU?! THE CULTURAL BIOGRAPHY OF ARTIFICIAL AGENTS
Bart Simon, Ceyda Yolgörmez
This paper considers the agency question in the practical engagement of consumer artificial agents like Alexa, Siri, Google Home, and others. Drawing on literature in the cultural studies of robotics and artificial intelligence, STS, and interactionist sociology, we argue that the agency and attendant human-likeness of increasingly sophisticated artificial agents is less an existential question and more a matter of practical attribution by human interlocutors. Our guiding question, then, is: how do artificial agents’ interlocutors assess and attribute agency, and what are the conditions for differential attributions?

LOOK WHO’S TALKING: USING HUMAN CODING TO ESTABLISH A MACHINE LEARNING APPROACH TO TWITTER EDUCATION CHATS
K. Bret Staudt Willet, Brooks D. Willet
Twitter has become a hub for many different types of educational conversations, denoted by hashtags and organized by a variety of affinities. Researchers have described these educational conversations on Twitter as sites for teacher professional development. Here, we studied #Edchat—one of the oldest and busiest Twitter educational hashtags—to examine the content of contributions for evidence of professional purposes. We collected tweets containing the text “#edchat” from October 1, 2017 to June 5, 2018, resulting in a dataset of 1,228,506 unique tweets from 196,263 different contributors. Through initial human-coded content analysis, we sorted a stratified random sample of 1,000 tweets into four inductive categories: tweets demonstrating evidence of different professional purposes related to (a) self, (b) others, (c) mutual engagement, and (d) everything else. We found 65% of the tweets in our #Edchat sample demonstrated purposes related to others, 25% demonstrated purposes related to self, and 4% of tweets demonstrated purposes related to mutual engagement. Our initial method was too time intensive—it would be untenable to collect tweets from 339 known Twitter education hashtags and conduct human-coded content analysis of each. Therefore, we are developing a scalable machine-learning model—a multiclass logistic regression classifier using an input matrix of features such as tweet types, keywords, sentiment, word count, hashtags, hyperlinks, and tweet metadata. The anticipated product of this research—a successful, generalizable machine learning model—would help educators and researchers quickly evaluate Twitter educational hashtags to determine where they might want to engage.
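The classifier described above could be sketched roughly as follows. This is a minimal illustration, not the authors’ code: the toy tweets, hypothetical labels for the four inductive categories, and the specific feature choices (TF-IDF tokens plus word count, hyperlink presence, and hashtag count) are assumptions for demonstration, using scikit-learn’s multiclass logistic regression.

```python
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical #Edchat tweets, hand-labeled with the paper's four
# inductive categories: self, others, mutual (engagement), other.
tweets = [
    "Check out my new blog post on classroom management #edchat",
    "Sharing my slides from today's workshop #edchat",
    "Great resource for new teachers, thanks for sharing! #edchat",
    "Retweeting this excellent thread on assessment #edchat",
    "Q3: How do you build rapport with students? #edchat",
    "A2: I agree, feedback loops matter. What works for you? #edchat",
    "Good morning everyone! #edchat",
    "Happy Friday! #edchat",
]
labels = ["self", "self", "others", "others",
          "mutual", "mutual", "other", "other"]

# Text features: TF-IDF over tweet tokens.
vectorizer = TfidfVectorizer()
X_text = vectorizer.fit_transform(tweets)

# Simple metadata features of the kind the abstract mentions:
# word count, hyperlink presence, number of hashtags.
meta = np.array(
    [[len(t.split()), int("http" in t), t.count("#")] for t in tweets]
)
X = hstack([X_text, csr_matrix(meta)])

# Multiclass logistic regression classifier over the combined matrix.
clf = LogisticRegression(max_iter=1000)
clf.fit(X, labels)

# Per-category probabilities for each tweet.
probs = clf.predict_proba(X)
print(sorted(clf.classes_))
```

At scale, the same pipeline would be fit on the human-coded sample and then applied to unlabeled tweets from other education hashtags.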

WHEN YOU CAN TRUST NOBODY, TRUST THE SMART MACHINE
Sun-ha Hong
The diffusion of smart machines for tracking individual bodies and homes raises new questions about what counts as self-knowledge, how human sense experience should be interpreted, and how data is to be trusted (or not). Self-tracking practices intersect the contemporary faith in the objectivity of data with the turn towards what has been called ‘i-pistemology’: a revalorisation of personal and experience-based truth in opposition to top-down and expert authority. What does it mean to ‘know myself’, insofar as this knowing is performed through machines that operate beyond the limits of the human senses? What does it mean to turn to personalised and individuated forms of datafication amidst a wider crisis of consensus, expertise, and shared horizons of reality?

This analysis draws on a larger research project into datafication and knowledge, conducted between 2014 and 2017. It included analysis of news media coverage of self-tracking technologies; of self-tracking products and prototypes, including the promotional discourse and the design of individual devices; and interviews with and participant observation of the Quantified Self community. The presentation will explore how these technologies connect the faith in data-driven objectivity with a contrarian and individualistic form of ‘personalised’ knowledge, remixing wider themes of trust, expertise, and verification.


Sheraton - Ballroom East
