When Social Media Companies, Research Ethics, and Human Rights Collide
Tech companies used to be in the business of selling software or hardware to individual users. Computer science long fed those companies with state-of-the-art engineering know-how that pushed benchmarks on speed and capacity. Now the wealthiest tech companies build social worlds composed of human interaction, happening a billion times a second, whether it’s the mother of all search engines, a global social network, or live-streaming gaming and entertainment platforms. The shift to building and selling social worlds has moved tech and computer science into uncharted territory. These systems convene people, as much as they mine their data, in real time.
The humanities and social sciences were traditionally tasked with making sense of society and the human condition. Now anthropologists, historians, economists, political scientists, psychologists, and sociologists must observe and engage people where they socialize most: online. Unlike the public archives, classrooms, street corners, and public parks that once informed our understanding of social life, the bulk of people’s daily social interactions is inaccessible to researchers. These data are ensconced behind the firewalls of commercial companies, piled in terabytes most social scientists haven’t been trained to analyze.
Through a review of basic tenets of human subjects research and lessons learned from past research blunders that compromised the public’s trust in science, this talk offers a new “human data research” paradigm for training the next generation of engineers and social researchers studying and building technology’s next wave of social worlds. The talk argues that most “ethical dilemmas” arise not because maniacal actors intentionally do the wrong thing but because methods of investigation and innovation are pushed to capacity and failing us. The path forward will not be listing an abstract set of principles, pontification, or finger-pointing but hammering out a new, shared course of action that considers: How do we respect the rights and freedoms of individuals and society when we interact with online environments that are at once familiar software, like a spreadsheet; controlled settings, like a lab; and deeply social and dynamic, like a backyard BBQ? The talk makes the case that it is our collective job to earn and maintain the public’s trust by mapping out a new set of actions so that future social researchers and technology builders have a fighting chance to learn and create more down the line.
Mary L. Gray is a Fellow at Harvard University’s Berkman Klein Center for Internet and Society and Senior Researcher at Microsoft Research. She chairs the Microsoft Research Lab Ethics Advisory Board. Mary maintains a faculty position in the School of Informatics, Computing, and Engineering, with affiliations in Anthropology, Gender Studies, and the Media School, at Indiana University. Mary’s research studies how technology access, social conditions, and everyday uses of media transform people’s lives. Another thread of Mary’s work examines how ethics and research compliance processes produce norms of vulnerability and risk in human subjects research, particularly in studies at the intersections of computer and social science. Her most recent book, Out in the Country: Youth, Media, and Queer Visibility in Rural America, looked at how young people in the rural United States use media to negotiate their sexual and gender identities, local belonging, and connections to broader, imagined queer communities. Mary’s forthcoming book Ghost Work (Houghton Mifflin Harcourt 2019), a collaboration with computer scientist Siddharth Suri, combines ethnography, interviews, and survey data with large-scale platform transaction data to explore the impact of automation on the future of work through workers’ present-day experiences of on-demand economies. Mary’s research has been covered in the popular press, including The New York Times, the Los Angeles Times, and the Guardian. She served on the American Anthropological Association’s Executive Board and chaired its 113th Annual Meeting. She currently sits on the Executive Board of Public Responsibility in Medicine and Research (PRIM&R) and Stanford University’s “One-Hundred-Year Study on Artificial Intelligence” (AI100), looking at the future of AI and its policy implications.
A book signing will immediately follow her talk.
**This talk qualifies for 1 hour of advanced responsible conduct of research (RCR) training.
To register, please go to: http://www.clemson.edu/orc/
Friday, September 14 at 9:30am
Watt Innovation Center, Auditorium