The increasing ubiquity of machine learning algorithms in our everyday lives has prompted much critical debate over their ethical and epistemological implications. Much of this debate has focused on the kinds of social biases they encode and the need for humans to correct or intervene in their automated decisions. Humanistic disciplines naturally have much to offer these debates, with their long tradition of attending to the partiality of any claims to generalizable truth. But this critical stance can at times hinder recognition of machine learning as itself mired in a long history of debate over how to reason about error, bias, and the uncertainty of all knowledge. What might humanistic inquiry gain not by thinking around these ideas, as if they were problems belonging to the machine or a naïvely positivist worldview, but instead with them, as both historical and philosophical counterpoint to its own established theories?
In this talk I draw on several case studies from my research to reflect on the usefulness of reasoning with these ideas as they manifest in machine learning and statistical methods more generally. This includes projects related to the classification of poetic genres, recovering the semantics of racial bias in Japanese prose fiction, and analyzing large-scale bibliographic data on literary translations. In each case I suggest how a deliberate confrontation with statistical error, bias, and uncertainty can open up new sites of interpretation within literary study while also encouraging mutual recognition of the knowledge gaps that characterize qualitative and quantitative methods alike.
Dr. Hoyt Long is Associate Professor of Japanese Literature in the Department of East Asian Languages and Civilizations at the University of Chicago.
Wednesday, 26 January 2022
14.00 - 15.00
Online - Webex
If you would like to participate, please send an e-mail to firstname.lastname@example.org to receive the link.