Recent years have seen an explosive growth in digitized historical material. Innovations in natural language processing have expanded the possibilities for historians to extract information from large corpora of digitized texts. In debates on big data, it is often claimed that by ‘distant’ reading huge amounts of data, textual patterns will emerge that allow us to answer historical questions in radically new ways. However, in order to better understand patterns in historical data, we need to rely on modelling techniques that make statistical inferences and estimations. Modelling can help historians answer their questions in a reproducible and transparent manner. The current situation is strikingly similar to that of economic historians in the 1950s and 1960s, when a growing body of mathematical formalizations of economic theories, combined with increasing computing power, gave rise to cliometrics: a quantitative approach to economic history. My focus will be on how we can extract patterns of cultural expression from digitized newspapers or, put differently, how we can separate signal from noise in large collections of text. Wevers argues that modelling can help researchers better understand historical dynamics.
Dr. Melvin Wevers is a researcher in the Digital Humanities Lab at the KNAW Humanities Cluster. His research focuses on the study of cultural-historical phenomena using computational means, with a specific interest in the formation and evolution of ideas and concepts in public discourse. Recent projects examine the representation of gender in historical advertisements and the modelling of social mobility within the Dutch East India Company.
Wednesday, 27 November 2019
16.00 - 18.00
followed by a drinks reception
Maison des Sciences humaines - C²DH - 4th floor - Open Space
11, Porte des Sciences