Digital History and Historiography

Impressions from DH2017 - Different Facets of Access in Digital Humanities

DH2017 Montreal
The annual conference of the Alliance of Digital Humanities Organizations (ADHO) took place on 8-11 August 2017 in Montréal (Canada), on the campus of McGill University, and was co-organised by McGill University and the Université de Montréal.

For the first time officially bilingual, in French and English, the DH2017 conference spotlighted the theme of “Access/Accès” approached by a variety of long and short paper presentations, posters, panels and virtual short paper sessions, pre-conference workshops and tutorials, as well as the opening and closing keynotes.

Since depicting the overall scene of the conference is not possible for a participant having to choose from a rich palette of parallel sessions, this blog entry will instead draw a few impressions from the presentations I attended, grouped under five facets: Reception, Collaboration and infrastructure, Tools, Methods, Models and concepts.

 

Reception

The opening plenary lecture, The unexpected reader, was presented by Marin Dacos, director of the Center for open electronic publishing (Cléo, France), and focused on Open Access from the perspective of its reception and usages. In particular, the keynote pointed to the so-called “unexpected reader”: a reader who is not the intended or main target of a given publication, whose access to that publication is ephemeral, accidental, determined by a particular event, and probably more of an amateur than a professional nature, and who may be uncovered by dedicated tools such as Umberto, the unexpected reader detector. Beyond the methodological appeal of the model, the unexpected reader approach draws attention to the necessity of making a broader range of readers aware of Open Access, whether through alternative systems, through alliances between research infrastructures and other forms of media with larger public coverage, or even through workshops for “less academic” writing.

Aiming to capture reading behaviour related to space in an American city, in their paper Real and Imagined Geography at City-Scale: Sentiment Analysis of Chicago’s “One Book” Program, Ana Lucic and John Shanahan, from DePaul University, USA, combined comparative analysis of library circulation data for a selection of books from a promotional city program with sentiment analysis of Chicago-related places in the selected texts. The question was whether there is a correlation between the number of check-outs of a book and the way it features places close to the reader’s neighbouring library branch. The study asked whether this type of analysis, which implies access to various data sources, such as US census demographics, library archives and social media (Twitter feeds, Goodreads), as well as to the linguistic registers in which a city’s places are written about, may help to predict the library circulation of future titles in the program, the influence of the library’s promotional events and the impact of social media pointers, and to detect ties between literary forms and real geography.
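The correlation step at the heart of such a study can be sketched minimally. The following is a hedged illustration, not the authors' actual method or data: the toy sentiment lexicon, the passages and the check-out counts are all invented for demonstration, and the score series is correlated with circulation using Pearson's r:

```python
# Toy sentiment lexicon -- an assumption for illustration, not the study's actual resource.
LEXICON = {"vibrant": 1.0, "beautiful": 1.0, "bleak": -1.0, "dangerous": -1.0}

def sentiment_score(text):
    """Average lexicon score of the words in a passage (0.0 if no word matches)."""
    words = text.lower().split()
    hits = [LEXICON[w] for w in words if w in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length numeric series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented example: sentiment of place-related passages vs. check-outs near that place.
passages = ["a vibrant beautiful neighbourhood", "a bleak dangerous block", "a beautiful park"]
checkouts = [120, 35, 90]
scores = [sentiment_score(p) for p in passages]
r = pearson_r(scores, checkouts)
```

A real study would of course work at the scale of whole branch catchments and use an established sentiment resource rather than a four-word lexicon.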

My presentation, co-authored with Catherine Emma Jones (University of Luxembourg), From Usability Testing and Text Analysis to User-Response Criticism, approached access from the perspective of users’ responses to interface usability tests, with the aim of going beyond usability-oriented interpretations. Considering that digital tools for text-as-network analysis, textometry and sentiment analysis may support this kind of enquiry, data from different usability cases were analysed and a typology of users was derived. The results of the study suggest that combining textual analysis with elements from the theory of aesthetic response may foster new paths of enquiry into the user’s self-projection in the digital space and address humanities-related questions such as: what types of reflection are expressed in the response (general beliefs, own experience, experiment-based)? Whose perspective is represented when the user says I (real person/user, generic user) or you (him/herself, generic user, designer, observer of the experiment, software, computer)? How can the user’s affective involvement be interpreted in the experiment versus the larger technology-related context?

 

Collaboration and infrastructure

Another facet of access was considered by Lisa M. Snyder and Alyson Gill, from the University of California, Los Angeles and the University of Massachusetts, Amherst, USA, in their paper Cultural Challenges for 3D Research. Elaborating on the outcomes of the 2016 session of the NEH Advanced Topics in the Digital Humanities Summer Institute on Advanced Challenges in Theory and Practice in 3D Modeling of Cultural Heritage Sites, held at UCLA, the presentation focused on several categories of challenges to be addressed by people and institutions involved in 3D research and 3D-based knowledge production. First, metadata should make 3D content discoverable and (re)usable as research and learning objects, and allow connections among cultural heritage communities. Second, publishing 3D work may require stable platforms, standards for peer-reviewing 3D work, publication prototypes, as well as preservation standards across 3D types. Third, there is a need for an infrastructure for collaboration, dissemination platforms and interactive environments that may provide the basis for the acceptance of 3D work as a discipline in its own right, implying changing attitudes at the disciplinary level.

Similarly, Lisa Spiro, Geneva Henry, Toniesha Taylor and Amanda French, from Rice University, George Washington University and Prairie View A&M University, USA, in their presentation Establishing a “Resilient Network” for Digital Humanities, pointed out the necessity of a stable cross-institutional framework in Digital Humanities to facilitate access through collaboration among institutions, building expertise and communities, and creating collaborative networks and cross-institutional teams. Since such an initiative has to face a series of challenges, such as differing institutional platforms, procedures and capacities, and the creation of a shared curriculum, the speakers recommended a set of key activities, e.g. extra-institutional workshop sessions, travel and training awards, new jump-start projects, and writing guidelines for other institutions to join, in order to establish a networked model for supporting and sustaining teaching and research in Digital Humanities.

Addressing the question of Collaborative Writing to Build Digital Humanities Praxis, Brandon Walsh, from the University of Virginia, USA, focused on the collaborative writing of Open Access materials as a form of praxis-oriented pedagogy. This may involve not only teachers but also students, who can produce and revise each other’s work and contribute to meaningful conversation in the academy. The proposal was illustrated by a case study: an open coursebook, Introduction to text analysis, written with student readers in mind, proposing a series of exercises for practising the acquired concepts and skills, and inviting students and other collaborators to discuss, produce and revise shared educational content.

Tools

Access was also reflected on from the viewpoint of building tools and digital environments for teaching and research in Digital Humanities. For instance, Susan Schreibman, Constantinos Papadopoulos, Brian Hughes, Neale Rooney, Colin Brennan, Fionntan Mac Caba and Hannah Healy, from Maynooth University, Ireland, in their paper Phygital Augmentations for Enhancing History Teaching and Learning at School, proposed History in a box, a learning environment making use of physical and digital sources as a new way of teaching history and of making pedagogical use of technology among digital natives. Winfried Höhn and Christoph Schommer, from the University of Luxembourg, presented RAT 2.0, a Referencing and Annotation Tool for place-marker detection, geo-referencing support, and text detection and recognition on historical maps. Another proposal was Linked Places: A Modeling Pattern and Software for Representing Historical Movement, by Karl Grossner, Merrick Lex Berman and Rainer Simon, from World Heritage Web, Harvard University and the Austrian Institute of Technology, intended to facilitate the representation and analysis of geographic movement (of people, ideas, cultural practices, commodities) between different places over the course of history. Hatem Mousselly Sergieh, Michael Piotrowski and Iryna Gurevych, from the UKP Lab, Technische Universität Darmstadt, and the Leibniz Institute of European History (IEG), Germany, in their paper EGOlink: Supporting Editors of Online Historical Sources through Automatic Link Discovery, featured EGO (European History Online), a Web-based publishing platform for academic articles on the history of Europe, including a module for (semi-)automatic linking of articles inside the collection and with external resources.
Focusing on the annotation of letters from the Willa Cather Archive (WCA) in their Annotonia: Annotations from Browser to TEI, Gregory John Tunink, Karin Dalziel, Jessica Dussault and Emily Rau, from the University of Nebraska-Lincoln Libraries, USA, proposed Annotonia, a tool allowing annotations to be edited directly in the browser, stored and reviewed by multiple editors, and exported to the TEI P5 XML format. Dedicated to Humanities scholars and students interested in text analysis, Lexos: An Integrated Lexomics Workflow, presented by Scott Kleinman and Mark LeBlanc, from California State University, Northridge and Wheaton College, USA, showcased a set of tools designed to guide users through a workflow for the preparation of textual corpora intended for computational text analysis.

 

Methods

The methodological facet of access was represented as well. Dominic Forest, Vinh Truong and Yvon Lemay, from the Université de Montréal, Canada, in De quoi est-il question dans le discours en art contemporain? La fouille de textes appliquée à l’art contemporain dans les centres d’artistes [What is contemporary art discourse about? Text mining applied to contemporary art in artist-run centres], demonstrated how topic modelling methods can be applied to study the evolution of topics in contemporary art corpora. In Measuring completeness as metadata quality metric in Europeana, Péter Király, from the Gesellschaft für wissenschaftliche Datenverarbeitung mbH Göttingen, Germany, proposed a methodological framework for measuring metadata quality and pointed to the need for data checking, normalisation and completeness, as approached by dedicated bodies such as the Europeana Data Quality Committee, which works on data quality aspects with a focus on cultural heritage scenarios. Sven Buechel, Johannes Hellrich and Udo Hahn, from Jena University, Germany, in their paper The Course of Emotion in Three Centuries of German Text—A Methodological Framework, featured a methodological framework based on the three-dimensional emotion model VAD (Valence, Arousal, Dominance) for analysing affective information and its evolution in non-contemporary historical and literary texts. In Twitter Comme Source Pour l’Histoire Du Temps Présent : Le Référendum Grec De 2015 Comme Etude de Cas [Twitter as a source for the history of the present: the 2015 Greek referendum as a case study], Sofia Papastamkou, from the Maison européenne des sciences de l'homme et de la société, France, enquired into the use of tweets as primary sources for historians and for global historical analysis, addressing questions such as: How soon is now? Where should the line be drawn between now and the recent past? What does the Twitter network signify and what does it tell us about a particular historical event, such as the 2015 Greek referendum?
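The intuition behind a completeness metric of the kind Király discussed can be sketched in a few lines. This is a hedged, generic illustration: the field names below are assumptions for demonstration, not Europeana's actual schema, and real frameworks weight fields and distinguish several sub-dimensions of completeness:

```python
# Hypothetical set of expected metadata fields -- illustrative, not Europeana's actual schema.
EXPECTED_FIELDS = ("title", "creator", "date", "subject", "rights", "provider")

def completeness(record, fields=EXPECTED_FIELDS):
    """Share of expected fields that are present and non-empty in a metadata record."""
    filled = sum(1 for f in fields if record.get(f) not in (None, "", []))
    return filled / len(fields)

record = {"title": "Carte de Montréal", "creator": "", "date": "1888", "rights": "CC BY"}
score = completeness(record)  # 3 of the 6 expected fields are filled
```

Even this naive ratio already supports the kind of collection-level comparison the talk described, since per-record scores can be averaged per data provider.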
Focusing on the conceptualisation of space in storytelling, Gabriel Viehhauser-Mery and Florian Barth, from the Universität Stuttgart, Germany, in Towards a Digital Narratology of Space, illustrated a methodology and workflow for the annotation (tokenisation, lemmatisation, part-of-speech tagging, named entity recognition), visualisation and word-list-based frequency analysis of spatial information extracted from a literary text, aiming at modelling space in digitised narratives. Jackie C.K. Cheung, from McGill University, Canada, in Unsupervised NLP for conceptual analysis of events, talked about unsupervised natural language processing methods that allow characterisations of data to be learned without pre-existing annotations, e.g. learning clusters of words that behave in a similar way, as well as event structure, with applications in conceptual analysis for Digital Humanities.
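The word-list-based frequency step of such a spatial-annotation workflow can be sketched simply. In this hedged illustration the word list and the narrative are invented (the actual project works on curated lists and lemmatised German texts); listed spatial terms are counted in a naively tokenised text:

```python
from collections import Counter

# Hypothetical word list of spatial terms -- illustrative only.
SPATIAL_TERMS = {"forest", "castle", "road", "village", "mountain"}

def spatial_frequencies(text):
    """Count occurrences of listed spatial terms in a naively tokenised text."""
    tokens = [t.strip(".,;:!?").lower() for t in text.split()]
    return Counter(t for t in tokens if t in SPATIAL_TERMS)

narrative = ("The road wound through the forest, past the village, "
             "towards the castle on the mountain. The forest grew darker.")
freqs = spatial_frequencies(narrative)
```

In a real pipeline the naive `split()` tokenisation would be replaced by the tokenisation and lemmatisation steps the paper describes, so that inflected forms map onto the same list entry.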

 

Models and Concepts

Other aspects of access were addressed by a variety of presentations centred on conceptual matters. In Modelling computer assisted conceptual analysis in text, Jean-Guy Meunier, from the Université du Québec à Montréal, Canada, talked about conceptual analysis, a common practice in philosophy, linguistics, the human sciences, law and journalism, and about two types of computer-based approaches to conceptual analysis, tool-driven and model-driven, as objects of scientific enquiry. Janneke Van Der Zwaan, Wouter Smink, Anneke Sools, Gerben Westerhof, Bernard Veldkamp and Sytske Wiegersma, from the Netherlands eScience Center and the University of Twente, The Netherlands, proposed Flexible NLP Pipelines for Digital Humanities Research, a model for combining natural language processing tools and defining workflow steps by means of an NLP pipeline package and the Common Workflow Language (CWL). In Data Visualization in Archival Finding Aids: A New Paradigm for Access, Anne Bahde and Cole Crawford, from the Oregon State University Libraries and Press and Oregon State University, USA, combined theoretical and practical considerations and argued for the integration of multiple models of visualization with both archival finding aids and discovery systems in order to enhance access to archival collections.
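The pipeline idea behind such a package can be sketched as a chain of interchangeable steps. This is a hedged, generic illustration rather than the presenters' actual API: the step names are invented and no CWL runner is involved; the point is only that modular NLP steps compose into one reconfigurable workflow:

```python
# Each step is a plain function whose output feeds the next step;
# the names below are illustrative, not the actual package's API.
def lowercase(text):
    return text.lower()

def tokenize(text):
    return text.split()

def remove_stopwords(tokens, stopwords=frozenset({"the", "a", "of"})):
    return [t for t in tokens if t not in stopwords]

def run_pipeline(text, steps):
    """Apply each step in order, feeding each output into the next step."""
    result = text
    for step in steps:
        result = step(result)
    return result

tokens = run_pipeline("The History of The Netherlands", [lowercase, tokenize, remove_stopwords])
```

A workflow language such as CWL plays the role of `run_pipeline` here, but declaratively: it records which tools run in which order, so the same description can be re-executed or shared.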

The closing plenary lecture, The Upside-Down Politics of Access in the Digital Age?, was delivered by Elizabeth Guffey, director of the Masters Program in Modern and Contemporary Art at the State University of New York, Purchase College, USA. Inspired by Vic Finkelstein’s essay To Deny or Not to Deny Disability - What is disability?, the lecture was articulated around three questions. First, what is access for disabled people in the digital realm? There is a common assumption that everything digital is available to everyone. This assumption, however, is largely a question of awareness: environments, including digital ones, can make people more or less disabled, and abilities may affect digital access at the visual, auditory, ambulatory and cognitive levels. Second, why is this so little known? Citing Tanya Titchkosky’s The Question of Access: Disability, Space, Meaning, the keynote pointed to the need to think about access in a different way, by including the perspective of disabled people, which is generally hard for non-disabled people to grasp. Third, can this change? The final remarks underlined the importance of teaching access and of thinking about Web accessibility as teachers, in order to foster more dialogue and a participatory, collaborative and more comprehensive framework for access in Digital Humanities.