Last night Miriam Posner gave an interesting talk in the “Exploring Digital Humanities” series at the University of Colorado Boulder that explored the unease that humanists often feel when their materials are described and treated as “data.” The creation of data requires careful categorization so that the materials in question can be counted and queried, but really good scholarship in the humanities, she pointed out, seeks to break received categories. Certainly this has been how I’ve understood my own work – as a sustained attack on the binaries that structure the study of the Bronze Age – but I nevertheless found that I didn’t have as much of a problem with understanding my materials as “data” as many in the audience seemed to.
Maybe this is because I’m not much of a humanist – my theoretical inclinations have always tilted towards the social sciences – or maybe it’s because I’m an archaeologist, and archaeologists seem to be more comfortable with the notion of “data,” especially as field teams have grown in size and the number of specialists required to run an archaeological project has increased. These specialists and team members produce interpretations and materials that need to speak to one another, and here digital tools are invaluable (as many of the contributions to the excellent Mobilizing the Past for a Digital Future emphasize).
Prof. Posner’s talk began (sort of – I was late, since I came straight from class) with the Culturomics paper published in Science in 2011 (which, I am embarrassed to admit, I had never heard of), and she focused on what I might call the “front end” of a digital humanities project: taking the mass of cultural materials and making them amenable to the structure of a database. What I found more objectionable about Culturomics, on the other hand, was the “back end”: the interpretations produced by Culturomics’ quantitative analysis. That is to say, once the data have been carefully cooked (since we all know that raw data do not exist) and analyzed, there is a tendency for the interpretation to follow simply and directly from whatever numbers are spit out. For instance, the conclusion drawn from these data
is that “in the battle of the sexes, the ‘women’ are gaining ground on the ‘men’.” Is this meant to be serious? It’s certainly presented as such but it’s hard to believe that anyone would say this with a straight face about “a new type of evidence in the humanities.”
For me, this is a good illustration of the worst kind of research (in the humanities or not). Good research requires time, careful thought, and most of all a real and sustained engagement with materials (or data, or evidence, or whatever you want to call them). We need to spend thousands of hours with our materials before we can make them sing, just as a musician or an athlete needs to practice and practice and practice. In that way academic work is like a craft: ideas are well and good, but we need to work with materials and follow many dead ends before we can make our ideas do work. Any research in which the interpretation takes 5 to 50 seconds to produce is (usually? always?) bad research. (Also, where’s the fun in coming up with a dumb interpretation that didn’t take you any hard work?)
For reasons that I don’t really understand, there seems to be a market for this kind of work (regardless of whether it’s digital or analog). In Greek archaeology, my field, the equivalent seems to be something like: “Look, I excavated this temple, and I think it’s the temple mentioned in this Classical text. The end.” That’s fine as far as it goes – it’s not the worst thing to try to connect material culture to texts – but it’s not really a conclusion so much as a banal observation. And it seems odd to me that so many people want to take shortcuts, to make interpretation easier, when in fact it should be hard. Digital tools give us the opportunity to make sense of more and more diverse materials, to integrate them and let them communicate – but none of that makes interpretation any easier. In fact, it can make interpretation harder: harder, for example, to ignore evidence that doesn’t agree with our interpretation. And that’s good. It’s supposed to be hard.