Wednesday, 6 July 2016

The Digital Humanities in Three Dimensions



This post is adapted from a talk I gave to the annual conference of the Australasian Association of Digital Humanities in Hobart on 21st June 2016.  It was a great conference, with strong papers and lively discussions.  This particular talk generated perhaps more heat than light, but the main point seems important to me.



The Digital Humanities is a funny beast.  I tend to think of it as something of a pantomime horse – with criticism, distant reading and literary theory occupying the front end – all neighing and foot stamping at the MLA each year (and in the Los Angeles Review of Books) - while history, geography and library science are stuck in the rear – doing the hard work of creating new digital resources, and testing new tools.  Firmly at the back end of this arrangement, I spend much of my time hoping that the angry debates in the front don’t result in too many ructions behind.

But as a result of this weird portmanteau existence the Digital Humanities – its debates and its aspirations - has been largely about text.  As Matthew Kirschenbaum has noticed, much of it can be found in the English department.  Its origins are always located in the work of Father Busa, and its greatest stars, from Franco Moretti onwards, keep us focussed on the ‘distant reading’ of words. Indeed, the object of study for most Digital Humanists remains the inherited text of Western culture – now available for recalculation via Google Books, ECCO, EEBO, and Project Gutenberg. 

And because Digital Humanities is being led from areas of the academy that take as their object of study a canonical set of texts (however extensive and contested), we have been naturally led to use tools that privilege text analysis and to ignore methodologies that are focussed elsewhere.  The popular tools on the block are topic modelling of text, and network analysis based on natural language processing of text.  This is particularly true in North America, where subjects such as geography do not have as strong an institutional presence as in Europe and much of the rest of the world, and where the spatial and sonic turns in the humanities feel less well established. 

This emphasis on text tends to make the Digital Humanities feel rather safer than it should. While the digital humanities is frequently cited for its disruptive potential - its ‘affordances’ – it is inherently conservative about what constitutes a legitimate subject, and has breathed new life into areas that forty years ago already felt moribund.  The Enlightenment, and the papers of Newton, Austen, Bentham and Darwin – all dead writers of an elite stamp - have been revived and their ‘texts’ made hyper-available.


In part, this is just about the rhythms of the academy.  The Digital Humanities arose just as post-modernism and second wave feminist criticism seemed to exit stage left, and with them, much of the imperative to critique the canon.  But it is also a result of underlying economic structures and the technologies of twentieth-century librarianship.  We very seldom acknowledge it, but the direction of our work is frequently determined and universally facilitated by the for-profit commercial information sector – by the likes of ProQuest, Google and Elsevier, Ancestry.com and Cengage Gale.  And they in turn are the product of a hundred-year history that has shaped what is available to all researchers in the humanities. 

If you want to know why, for example, The Times Digital Archive was the first major newspaper available online; if you want to know why early modern English books came next; if you want to know why Indian and African and South American literature is not available in the same way, it is down to selections made by these companies, and selections made, not last year, but a hundred years ago.

The current digital landscape is actually a reflection of an older underlying project, and an older technology.  Perhaps the biggest influence on what is available to researchers online – the biggest selection bias involved - is just a ghost of commercially produced, for profit, microfilm.  In other words, we have text because that is what people thought was important in 1906 or 1927, or 1935.  We tend to forget that microfilm was the great new technology of the twentieth century - and was itself part of an apparently radical disruptive intellectual project.  It is worthwhile remembering the details.


In 1906 Paul Otlet and Robert Goldschmidt proposed the livre microphotographique – the microphotographic book – as the basis of a World Center Library of Juridical, Social and Cultural Documentation.  This was to be the ultimate universal library and knowledge machine – the library at Alexandria made new – and it was made possible by microfilm. 

Later perfected for commercial use, microfilm had become by the late 1920s the method of choice at the Library of Congress, which used it to film and republish some 3 million volumes from the British Library between 1927 and 1935; and in 1935, Kodak started filming The Times on a commercial basis.

And this pre-history of heritage material on the web is relevant for a simple reason.  It costs less than a penny per page to generate a digital image from a microfilm.  It is automated to the point that all you do is feed the reel into a machine and wait.  By way of comparison, it costs around 15 pence per page to generate a similar image from a real book – even with modern automation – and three times that again to capture a page of manuscript in an archive.  For many of the projects designed during the first decade of the web, it was cheaper to have material microfilmed as a first step in digitisation.

Seventeenth and eighteenth century books are available online precisely because the Library of Congress microfilmed them in the 1920s; and The Times Digital Archive is available because Kodak microfilmed it over eighty years ago. Chinese and Arabic literature is not available in the same way because the Library of Congress and Kodak and their ilk decided it was not important. ProQuest, the multi-billion pound corporation that supplies half the material used by Digital Humanities scholars, started as University Microfilms International in 1938. 

In other words, what happened in the twentieth century – the aspiration to create a particular kind of universal library, and to commercialise world culture (and to a 1930s mind, this meant male and European culture) – essentially shapes what is now available online. This is why most of the material we currently have is in black and white instead of colour.  And most importantly, it is why we have text; and in particular, canonical texts in English.

And if the Digital Humanities was really only the front end of that pantomime horse, this would not be that big a deal.  But the Digital Humanities is also the back end – all the people creating the infrastructure that defines world culture online.  If you ask an undergraduate (or most humanities professors) about their research practices, it rapidly becomes clear that hard copy wood pulp has been replaced by digital materials.  What we study is what we can find online.  

In part, the selection bias driven by the role of microfilm, and the textual bias this implies, just means that, like the humanities in general, the digital sort is inherently, and institutionally, Western-centric, elitist and racist.   Rich white people produced the texts that the humanities tend to study, and despite the heroic multi-generation effort that has sought to recover female voices, and projects seeking to give new voice to the poor – from below – this selective intellectual landscape remains.

In other words, the textual Digital Humanities offers a superficial and faux radicalism that effectively reinforces the conservative character of much humanities research.  The Digital Humanities' problem in recruiting beyond white and privileged practitioners is not just down to the boorish cultures of code – rude male children being unwelcoming – but a result of its object of study.

All of which is just by way of introducing the real subject of this post - that for us to actually grasp the ‘affordances’ that the digital makes possible we really need to change that ‘object of study’ and move beyond microfilmed cultures.   

And that when we add space and place, time and sound, to our analysis, and when we start from a hundred other places than the English department – from geography and archaeology, to quantitative biology and informatics – we can create something that is more compelling, more revealing and more powerful – and arguably more inclusive and democratic along the way.

By way of pursuing this idea, I want to go through a few of the different ways new tools and approaches create real opportunities to move beyond the analysis of ‘text’ to something more ambitious; and in the process attack that very real inherent bias – and inherent conservatism -  that the ‘textual’ humanities brings with it. 

The rain falls on every head – and I just want to explore how we can move beyond the elite and the Western, the privileged and the male. 

In the humanities we think of digitisation of text, but in a dozen other fields they are digitising different components of the physical world.  And when everything is digital – when all forms of stuff come to us down a single pipeline – everything can be inter-related in new ways.  The web and the internet simply provide a context in which image, sound, video and text are brought onto a single page.

Consider for a moment the ‘Haptic Cow’ project from the Royal Veterinary College in London.  In this instance they have developed a full-scale ‘haptic’ representation of a cow in labour, facing a difficult birth, which allows students to physically engage and experience the process of manipulating a calf in situ.  Imagine this technology applied to a more historical event, or process, or experience.  It suggests that the object of study can be different, and should include the haptic - the feel and heft of a thing in your hand.  This is being coded for millions of objects through 3D scanning; but we do not yet have an effective way of incorporating that 3D encoding into our reading of the past.  

And if we can ‘feel’ an object, it changes how we read the text that comes with it; or the experience that text encodes. The world would look and feel very different if we organised it around those objects – the inherited texts attached to them perhaps - but those objects’ origin and materiality forming the core of the meaning we seek to interrogate.  We can use the technology to think harder about the changing nature of work, or punishment, the ‘feel’ of oppression and luxury. Museums and collections - the catacombs of culture - are undoubtedly just as powerfully selective and controlling as the unseen hand of the publishers and archivists; but in stepping beyond text, we can hope to play the museums off against the text.


The same could be said of the aural - that weird world of sound on which we continually impose the order of language, music and meaning; but which is in fact a stream of sensations filtered through place and culture.  For people working in musicology there seems to be a ‘sonic turn’ in the humanities, but most of us have paid it little heed.

There are projects like the Virtual St Paul's Cross, which allows you to ‘hear’ John Donne’s sermons from the 1620s, from different vantage points around the yard.  Donne is a dead white man par excellence, but the project changes how we imagine the text and the event.  And it again begins to navigate that normally unbridgeable space between text and the material world to help give us access to the experience of the beggar in the crowd, of women, children and the historically unvoiced. 

For myself, I want to understand a sermon heard in the precise church in which it was delivered; a political speech in the field, or parliamentary chamber; or an impassioned defence in the squalid courtroom in which it was enacted, or under an African judgement tree – with the weather and the smell thrown in.  And I want to hear it from the back of the hall, through the ears of a child or a servant.

This would help challenge us to think harder and differently about texts that purport to represent speech, and texts that sit between the mind and the page.  Recorded voice – even in the form of text – is inherently more quotidian, and inherently more likely to give us access to the 90 percent of the population whose voices are recorded, but whose 'text' is not.  Text recording speech is different to text produced by the elite power users of the technology of writing – who write directly from mind to page.  This at least shifts us a bit – from text, to voice.

Similarly, in the work of people such as Ian Gregory, we can see the beginnings of new ways of reading both the landscape, and the textual leavings of the dead in the landscape.   His projects on mapping the Lakeland poets, and mapping 19th century government reports, imply a new and different kind of reading.


What happens to a traveller's journal when it is mapped onto a landscape? What happens to a landscape painting when we can see both its reference landscape, and the studio in which it was completed?  What happens even to text, when it is understood to encode a basic geographical relationship?  How do we understand a conversation on a walk when we can map its phrases and exchanges against the earth’s surface?  And what forms of analysis can we undertake when each journey, each neighbourhood, each street and room, are available to add to the text associated with them? 

The rain falls on every head.

All of which is to state the obvious.  There are lots of new technologies that change how we connect with historical evidence – whether that is text or something more interesting; and that we increasingly access it all via that single remarkable pipeline that is the online and the digital.   

But it strikes me that adding these new dimensions to the object of study allows us to do something important.  I have spent the last thirty-eight years working on a ‘history from below’ focused on the lives of eighteenth century London’s working people.  And what I want to suggest is that these new dimensions and methodologies actually make that project fundamentally more possible; and by extension make the larger project of recovering the voices and experience of the voiceless dead more possible.  When you add in the haptic, the mapped and the geographical, the aural and the 3D, what you actually end up with is a world in which non-elite – and non-western - people are available in a new way.  You also move from a kind of history as explanation, to history as empathy – across cultures and genders, across time and space.

Sound and space and place are fundamentally more intellectually democratic than text.  90% of our inherited canon comes from rich dead white men; and yet the thronging multitude who stood in St Paul’s Churchyard, the quotidian hordes who walked through the streets and listened to the ballad singers, experienced something that we can now recover.  The sound of judgement as experienced by the women and men who stood trial at the Old Bailey, and their voice of defiance, can be recovered.  And even the cold and wind of weather that can now be captured day by day for a quarter of a millennium can be added to the democratic possibilities new digital resources allow.  Add in the objects in the museums, the sounds of the ships, and their course through the oceans; the measurable experience of labour, and imprisonment, the joy of music and movement, the inherited landscapes, bearing all the marks of the toil of the voiceless dead, and you end up with something new.  The material world – in digital – gives us access to the rest of the world, and begins to create tools that speak to the 99% of the world’s population who, in 1700 or 1800, did not read or write, and did not leave easy traces for us to follow.  The Digital Humanities in Three Dimensions challenges us, and empowers us, to write a different, more inclusive, kind of history.

The rain falls on every head.   

Monday, 6 July 2015

Sources, Empathy and Politics in 'history from below'.



This post was commissioned for inclusion in an online symposium on 'history from below' over at the Many Headed Monster, and is best read in conjunction with the other pieces posted there.  I am reposting it here just by way of keeping track of stuff.  



The purpose and form of history writing has been much debated in recent months; with micro-history, and by extension history from below, being roundly condemned by historians Jo Guldi and David Armitage as the self-serving product of a self-obsessed profession.  For Guldi and Armitage the route to power lies in the writing of grand narrative, designed to inform the debates of modern-day policy makers – big history from above.   Their call to arms – The History Manifesto – has met with a mixed reception.  Their use of evidence has been demonstrated to fall short of the highest academic standards, and their attempts to revise that evidence sotto voce have been castigated for their lack of transparency.  

Regardless of the errors made along the way, of more concern to practitioners of ‘history from below’ is Guldi and Armitage’s assumption that in order to influence contemporary debate and policy formation we should abandon beautifully crafted small stories in favour of large narratives that draw the reader through centuries of clashing forces to some ineluctable conclusion about the present.  I have no real argument with the kind of history they advocate – and the success of recent works such as Thomas Piketty’s Capital suggests that it can both do justice to the evidence, and contribute to modern policy debate.  And I am sure that with a couple of decades’ hard work (there were 19 years between the publication of the Communist Manifesto, and Das Kapital), Guldi and Armitage will produce a book that lives up to the hype.

But, they fundamentally misrepresent the politics of history writing, and of micro-historical analysis in particular.  And what they seem to miss is a simple appreciation of the shock of the old.  The lessons of history are very seldom about ‘how we got here’ with all its teleological assumptions, but more frequently about how we can think clearly about the present, when we cannot escape from it.  

Understanding classical Greek attitudes to sexuality, Tokugawa Japan’s system of governance, or the use of concentration camps in the Boer War is not about grand narrative, but the interrogation of difference.  What the past has given us is an ‘infinite archive’, reflecting a real – if not fully knowable – world.  By interrogating that archive, we are freed to test our assumptions about the present.  In a scientific mode, we might literally test a theory against the evidence; but just as valid, in a humanist mode we can interrogate a word, a phrase, an emotion for its meaning.  In either case, history rapidly becomes a tool to think with – testing and probing the past because it allows us to think about the present more carefully.  

For this purpose, for the purpose of thinking with history, the precise topic of historical analysis is secondary, and ‘grand narrative’ is counterproductive.  In part, grand narrative doesn’t work for this purpose because it is inherently teleological, and brings with it ill-digested assumptions about how human society functions.  One need look no further than the facile accounts of empire found in the work of historians like Niall Ferguson to see the pitfalls; or the risible nationalist diatribes of the ‘Historians for Britain’ collective.  If you start with a ‘dog in the fight’ – a defence of American ‘empire’; or an anti-EU agenda - your ability to see clearly is at least compromised.

‘History from below’, by contrast, appeals to a very different kind of politics; and it is in essence, a politics of empathy and voice explored through a conversation with the dead.  In the British Marxist tradition, it was founded in the creation of a humanist account of the ‘radical tradition’ that gave to every stockinger and handloom weaver an identity and personality.  The politics of this tradition was found in the demand that the reader empathise with individual men and women caught in a whirl of larger historical changes, and it was, and is, a politics of emotion.  The methodologies of ‘history from below’ use detail and empathy to demand of readers a personal engagement with a specific time and place; just as micro-histories use the contrast between the everyday and the remarkable, to force the readers’ engagement.

And as a political project, both ‘history from below’ and micro-histories have been remarkably successful.  The public politics of the west in the last fifty years have been dominated by forms of ‘identity’ politics.   These new politics have helped to push aside the twentieth century’s disastrous obsession with nationalisms (the focus of both older grand narratives, and the crutch leant on by historians such as Ferguson and ‘Historians for Britain’).  

We now have detailed and beautiful histories of the experience of the enslaved, of people excluded by race, gender and sexuality; by dis/ability and poverty.  Each of these ‘histories from below’ has evolved in dialogue with contemporary politics, both feeding the activism of modern campaigns, and perhaps more importantly, ensuring that no-one can be dismissed as less feeling, less human, less important, than anyone else.  By changing the focus of historical writing and research, ‘history from below’ has effectively eroded the inherently racist notion of the ‘Volk’ in favour of ‘Leute’; has eroded nationalisms in favour of individual experience.

In other words, history from below has been a remarkably successful form of cultural politics (and Politics), that owes its basic success to the creation of an imaginative and empathetic connection between individuals, past and present.  But to achieve this end, history from below has made a further contribution to both historical scholarship and methodology that places it at the centre of a wider set of developments.

Despite the (over) reliance of historians such as Edward Thompson on government spy reports, and many social historians’ addiction to parliamentary ‘blue books’; history from below demands that we seek alternative pathways to knowing about individuals – that we seek out readings that work self-consciously against the grain and documents that, however fleetingly, record the experience from below.  And herein lies the problem and the opportunity.  Our sources create a fundamental tension between the bureaucratic character of most inherited documentation reflecting experience from below (endless lists and accounts), and the political work of history from below as a project – to create empathy across time and space.  The conundrum becomes: how do we turn a name, perhaps a number, if you are lucky a single line, into a human being?

In part, the answer to this quandary has been found in family and community reconstruction; in the creation of relational databases that pull together fragments of information from as wide a body of sources as can be managed.  When, for instance, small fragments of narrative sieved from pauper letters and examinations are combined with details of pensions lists and the raw biology available through the International Genealogical Index, we come close to being able to create compelling simulacra of the dead.   A shared experience of childbirth, or hunger; of disability or simple poverty, can be enough to bring to the reader’s mind’s eye a fully formed human being – all the details filled in via the reader’s imagination.   

But even these limited details are unavailable for many.  So we also use strategies of detailed contextualisation.  In part, these strategies mimic the forms of fiction – where small details are used to compress a scene to its tightest compass.  In history from below, we might use location and the built environment as ways of giving authority to an event that would otherwise be dull and off-putting – one of a million settlement examinations; one of five hundred shared beds in a workhouse.  All of which simply gets us to the point where the form and genre of writing history from below comes into direct conflict with the sources we normally use, creating a tension which in turn explains why ‘history from below’ has been both remarkably productive in the creation of new methodologies; and why, more importantly, it creates a need to rethink and remake the genre of history writing more broadly.

In other words, in the face of challenges from advocates of ‘big history from above’ it seems to me that we are confronted with a series of opportunities, created by the very practice of writing history from below; that in turn provide the basis for a fuller political agenda.  We have an answer to the siren calls of ‘big history’.  And the answer demands just a few things.

First, we need to be much more sophisticated in how we theorise the process of writing and presentation.  There is currently no-one seriously unpacking the literary practice of historical writing from below in a way that would allow us to examine it as an object of study in its own right.  And yet, by being more self-conscious in how we construct emotion and engagement through textual practice, we can raise our game substantially – allowing us to recognise (and teach) the different techniques we use; and to categorise varieties of history writing in new ways.  And while no one would want to see too much self-obsessed navel gazing, there is a real opportunity for substantial criticism that would in turn allow us to present ‘history from below’ as a more fully described set of generic conventions.  Not perhaps a ‘science’, but a clear methodological choice.

Second, we need to embrace innovation more fully, and to identify the digital tools that allow us to construct lives and experience from the distributed leavings of the dead.  The world of early modern and nineteenth century Britain, in particular, is newly available to new forms of connection.  Nominal record linkage, building on a generation of work undertaken by family historians, should allow us to tie up and re-conceptualise the stuff of the dead, as lives available to write about.  Or we can revolutionise close reading of text through a radical contextualisation of words.  By allowing every single word or phrase to be mapped against everything written in the year or decade, we could create a form of close reading that makes for powerful history writing.  Or, we could think about contextualisation more imaginatively, by adding a few more dimensions to the context in which we place our objects of study.  Where is the 3D courtroom and church pulpit; where the soundscape and sound model; where the comprehensive weather data that would allow us to write a life, an event, a moment in new and different detail?

And finally, my belief is that we need to be more explicit about the political work that we think ‘history from below’ is doing.  If we think the work contributes to a modern political conversation, I think we need to say so – not to simply advocate for our own beliefs, but to use the past to think more carefully about the present.  From my perspective, it does not matter overmuch whether the thinking is about gender, poverty, race or disability; what matters is ensuring that a conversation with the dead forms a part of our conversation about the present.  

When the likes of Jo Guldi and David Armitage, and the ‘Historians for Britain’ group advocate for big history and the longue durée, they are making specific claims about how they can intervene in a modern politics; and effectively denigrating other people’s politics along the way.  It is only by countering these claims, and replacing them with our own more subtle analysis that we can do full justice to the aspirations and labours of our colleagues.  There is a coherent intellectual project in ‘history from below’, that perhaps needs more critical inspection, that perhaps needs more technical innovation, but which nevertheless provides the best opportunity we have to create an inclusive, progressive, empathetic history – a way of thinking clearly with the past.