The other night on Proggit I came across a blog post praising continuous deployment. Continuous deployment is the practice of deploying application code to production as soon as it is written, which lowers cycle time and opens up individual initiative. As I was reading the post and doing a bit of my own Googling, I began to wonder if continuous deployment worked as a metaphor for digital scholarship. These are my quick thoughts.
At conferences and in discussions with people unfamiliar with digital history, I often talk about the iterative process of digital history – that the finality of a journal article, dissertation, or book could seemingly never exist for digital projects, given the nearly unlimited room for data and information that our modern servers and hard drives provide (unlike the analog limitations of ink, paper, and costs). More than one conversation has wandered into whether this sort of scholarship is good for history. After all, if the content of a digital project is continually changing, can a project remain reliable for posterity? The skeptics raise an important point. Indeed, if continued research leads a scholar to draw conclusions new or different from those originally posited in a project, should there be some way to archive the "old" narratives and analysis for posterity’s sake? I’m sympathetic to this view, especially since one of our goals with digital scholarship should be citable material. We need some sort of stability, though I don’t know exactly what version-controlled digital scholarship looks like yet.
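The version-control tools developers already use hint at what that stability could look like. As a purely hypothetical sketch (the repository name and file are my own invention, not any real project), a digital project kept under git could tag each stable, citable "edition" of a narrative, so a citation can point to a fixed state even as the scholarship keeps evolving:

```shell
# Hypothetical sketch: a digital project kept under git, where each
# stable, citable "edition" of the scholarship is tagged.
git init essay-project && cd essay-project
git config user.name "Author" && git config user.email "author@example.com"

echo "Draft narrative, v1 interpretation." > narrative.md
git add narrative.md
git commit -m "First public edition of the narrative"
git tag -a v1.0 -m "Citable edition"

# Later research changes the narrative, but v1.0 stays retrievable:
echo "Revised interpretation after new archival data." > narrative.md
git commit -am "Revise narrative in light of new data"

git show v1.0:narrative.md   # prints the archived v1.0 text
```

A reader citing "v1.0" would always recover the exact text the author published at that moment, while the live project continues to change on top of it.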
However, I think there is a larger point to be made about continuous deployment in DH. The iterative process allows us to react to new data or interpretations, easily update material, and unlock data and knowledge for broad audiences.