Over the course of my training as a digital historian, I have had two opportunities to learn a programming language through classroom instruction. The first was Prof. Stephen Ramsay’s Electronic Text course in the fall of 2010, where I formally learned Ruby. The other was this past fall in a digital humanities seminar with Prof. William Thomas, where I taught myself Objective-C in a month to build an iOS application.
Most broadly, I am interested in the idea of programming in the humanities as something separate from Software Studies (Lev Manovich et al.), Critical Code Studies (Mark Marino et al.), and Platform Studies (Ian Bogost et al.) (hat tip to Steve Ramsay for recently pointing out this distinction to me). The digital humanities perspective on code is different, and perhaps this is an area for discussion.
I propose a general discussion about the nature and training of humanities programming:
- How do we help prepare graduate students and faculty in the basics of programming?
- Or, should they be concerned with programming at all (Steve says yes)? Is it just nice to have, or essential? Is programming best left to the professionals?
- What are the benefits and pitfalls of taking the time to learn a programming language (or several)? How do you decide which language is best?
- How might programming knowledge shape careers?
- What mentoring services or formal instruction in humanities programming could be introduced into graduate training?
- How does building shape the way we think, research, teach (see Kathi Berens, e.g.)?
- How does failure produce results?