Hi, I'm Jason. I am the Academic Technology Specialist in the Department of History at Stanford University and a historian of the American West. I write here about history, technology, and coffee. You can learn more about me, browse the archives, subscribe to my feed, or follow me on Twitter.
May 20, 2013
On the evening of May 30, 1998, I was lying in bed listening to the local radio. An hour and a half or so earlier, I had been driving with my dad on South Dakota Highway 37 north toward Huron, South Dakota. We were heading for the dirt track races, but turned around after the races were canceled due to severe weather in the area. That night, the news on the radio was terrifying: the small town of Spencer, South Dakota, just twenty miles from my hometown, had been destroyed by a tornado. The storm was among the most destructive in South Dakota’s history. For the next few weeks I collected every front-page story my hometown paper ran about the tornado; I still have those newspapers stored away in a box.
The United States faces more tornadoes than any other country in the world, averaging around 810 every year. The Spencer tornado occurred during one of the worst tornado years on record, which saw 124 recorded tornadoes.1 Moore, Oklahoma, lies within the area of heaviest tornado activity in the United States, known as Tornado Alley, which stretches across the Great Plains. The phrase comes from Air Force meteorologists Major Ernest Fawbush and Captain Robert Miller, who coined it in 1953 during their research on severe weather in the central plains.2 Although every state has the potential to experience a tornado, activity is concentrated in the corridor stretching from Texas to South Dakota, bounded by the Rocky Mountains to the west and the Missouri River to the east.
Growing up on the Plains, I knew severe weather as a common occurrence between the months of May and September. Warm, moist air blows northward from the Gulf of Mexico and collides with dry, colder continental air coming off the Rocky Mountains, creating a volatile climate that is the seedbed for supercell thunderstorms. Today’s tornado in Moore, Oklahoma, and the haunting stories and images emerging from reports reminded me of the destructive force of the Spencer tornado. The Spencer tornado clocked wind speeds upwards of 246 miles per hour; preliminary estimates put the Moore storm’s wind speeds at 199 miles per hour.
Today’s storm was not Moore’s first encounter with strong tornadoes. Fourteen years ago, another intense tornado swept through Moore, with wind speeds peaking at 302 miles per hour. Although wind speeds were lower in today’s storm, its destructive potential was amplified by its forty-minute presence on the ground; tornadoes are often on the ground only for moments, but today’s mile-wide, long-duration storm tore a deadly path of destruction across Oklahoma. The American West often sees extremes in weather and climate, sometimes with deadly consequences.
My best wishes for safety and recovery go to the city of Moore and the state of Oklahoma. To those injured, I wish speedy recovery; to those lost, my heartfelt condolences.
There are several resources covering the storm and its aftermath:
The Red Cross is accepting donations via text message. You can text REDCROSS to 90999 and you will be billed $10 as a donation. The government of Moore is also posting updates on Facebook.
Note that tornadoes were not rigorously recorded until after 1953.↩
See John P. Gagan, Alan Gerard, and John Gordon (December 2010) “A historical and statistical comparison of ‘Tornado Alley’ to ‘Dixie Alley’,” National Weather Digest, vol. 34, no. 2, pages 146-155. [PDF]↩
April 28, 2013
[This post originally appeared at Hive Talkin’ on 2013-04-18.]
In 1814, Thomas Jefferson offered his personal library – some fifty years in the making – to the newly established Library of Congress. His library of works on philosophy, history, science, and literature was meant to cover “everything which related to America, and indeed whatever was rare and valuable in every science.” Jefferson loved knowledge, and the donation of his private library to the Library of Congress allowed the new library to “become the depository of unquestionably the choicest collection of books in the US, and I hope it will not be without some general effect on the literature of our country.”
Today marks the beginning of a new effort at “the choicest collection” of material in the form of the Digital Public Library of America. The conversation began two and a half years ago, when forty-two American libraries discussed where their future lay in the digital world. Their answer: a digital public library, national in scope and delivering open content. DPLA addresses the fragmentation of digital archives, libraries, and museums by bringing their collections together and facilitating the discovery of images, photographs, artwork, and published and unpublished material. The mission, as DPLA describes it, is to bring together “the riches of America’s libraries, archives, and museums, and makes them freely available to the world” and to “expand this crucial realm of openly available materials, and make those riches more easily discovered and more widely usable and used.”
DPLA not only makes available the material, but becomes an advocate for the open access of material. DPLA contains around 2 million documents so far from collections at the Smithsonian, the National Archives, the New York Public Library, Harvard, and the University of Virginia, as well as regional libraries like the Mountain West Digital Library and the Minnesota Digital Library. DPLA brings this material together in an interface that makes discovery easy.
I’ve been playing with DPLA all morning, and there are three areas of DPLA I’m thinking about today:
Without a doubt, access to material in DPLA presents a wonderful pedagogical opportunity. Material that was once difficult to access or simply unknown to students can now be discovered. Research projects that require working with primary sources have another starting point, in addition to the tools we already use in our work. A research project on American civil rights movements might start with a keyword search on Malcolm X, which pulls in sources and metadata from collections at Yale, the Utah State Historical Society, the University of Illinois-Urbana, and elsewhere.
Search terms can also be explored spatially by narrowing on the location or viewed on a timeline for chronological exploration.
Discovery doesn’t have to be serendipitous either. Courses on the history of civil rights, women, and Native America, for example, can consult exhibits already built within DPLA. Exhibits contain narratives about thematic historical moments and pull in primary source material related to the themes.
DPLA has already been given a place in the bookmark folder that holds my go-to places for research, alongside ProQuest, Google, and JSTOR. DPLA serves a purpose similar to research databases by aggregating diverse collections under a single interface. Scholars and students alike have a variety of methods for drilling down into the material by narrowing on categories like location, date, and repository. Subject tags also allow me to explore other avenues and similarly categorized material.
Already, in the short amount of time I’ve spent with the DPLA, I’ve explored some of my dissertation research into the urban history of the Santa Clara Valley and uncovered period-specific maps that I’ll find useful in understanding how urban space grew and developed over time. I look forward to spending more time with DPLA as a research tool, trying out different queries and following subject tags to see what I uncover.
I haven’t dug deeply into this yet, but DPLA provides an API for those who want to build platforms to take advantage of the DPLA’s collections. Harvard and Europeana have already built impressive apps using the API. DPLA sees itself as a platform, and developers have an opportunity to make use of the available data and explore cultural heritage collections. DPLA itself is open source, and has placed all of its code on Github already.
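To give a sense of how simple working against the API is, here is a minimal sketch of a keyword query like the Malcolm X example above. It assumes DPLA's v2 item-search endpoint (`https://api.dp.la/v2/items`) and its `q`, `api_key`, and `page_size` parameters as documented at launch; `DEMO_KEY` is a placeholder for the free key DPLA issues to developers.

```python
import urllib.parse

# DPLA v2 item-search endpoint (as documented at the April 2013 launch)
DPLA_ITEMS = "https://api.dp.la/v2/items"

def dpla_query_url(keyword, api_key, page_size=10):
    """Build a DPLA item-search URL for a simple keyword query."""
    params = {"q": keyword, "api_key": api_key, "page_size": page_size}
    return DPLA_ITEMS + "?" + urllib.parse.urlencode(params)

url = dpla_query_url("Malcolm X", "DEMO_KEY")

# Fetching and printing titles would look something like this
# (requires a real API key and a network connection):
#
# import json, urllib.request
# with urllib.request.urlopen(url) as response:
#     results = json.load(response)
# for doc in results["docs"]:
#     print(doc["sourceResource"]["title"])
```

The response is plain JSON, so the same query drops easily into a map, timeline, or classroom exercise without any special tooling.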
There’s a lot of promise behind DPLA already in these early stages of aggregating our digitized cultural heritage collections, promoting public engagement, pooling metadata, and supporting methods for accessing the material. I’m looking forward to seeing more and more collections brought into the fold, broadening our ability to browse, read, and create.
April 5, 2013
Next week, April 8th-12th, William G. Thomas and Patrick Jones will be hosting virtual sessions on History Harvest for anyone interested in learning more. I had the good fortune of participating in the first History Harvest in Lincoln, Nebraska, in 2010, and have watched the program grow over the last couple of years. And for all of the talk surrounding MOOCs, it’s refreshing to see alternative approaches to utilizing technology that better serve students (indeed, the Chronicle has called this a “new kind of MOOC”).
The plan for next week’s Blitz Week is to open participation in the strategic planning of History Harvest and to offer an overview of the project. If you’re looking to learn more about History Harvest, I’d check out the official site or this piece in Perspectives. Professor Thomas details the plans behind Blitz Week here and here.
I love the idea behind History Harvest and the accompanying research, pedagogical, and public history aspects embedded in the program. The community-driven aspect really resonates with me, and the technology available to us – digital cameras, audio recorders, digital archival platforms like Omeka – means we have tools available to conduct and deploy such projects. Pedagogically, History Harvest provides an opportunity to have undergraduates and graduate students involved in every aspect of the project. The work happens outside the classroom and gets students and the community working hands-on with the collection, preservation, and interpretation of history – plus, there’s an added advantage of equipping students with digital skills. I really enjoyed my experience with the project, and would readily welcome an opportunity to participate in future History Harvests.
March 26, 2013
[This post originally appeared at Digital History@Rice. Thanks to Caleb McDaniel for his invitation to participate in the course!]
Regretfully, I could not attend Caleb McDaniel’s video chat in his Rice masterclass on digital history. However, I’m happy I have the opportunity to answer the questions posed to everyone during the chat.
1. Introduce yourself and tell us a little bit about your individual research interests.
I am a historian of the twentieth-century American West and a digital historian. My Master’s thesis studied how mass media covered the Trail of Broken Treaties in 1972, a Native American protest that included marching on Washington, D.C., and occupying the Bureau of Indian Affairs for seven days. Early in my Ph.D. program, I was hired to serve as the project manager of the William F. Cody Archive at the Center for Digital Research in the Humanities. I’m currently working on a born-digital scholarly article about Cody and the Native Americans hired to perform in Buffalo Bill’s Wild West.
My primary research now is my dissertation, a project about technology and urban change in California and Utah. These two locations underwent the move into the knowledge economy between 1950 and 1970, most prominently by hosting two of ARPANET’s original four nodes. In particular, my project traces the spatial politics of social and cultural change in the second half of the twentieth century and the role the burgeoning digital economy played in that process. Activists, politicians, business owners, and residents shaped the political and economic structure of changing metropolises, highlighting contests over space at a moment when the information sector was exerting enormous change upon regions.
2. You now have a job that some would classify as “alt-ac,” a job that explicitly engages with digital humanities or digital history as a field. Can you talk a little bit about how you ended up in this position?
I was fortunate enough to be at a graduate school that takes digital history seriously, where I had the opportunity to work with big names in the field. My decision to attend UNL for graduate school was driven, in part, by my desire to engage with digital history. While at UNL, I had many opportunities to learn skills and participate in the planning, implementation, and management of digital projects – starting with my own research projects, then helping with larger research initiatives, and finally becoming project manager for the Cody Archive. As I mention below, I taught myself a lot of technical skills, but I also took all the courses UNL offered in digital humanities, which helped me understand the history of DH, what DH is trying to do, and the big theoretical and practical issues the field faces.
The experience I gained in graduate school put me in a position to be a competitive candidate for my current job. In January 2013, I joined Stanford University as an Academic Technology Specialist. My position is housed within Stanford University Libraries, but I am embedded in the Department of History, where I work with faculty and graduate students on integrating technology into their research and teaching and serve as an advocate/evangelist for digital humanities. I have a unique mix of tasks, from developing and designing digital projects to planning and running workshops, and I also have opportunities for classroom teaching. I’m still new to the position, but projects are already starting to coalesce in which I serve as a strategic ally, fostering each project’s vision.
For those looking into jobs in digital humanities, there are many opportunities to see the kind of work that’s out there and the skills employers expect from potential candidates. Sites like Digital Humanities Now, HASTAC, and code4lib frequently post job listings (I found my ATS job through DH Now, for example), and postings frequently pop up on Twitter as well. Graduate school doesn’t necessarily prepare you for these kinds of jobs in the way it’s structured – the assumption is that most graduate students will go on to professorial jobs, so programs are designed around teaching and research. Paying attention to job listings will give you a sense of what’s going on in digital humanities and the kinds of skills needed to do this work. And if you find you’d like to work in DH, try to find an ally in your department who supports your ideas and vision for your career.
3. How much familiarity with digital tools and/or coding did you have before starting graduate school? How did you acquire the technical computing skills you now have?
I’ve always wondered if I’m something of an edge case in digital humanities. I have a long history with technology and, all through high school, thought I would study computer science in college. But I was always interested in history as well, and history won when it came time to decide what I wanted to do in college. I liked the idea of teaching and research, deciding at the time that being a professor was what I wanted to do with my career. In college I did little with computers, aside from some work as a freelance web designer (and being a fervent listener of Leo Laporte’s netcasts).
Part of my acquisition of tech skills was intentional and stems from high school: I built computers, took computer classes, played with BASIC and Visual Basic, and taught myself HTML. But coming into graduate school, I wasn’t up to speed on modern languages or web development – I had never used CSS, had never written a line of Python or PHP, and had no idea what vim was. My mentor at my undergraduate institution talked to me one day about this emerging method of digital history, and I was intensely interested in this marriage between the two things I was passionate about. The University of Nebraska-Lincoln was my first choice in schools, not only because of its excellent reputation in the history of the American West, but also because it had an established presence in digital humanities. UNL is where I picked up most of my skills: I was thrown into the briar patch of PHP, had a crash course in MySQL, took Stephen Ramsay’s course on Ruby programming, was tasked with building an iOS app in thirty days (with no prior Objective-C knowledge) for a DH seminar, and enrolled in every available course in digital humanities. Most technical skills were self-taught; others came through formal instruction. The web is a wonderful resource when it comes to learning technical skills, and DHers frequently share tutorials and ideas with one another through blogs.
4. Talk a little bit about the place of digital humanities in the broader department or University you are in. What kinds of institutional support does the digital humanities have there, and what are the primary challenges (from an administrative or resource point of view) that you face?
I don’t know that I can speak to challenges quite yet since I’m still wrapping my head around things here at Stanford, but the support for digital humanities is fairly extensive. The Spatial History Project emerged originally from the History Department to support Richard White’s research for his new book. The department maintains a close relationship with what is now called the Center for Spatial and Textual Analysis (CESTA), which houses the Spatial History Project, the LitLab, and Humanities + Design. Stanford University Libraries also supports Digital Humanities Specialists who work on various faculty-led projects like the recently released City Nature. There is a lot of interest in my department in digital history, so I’m in a great position to help foster collaboration and ideas. In other words, there is a lot of support – institutional and otherwise – for digital humanities at Stanford.
5. Collaboration and “open access” seem to be “things digital humanists like” (to paraphrase Tom Scheinfeldt). Do you agree? Are these values specific to digital humanities, or do you think they should have a broader reach?
These are definitely things I like. Collaboration gets a lot of love in digital humanities, and rightly so – but it’s useful to bear in mind that collaboration has always been central to our work as scholars. Nothing we do happens in isolation – we work with colleagues, librarians, archivists, and so on. What digital humanities does, I think, is make these collaborations more visible while also expanding the sorts of collaboration we do. Collaboration is not only intellectual and methodological; DH adds the dimension of technological collaborative work as well.
I think about open access a lot, which stems in part from a desire to have our work be public-facing. I believe that anyone seeking information and knowledge should have the potential to access that material. Furthermore, if a scholar has received public funding, I believe there’s an obligation to release the work as open access – to give back as a public good.
Furthermore, if historians care about the quality of historical content on the web – where research often begins, not only among our students but among ourselves as well – then we need to contribute to the creation of scholarly work that’s accessible in that medium. “If historians believe that what is available free on the Web is low quality,” Roy Rosenzweig has noted, “then we have a responsibility to make better information sources available online.” Let’s do the work to get good content out there.
6. One of the topics that came up frequently in our Google Hangout was the future of digital publishing and peer review. From your vantage point, where do you think academic publishing is headed, and what seem like the most promising ways forward?
I have a longer blog post I’m working on to discuss this question and trace the evolution of digital history publishing, but I spend a lot of time thinking about this issue. The short version: digital publishing is stagnant among traditional publishers, both in terms of born-digital publishing and in the evaluation and promotion based on digital work. To that end, Doug Seefeldt, Alex Galarza, and I are trying to get the American Historical Association to address digital publishing and peer review. In response to an open letter we sent to the AHA, they’ve set up a Task Force to look into the matter. I have also been part of UNL’s NEH-SUG that has brought editors together to discuss the challenges of digital publishing and peer review.
Despite some experiments with digital publishing supported by traditional publishers – notably the American Historical Review’s publication of work by William G. Thomas and Edward Ayers and by Philip Ethington – very little has been produced over the last decade. Whether this is because scholars aren’t doing much digital work or because publishers have been slow to address the publication of digital history, I’m not yet sure. There are some bright spots: I’m working with a journal publisher right now to publish a born-digital piece of historical scholarship, for example. Other publishers I’ve talked to have expressed interest in digital work but aren’t quite clear on the best way forward. And there are efforts like PressForward and Anvil Academic that are exploring alternatives to traditional publishing. I’m optimistic that we’ll figure it out, but there are a lot of open questions yet: How does the business model work? What does digital scholarship look like? How should it be peer reviewed?
7. If you could advise an undergraduate or fellow graduate student who is interested in digital humanities or digital history about “where to start,” what would you recommend? (related: where not to start?)
As I mentioned on the Gradhacker #alt-ac podcast: get online. I meant it there in the context of #altac, but it applies to digital humanities as well. The community surrounding digital humanities on Twitter is an amazing group of people who are more than willing to offer ideas and suggestions. Check out the #digitalhumanities tag on Twitter as well to find people. Follow them on Twitter, follow their blogs, see the sorts of projects they’re working on.
If you’re looking for things to read, check out Digital Humanities Now and look over some syllabi for books and articles. And there are many amazing digital history blogs out there to follow that will introduce you to all sorts of exciting projects and ideas.
Though not necessarily specific to DH, a general word of advice: go buy a domain of your own. Don’t leave your online identity to your department webpage or Facebook or Google. Own your identity, start blogging, start sharing ideas, and learn how to set things up on a server securely. Having a server of your own also means you have the flexibility to install databases and learn platforms like WordPress or Omeka.
February 24, 2013
For the last couple of years here I’ve tried out the link blogging style popularized by John Gruber’s Linked List. I’m not alone – many other bloggers likewise use this style to add short commentaries to the links they share with their readers. In the case of John Gruber and Jim Dalrymple, the linked blog is part of their business. For me, it was an opportunity to share things I liked or found interesting.
I enjoy the linked-post blogging style and get tremendous value out of the sites that use it effectively. However, I don’t know that it works for my site, and my site analytics confirm that suspicion: most people don’t come to my site because of a link post, they come for the content.
Chris Bowler, who recently migrated blogging platforms, reevaluated his linked posts and wondered about their utility on his blog. He writes:
I wrote linked posts for two reasons. To share what interests me and to bring attention to the work of others. It’s clear that my Twitter account is a much better place for this sort of sharing, while my own site is a place for content created by me. Content that, God willing, brings value that is more lasting.
My own thinking is going in the same direction. Twitter already serves as a great space for conversations with colleagues and friends, but it also works as a place of discovery. Twitter makes much more sense as the place for me to share items, and it’s dead simple with the Twitter bookmarklet (unlike my blog, which requires me to futz with Jekyll, write the post, fix any formatting, deploy the update, and double-check that everything worked). Plus, I’ve come to really enjoy using Paul Ford’s Save Publishing bookmarklet, which looks for and grabs strings of text that fit the Twitter character limit. I’m sympathetic to Shawn Blanc’s argument that tweet lifespans are incredibly short, but for a one-off item of interest Twitter works perfectly.
However, I might find a link I feel deserves a longer comment than 140 characters allow. In such cases, I’ll still share the link here, but not in the format I’ve been using. Instead, I’ll create content around the link – what Ben Brooks called the Kottkeian linked list. Everything that appears on this site will be an article, though some articles might be specifically about a link. I will still keep the link in question prominent, but no longer in the title as it is now. The old-style linked posts will remain as they are, as will the Archive, which separates out all posts that are explicitly linked items from the content items.
I’m not bound to the DF-style linked list. I make no money off this site, and I’m not using linked blogging to maintain consistent traffic (and revenue). Writing here is done purely for my pleasure, but I want the content to have lasting value, and I’m not sure sharing links the way I had been supports that goal. I want this space to support my writing and to use this environment to continue learning how to write.