In Cortex 133, Myke Hurley and CGP Grey have a great conversation about art, artificial intelligence, and their own context as creators. Hurley relays a story of getting solicitations from AI-voice companies who can recreate Hurley's voice and have the computer then "read" ad copy. And CGP Grey notes that there are, by his estimation, videos he's seen on YouTube that were certainly AI-generated and designed for viral engagement. But they also raise a lot of important questions about artistic creativity – why commission an artist when a computer can do just as well? What do we start to lose when human creativity is outsourced to machines? Given how fast models like Stable Diffusion and DALL-E are developing, we're probably not far off from a seismic shift in creative work.
I was particularly drawn to the idea Hurley pointed to about how realistic this fake, AI-generated photograph of a staged moon landing appeared. He even says that if someone showed him that photograph and told him it was real, he'd maybe believe it.1 And it leads me to wonder, as historians, how we can expect to deal with these challenges in the future. There's already enough challenge in the social media world over what's fake and what's not, a trend that's accelerating with deepfakes and manipulated photographs. And we're aware of just how bad those history photograph accounts are at attribution, how effective they are at spreading disinformation, and how much they undermine the work of historians. What do we do going forward when photographs and paintings can be not just manipulated, but entirely unreal? Are we prepared to correct the record, and how will we confront these challenges in a world that is increasingly hostile to the humanities?
Perhaps it's easy to dismiss this concern as me being Too Online. After all, it's unlikely that an AI-generated "historical photograph" would ever make it as far as a printed book, nor am I particularly worried about our own expertise in evaluating sources (which, after all, is something we excel at). In other words, AI-generated historical content may prompt some changes in how we teach and train the next generation of scholars in using digital sources, but I suspect our professional standards will keep fake photography from influencing the research we do.
But this should still concern us historians. As the American Historical Association has consistently found, people often turn to the web for historical information. It's a primary way for people to learn about historical topics, and it's not hard to imagine a scenario where a fake photograph ends up on Wikipedia and reshapes a collective readership's understanding of a historical topic (after all, there's a history of, well, fake history on Wikipedia). A viral fake photograph can spread quickly, and social media is terrible at issuing correctives; by the time one appears, the lie has already spread halfway around the world.2
I've long argued that historians have a role to play on the web. Sure, there's no shortage of bad history to be found on library shelves, but the discovery and distribution of historical disinformation online is an order of magnitude larger. We cannot let the History Web be defined by others whose ethics, standards, and rigor don't adhere to our shared professional discipline. The web makes the distribution of information so much easier, which is both its greatest feature and its greatest challenge: anyone can put anything online and claim it as True.
To be clear, I love the web.
I wrote about that love of the web not all that long ago. I work professionally on the web and love the chance to get to shape it. I work with organizations that are likewise deeply concerned about the health of the Internet but believe in its promise. So while I have concerns at times about the ease with which disinformation can spread online, I remain committed to the web as an idea. What I hate is what large companies have done to the web.
I don't think it's any great stretch of the imagination to suggest ways that AI-generated historical photographs could be abused in the name of white supremacy, patriarchy, or anti-LGBTQ+ propaganda (in fact, we already see this in digitally manipulated historical photographs). There's a moment quickly approaching where historians will have to confront this: through our teaching, through our writing, through our public engagement. I have to admit that this is the sort of thing that keeps me up at night. At RRCHNM, we're on a mission to democratize history. Within our constellation of work, we produce several projects that digitize primary sources for online use in classrooms, research, and public history. But I can envision a time when AI-generated historical photography could be weaponized against the kind of vetted and sourced materials that we provide. Don't trust that site, the argument could go, that's just run by the woke liberal elites. We're showing you the real history. Reductive, maybe. But those kinds of arguments are already being used to disparage the humanities.
I thought back on the Cortex conversation after reading this piece in the Atlantic:
What is so tiresome about the fear of AI art is that all of this has been said before—about photography. It took decades for photography to be recognized as an art form. Charles Baudelaire famously called photography the “mortal enemy” of art. The Museum of Fine Arts in Boston, which was among the first American institutions to collect photographs, didn’t start doing so until 1924. The anxiety around the camera was nearly identical to our current fear of creative AI: Photography wasn’t art, but it was also going to replace art. It was “mere mechanism,” as one critic put it in 1865. I mean, it’s not art if you’re just pushing some buttons, right?
As Alan Jacobs points out, this is lazy pseudo-thinking, and he's right to note that photography didn't kill art – but it did change art, in ways both good and bad.
But AI-generated material, whether voice re-creations, deepfakes, artwork, or photographs, is fundamentally different because its creation is different. Certainly an artist can take inspiration from Miyazaki and decide to create a Miyazaki-style Kermit the Frog. But that comes with its own idiosyncrasies of design and deliberation and thought and creativity and experimentation that go into the creation of that artwork. Asking a machine to do the same, as a toy, can be fun – but a near-perfect, mathematically rigorous representation of another's artwork that's only achievable with machine precision feels like something entirely different. Even more ethically disturbing, as CGP Grey notes, is the creation of artwork in the style of a famous DeviantArt artist who died of cancer. Neither he nor his family consented to it.
A company just decided they liked his artwork, and decided to keep creating it themselves.