Emergence
New work in new media by Deniz Tortum
A Blurring of the Real: Damon Packard, Documentary Archival Footage and Generative AI
A crowd of people, animals and AI-generated beings sleeps in a movie theater. Faces are lit by the explosions of an atom bomb on screen. All we hear is the loud snoring of hundreds of people and the sounds of their bodies moving; a nondescript rodent crawls on the floor. We cut to the 70mm IMAX projection room, where the projectionist is also sleeping. The camera moves very slowly, but each shot morphs within itself at a faster pace. These two visual rhythms are layered on top of each other. The explosions on screen intensify; a panic sets in. We cut outside, and bombs start exploding in the city, submerging the theater and the audience. A YouTube comment below the video gets to the heart of the piece: “Snoring through the apocalypse.”
I came across this AI-generated film when a friend logged it on Letterboxd. Its haunting title, The Sleeping Audience, caught my attention. It was my first encounter with the underground filmmaker Damon Packard. Packard has been making hard-to-categorize, totally independent films since the 1980s, reaching his audience through self-distribution as well as prestigious festivals such as the New York Film Festival. His films have been called many things, including “experimental,” “cult,” “deranged” and “revolting,” but critic Michael Sicinski’s term “browless”—meaning that his work embraces both high- and low-brow cinematic history—best conveys the overwhelming fullness of Packard’s work.
In another AI-generated video posted on Packard’s YouTube channel, Studio shuts me down, a filmmaker (Packard himself) walks into a studio executive’s office. “No Views You Lose,” reads one LED sign; “Control Algorithms Control the World,” reads another. Studio executives go on a rant about how nobody is paying attention to Packard’s videos and how he is disgracing himself by creating them. Their voices are pitched distinctly, which creates its own sonic composition. Then we cut to shots of huge halls full of computers and their operators as Queen’s “Who Wants to Live Forever” plays on the soundtrack. After this final scene, the film cuts to a petition card, followed by an AI-generated image of a little girl offering a flower: “Please donate to Packard Fund for Emergency Car Repairs.” The sharp distinction between techno-utopian AI imaginaries and precarious everyday life is accentuated.
I reached out to Packard to have a brief conversation about his work. When I called him, he had just gotten work done on his car and was on his way to get some food. “It’s become an obsession,” he says about working with AI. He’s been posting AI videos on YouTube for nine months and has been experimenting with many of the platforms: Runway, Pika, Leonardo and Midjourney. (OpenAI introduced its new text-to-video model, Sora, the week after our conversation.) I ask him whether he has any insights about creating with AI. “You get different results daily with the AI platforms,” he says. “Sometimes, I can’t get it to do certain things, whereas some days it will give me more interesting things. It’s erratic and unpredictable. It’s like a living thing and has a mind of its own, and its mood changes daily.” He describes his work as almost like a first contact with unknown entities—he’s trying to get to know them. Packard is working with generative AI (GAI) early in its lifecycle, when it is still relatively open: you can generate most things, it’s not prohibitively expensive, and it isn’t licensed only to certain studios. In the future, after copyright regulation, successful topic or material censorship, or other possible constraints, we might look back at this moment and find it was a unique time to be working with GAI.
Packard has posted dozens of GAI videos. Among them are Relationships with Algorithms, in which a popular online video about male loneliness provides the base material as the visuals turn into Blade Runner-like dystopian images; and A Day in the Life of John Carpenter, a hellish masterclass on making movies. Carpenter lectures from his house, filled with cigarette butts that are slowly multiplying and coming alive. “I get to sit here in the comfort of my home, remote-directing,” he says. The video plays like a nightmarish essay on the future of filmmaking.
In all of Packard’s videos, the images constantly morph and betray a GAI flicker, the current glitch of the system. This is a new movement, a new rhythm that wasn’t present in the cinematic language before. There is something very contemporary about Packard’s films. They are always ahead of you, changing before you realize what you’re seeing. You become overstimulated, helpless, confused—almost like the feeling of doomscrolling. The worlds are always collapsing. The found footage is mixed with the generated, the real is elusive, the films are sleep-talking.
Packard’s method of recontextualizing and remixing pop culture fits well with how GAI systems work. Packard and the logic of GAI complement each other. His intuition and understanding of AI push it into new territory. His films, authentic and personal, offer a kind of underground AI (or counter-AI) film language, as well as a new way of found-footage filmmaking.
A similar tension between found footage and AI resonates today through parts of the film industry. In late November, the Archival Producers Alliance (APA) released a statement on the potential hazards of using GAI in documentary films. The statement warns against the generation of fake documents and historical imagery.
In a conversation, Rachel Antell of APA elaborates on the statement: “We’ve seen cases of documentaries where archival materials that are found have been integrated with fake archival. ‘This is Martin Luther King on this day and place,’ [the film says], but it wasn’t him in that image.” It is striking to see the same subversive methods Packard uses deployed, without hesitation, in mainstream documentaries.
The APA statement also warns that “generated material presented as ‘real’ in one film will be passed along—on the internet, in other films—and is in danger of forever muddying the historical record.” I was especially intrigued by the phrase “muddying the historical record.” If GAI can indeed generate historical-looking records—and the internet becomes flooded with this material—then how are we going to trust the archives?
“Welcome to the rabbit hole,” Stephanie Jenkins, another member of APA, says. Antell adds, “Very soon, there’s going to be a huge deluge of AI images. Local news [broadcasts] also have smaller budgets. What happens when they start generating footage?” Since 2022, AI has produced 15 billion images. It took photographers 150 years to produce that many.1 The first deluge has already come, but there will be more.
Speculative future scenarios start floating in my mind. Maybe the digital will no longer be a form of archival storage; maybe we will keep the original sources as physical media, in a form that AI can’t corrupt. Maybe we will implement blockchains or other secure recordkeeping systems to keep track of the original materials. Recently, Pakistan’s imprisoned former prime minister generated his campaign speeches using AI. This GAI footage influenced voters and presumably the outcome of the election; paradoxically, the GAI footage itself is now part of the historical record. This further complicates the matter.
To chart a path forward, APA is writing a set of draft guidelines for best practices and guardrails and expects to present them at IDA’s Getting Real in April.
Even if we manage to put guardrails on the use of AI in documentary, it’s increasingly—and easily—being used to create imagery for commercial purposes. Nicolas Neubert, who works on the creative staff at Runway, tweeted a promo video he created for a fabric brand. The 39-second video includes a lot of beautiful, flashy, photo-realistic imagery of pillows in different settings. He wrote that the video took one hour to make.
Of course, there are no actors or complex movements—but still, the footage looks well lit and well composed, and for now that is good enough to attract attention and create advertisements. We will probably see a lot of commercial productions switching to AI in the next few years. But what will the counter-AI aesthetic be? What will the new “authentic” visual language be?
Here’s an optimistic thought: Commercial imagery is in the most immediate danger of being replicated. Owing to the working logic of AI, what has been created most often is what is easiest to replicate. “Midjourney prioritizes commercial imagery,” Alan Warburton says in his insightful video essay, The Wizard of AI. Packard shares a similar insight: “If you want to generate any aspect of a famous film, it’s fascinating to see how [well] AI can generate it. The more famous it is, the better it will do.”
What GAI might set free is avant-garde imagery. People will still make images that are unseen—that are unexpected and rooted in their time—either with AI or without AI. The commercial imagery of today, on the other hand, might be easily replicated and become less and less valuable. It will come down to the question, What images are hard to make?
“[AI-generated images] are never going to replace the subtleties of real actors and reality of live-action film,” Packard believes. “There’s no replacing the real deal.” I start thinking about experimental filmmaker Peter Rose’s 1981 film The man who could not see far enough. In the film, Rose climbs the Golden Gate Bridge with a 16mm camera in hand. The scene, lasting around 10 minutes, is breathtaking. This will probably never be replicated by AI: having a body, being mortal, being vulnerable. AI might force new images to have a stronger relationship to presence, to time commitment, to being in the flesh, whether they are created with AI or without it.
1 I learned about this from Alan Warburton’s video essay The Wizard of AI.