
Emergence

New work in new media by Deniz Tortum

Subtle Inconsistencies: Filmmakers and Generative AI in 2024

An AI-generated image of people standing on a cliff under black clouds, from A Tree Once Grew Here

by Deniz Tortum
in Columns, Filmmaking
on Sep 18, 2024

In Orhan Pamuk’s novel The Black Book, there’s a story about a mannequin maker and his underground workshop. The craftsman believes that after the introduction of cinema, people began to lose their natural gestures and now simply imitate the movements and behaviors of actors they see on the big screen. To preserve natural and native mannerisms, he undertakes an immense archival project: He makes mannequins of people performing small gestures in great detail.

I’m curious what the craftsman would do faced with generative AI.

AI film festivals and competitions are growing in popularity. Last May, the second annual Runway AI Film Festival took place in New York and Los Angeles. The first edition of AI Film Fest Amsterdam took place in June at the Eye Filmmuseum; the first Reply AI Film Festival recently held screenings at a Mastercard event within the Venice Film Festival. Established festivals have also introduced AI sections. In partnership with OpenAI, this year’s Tribeca Festival hosted Sora Shorts, a program that commissioned five short films created using the generative AI tool Sora by directors such as Nikyatu Jusu and Ellie Foumbi. Bucheon International Fantastic Film Festival (BiFan) started an AI competition section, and Locarno Film Festival screened a feature this year, Telepathic Letters, whose images were generated by AI.

Festivals and art institutions never like to lag behind technological developments, while corporations routinely use festivals as tools for marketing and research, either by sponsoring sections within existing festivals or organizing their own. Worlds mix in these emerging media sections: For example, the jury of Runway AIFF includes artists, directors and producers, as well as representatives from Documentary+, Coca-Cola and NVIDIA. This blend signals money and opportunity, especially to creators seeking funding. These opportunities to explore new technologies can be productive, but they are almost always short-lived, falling short of creating a scene or truly helping artists to hone their craft in the new technology.

What these festivals are good at, though, is creating temporary labs, screenshots of a moment in time that crystallize and memorialize developments in the field. Watching the finalist films of Runway AIFF, AI Film Fest Amsterdam and Reply AI (most of which are available online) gives a sense of a possible future but also reminds me of past artistic experiments with new technologies.

In 2015, when I was working with VR at the MIT Open Documentary Lab, we got an Oculus Rift. Using this noncommercial headset and the open-source SDK for the Unity game engine, we built many experiments—most of them physically uncomfortable. We’d code a simple scene so that when the user turned their head to the left, they would see what was on their right in the virtual world; when the user stood up, their body would move down; we’d show the user separate images in each eye. We were trying to reconfigure how the body interacts with its environment and modify the affordances of the human body. These experiments were discomforting, but that wasn’t the point. We were trying to find the boundary conditions of the medium, to figure out how it differed from what came before. However, our experimentation was stymied when the commercial Oculus Rift was released. The Oculus app and new SDK for Unity made it much harder to create experiences that reconfigure bodily affordances: the SDK-enforced UX of the virtual body required it to imitate the physical body as a way of providing comfort to the user. Short-lived experimentation was followed by a period of standardization; the medium had normalized.

I was reminded of that trajectory when watching this year’s AI festival films. Last year, reviewing the first edition of Runway AIFF in this column, I asked whether AI can be a new kind of camera, creating its own language and type of film(s). This year, however, it felt like most of the films used AI as a tool to make independent films and animations that resembled higher-budget counterparts. This is not a bad thing at all. It might be less exciting for me personally, but it signals that AI tools are rapidly becoming part of the industry and being integrated into the production process. In my correspondence with the filmmakers, this was also the biggest thread.

The winner of AI Film Fest Amsterdam, Dragged Holidays, is a fast-paced film about a queer performer visiting their conservative family for the holidays, which plays like a high-budget narrative with the addition of AI artifacts that comment on questions around identity. The filmmaking team utilized Midjourney for image generation, Photoshop for refining frames, Runway for AI-driven animation, ElevenLabs for voiceover and Epidemic Sound for the soundtrack. Without AI tools, Dragged Holidays would have required a much higher budget. In our correspondence, co-creator Paula Fernandes emphasized this: “We’re living in incredibly exciting times, where we can not only imagine good stories but also bring them to life in ways that were previously out of reach for small, budget-limited projects.”

A Tree Once Grew Here, a finalist at Runway AIFF, is a beautifully crafted mixed-methods animation about the degradation of the environment. When I first watched the film, I was unable to tell how AI was used in the process. Director Johnnie Semerad kindly explained his team’s workflow, which started with a traditional CG approach: storyboarding, creating animatics, building backgrounds. They manually constructed the background environments, but everything changed when they began to use Midjourney and Runway to generate and refine video backgrounds. “What once would have taken a week to create now took mere hours. Suddenly, I found myself producing three intricate backgrounds daily, revolutionizing our workflow,” says Semerad. “Projects that once required years of saving and cultivating industry goodwill can now be completed in a matter of months. We’re now venturing into television production—a realm that was financially out of reach just two years ago due to the sheer volume of animation required for a series. […] We’re on the cusp of a renaissance in animation.”

Animitas, another Runway AIFF finalist, tells a fictional story through photographs of roadside memorials in Chile. The film’s ambience and minimal use of AI create an unusually serene and peaceful tone. Director Emeric Leprince doesn’t think AI will become a genre of its own, but that it will be integrated across all areas of filmmaking, he says, by “assisting cinematographers, script supervisors, VFX supervisors or directors.” While making the film, he had “complete freedom to modify individual elements without needing to go through third parties.” All the creators I talked with voiced a similar sentiment: AI makes their process much more frictionless.

AI might be changing the production process in more structural ways as well. Using a term native to software development, Leprince commented that AI is a very “agile” tool. AI filmmaking has come to resemble both software development and animation production. The strict order in live-action production of writing, shooting and editing can be reconfigured with AI tools: one can shoot, edit, shoot, edit, write, edit, shoot. Consequently, this convergence of live-action, animation and software production methods may require directors to learn product-management skills as well.

Although less common, some films are using AI tools not to change production methods but to modify cinematic space and rhythm. Two of Runway AIFF’s award winners, Get Me Out and e^(i*π) + 1 = 0, experiment with new spatial aesthetics. They both utilize neural radiance fields (NeRF), a 3D reconstruction technique based on deep learning. Stitching together NeRF scans of different places and scenes and exploring them through a virtual camera, these films create a space suspended in time that expands, connects with other places and is reconfigured virtually. The effect is like a “bullet-time” scene with six degrees of freedom (6DOF), where the camera can move anywhere without any regard to gravity while freely changing the focal length, exposure, depth of field and focus. These moments render the physical world virtual—the gap between a documentary and Transformers is closing.

Another aspect of AI that filmmakers are using creatively is the inconsistency of AI image generation, such as frame-to-frame changes and scenes that don’t follow each other in perfect unison. In Dragged Holidays, the team uses subtle inconsistencies in character design (the characters’ faces slightly changing from moment to moment) to represent their variety of personas, thereby enhancing the film’s overall theme. The inconsistency and overwhelming imagery of generative AI are core to the visual design of Hint, one of the finalists of the Reply AI Film Festival. A mix of virtual production, photogrammetry and generative AI, the film starts in a destroyed city and flies through a cityscape filled with images of destruction, opulence, consumerism and popular culture. Watching it feels like doomscrolling at 1000x speed.

The impossibly fast pace of Hint makes me think of my own relationship with AI imagery. When watching AI films, even slow-paced ones, I have a hard time following their stories. I suspect that’s due to the inconsistency and generic quality of the images, which don’t always speak for themselves. Is it because these images are not only visual but also text-based prompts—language first? That they refer to the body of scraped images they are trained on? The images ask to be translated into some sort of language to be decoded, like bits. However, the limits of my human attention might not allow me to decode this language. So I am left with a new relationship to these images of inconsistency and rupture.

In his Norton Lecture at Harvard University in 2018, documentary filmmaker Frederick Wiseman said that he is not making films but building a large catalog of human emotions and behavior. AI, on the other hand, takes such catalogs and averages them. What is consistent in generative AI imagery is the human face—the emotions, the gestures—which are generic, over-the-top and resemble one another. Wiseman and his work remind me of the mannequin maker in Pamuk’s The Black Book, preserving the visuals of our current world before it is filled with infinite generative imagery. In one of the futures of cinema, there is a world where making films in the real world is a romantic, futile but noble act.

© 2024 Filmmaker Magazine. All Rights Reserved. A Publication of The Gotham