
Emergence

New work in new media by Deniz Tortum

A New Kind of Cinematograph: New Filmmakers at the AI Film Festival

A still from Expanded Childhood: a small boy in a bathtub along with several small dogs, some with glowing red faces.

We see a childhood photograph nearly centered in the frame on a black backdrop. In the photograph, a boy of three or four years of age smiles inside the bathtub with his two dogs. A few seconds later, the black screen around the photograph becomes part of the photograph; the image expands and the rest of the room is revealed. Everything looks familiar and slightly off at the same time. The tiles are slightly skewed, as if the wall has melted. The shape of the bathtub looks normal but perhaps larger than usual. There are suddenly three more dogs in the bathtub, dogs that uncomfortably tilt slightly more towards “weasel.”

Next, we see another photograph, again placed in the middle of a larger frame, leaving black screen at the margins. In this photo, three children pose in front of a fireplace in their Halloween outfits. The child in the middle is clearly a firefighter; the other two are also in costume, a cheerleader and a cat burglar. The image expands once again, and the black screen is filled in. Now there are two more children on the left side of the photograph. One of them wears a blanket to become a ghost with a strange, narrow face. The other is probably a pirate, but his face is blurred, his eyes blank; maybe he is a ghost without a blanket. Five carved pumpkins and a black cat also appear in the room, their slightly indeterminate faces eerily similar to that of the child in the pirate costume.

These scenes of expanding photographs appear in Sam Lawton’s Expanded Childhood. The film uses an artificial intelligence technique called “outpainting,” a version of which—“Generative Fill”—was recently added to Adobe Photoshop. The AI image model analyzes the filmmaker’s childhood photographs, predicts what content could be just outside the frame and summons the world back into it. These photographic memories are expanded through the use of AI, but as they expand they also become more generic and lose their specificity. They become part of a larger whole, the average of the past. 
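For the technically curious, outpainting of this kind can be approximated with an off-the-shelf inpainting model: paste the photograph onto a larger canvas, mask the empty border and let the model predict what that border should contain. Below is a minimal sketch using the open-source diffusers library; the checkpoint name, file names and prompt are illustrative assumptions about the general technique, not Lawton’s actual pipeline.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Load an open inpainting model; outpainting is just inpainting
# where the masked region is the border around the original image.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

# Place the childhood photo at the center of a larger black canvas.
photo = Image.open("childhood_photo.jpg").convert("RGB").resize((256, 256))
canvas = Image.new("RGB", (512, 512), "black")
canvas.paste(photo, (128, 128))

# Mask convention: white where the model should invent content,
# black where the original photograph must be preserved.
mask = Image.new("L", (512, 512), 255)
mask.paste(Image.new("L", (256, 256), 0), (128, 128))

result = pipe(
    prompt="a child in a bathtub with dogs, 1990s family photograph",
    image=canvas,
    mask_image=mask,
).images[0]
result.save("expanded_photo.png")
```

The prompt matters: the model fills the border with whatever its training data associates with those words, which is exactly why the expanded rooms drift toward the generic.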

Expanded Childhood is part of the AI Film Festival (AIFF), organized by Runway ML earlier this year. There are 10 finalists in the slate, and they can all be watched online. These films use AI tools and techniques but also reflect on AI as a phenomenon. I found it refreshing to watch this selection because it provides examples of filmmakers using AI as a new filmmaking language at a time when AI hype is a storm that feels impossible to chase. 

Twitter is overrun with threads and articles about how AI will change every industry, how we live and how we interact. Every day, “10 huge things that happened in AI this week” lists are published to viral popularity, followed by memes reacting to the absurdity of those lists. This dynamic also affects people offline, and it runs the gamut from pessimistic to perhaps unrealistically optimistic: a friend of mine wondered aloud whether it is still worth writing novels after ChatGPT. Another friend is excited about the prospect of never watching a bad movie again; he hopes he will be able to tell his AI TV, “Generate me a sci-fi film similar to Alien.” Even more personally, my partner and I are starting to think about how to raise our child in the time of AI. Will we have to change what kinds of skills we value and encourage her to learn?

Sentiments are not that different in the film industry. Recently, at the Cannes Film Festival, Sean Penn championed the regulation of AI. The Writers Guild of America is demanding that limits be put on the use of AI in scriptwriting. Currently, AI is being pitched as a replacement for many departments in filmmaking, from writing to production. Some examples: you can co-write scripts using large language models and tools such as Google’s Dramatron; you can feed a script into a tool like Largo.ai to predict its return on investment and likelihood of being greenlit; you can create storyboards using image generators such as Stable Diffusion; you can create faster, more cost-effective VFX using AI tools for inpainting and compositing. In the near future, whole films may be generated and customized with AI, to the delight of Twitter thought influencers, a delight that seems seasoned with some schadenfreude as they watch other creators left at the mercy of algorithms. As usual, the film industry’s goal is to lower costs and speed up the process, eliminating as much human labor as possible. In other words: keep doing what we were doing, but cheaper, faster and with even more data.

Writing about AI right now feels like live-tweeting from an event; every observation risks being outdated the moment it is published, as ever more tools emerge to streamline existing processes more accurately and efficiently. The more enduring questions concern what AI can do in filmmaking that wasn’t already being done. What if AI can be its own department in filmmaking? Asked differently: what if AI is more akin to a camera, a new cinematograph, a device in its own right for creating moving images, with its own affordances, techniques and language?

We see hints of this at the AI Film Festival. A striking selection, Laen Sanches’s PLSTC, fills its short runtime (one minute and 38 seconds) with hundreds of scenes of sea creatures, each one dead and stuck in plastic. All of these sea creatures were created using Midjourney, an AI image-generation tool. The species extinction caused by climate change mostly goes unnoticed, taking place in environments beyond our immediate perception. PLSTC uses computer technologies to conjure the countless animals killed by pollution and plastic, deaths never witnessed by a human eye; the film uses AI to visualize the unseen harm we cause.

This method is not that different from our current tools for understanding and perceiving climate change. The climate crisis is primarily revealed to us via simulations; it is understood through our tools of measurement, collected weather data and computer simulations that assign probabilities to future events. We rely on computational media to predict the future. Similarly, we can use AI to make films about a future that has already started to form.

In Jordan Rosenbloom’s Original Voice, a young filmmaker sitting in front of a computer is faced with an empty prompt box. Below the prompt box is a button that reads “Generate Film.” Anything can go into that box. Faced with infinite choices, the filmmaker writes “create a short film with an original voice.” The AI then goes on a journey of image generation, a stream of (artificial) consciousness. We see images that pertain to the word “voice” in all its different meanings: the inside of a larynx, boomboxes, an opera hall, the civil rights movement, a bird flying above an urban beach.

The prompt generates scripts, images, worlds. Filling the prompt box even has a new name: prompt engineering. As a skill, prompt engineering is about talking to the AI model successfully, which requires knowledge of the image world the model was trained on and of how to ask the right questions to steer its output in the desired direction.
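In practice, prompt engineering is an iterative loop: generate, inspect, reword, generate again. Here is a minimal sketch of that loop with an open text-to-image pipeline; the checkpoint name and the prompts are illustrative assumptions, not any of these filmmakers’ actual workflows.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load an open text-to-image model; any diffusers-compatible
# checkpoint would work the same way.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Each rewording steers the model toward a different slice of the
# image world it was trained on; the craft is in the phrasing.
drafts = [
    "a film still with an original voice",
    "film still, the inside of a larynx, cinematic lighting",
    "film still, a bird flying above an urban beach at dusk, 35mm",
]
for i, prompt in enumerate(drafts):
    image = pipe(prompt, num_inference_steps=30).images[0]
    image.save(f"prompt_draft_{i}.png")
```

Comparing the drafts side by side is the “engineering”: the model never explains itself, so the only feedback loop is the images it returns.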

Shan He’s I want 1000 Rabbits also uses a prompt box as a central element. As the text in the box changes, the images in the background change as well. We see constantly changing “happy rabbits,” “vegetables and meat, for hotpot,” “gifts,” “snacks,” “fireworks,” “friends,” “happy friends who are sitting around table, eating hotpot, in the Chinese new year eve.” The film suddenly cuts to black and ends with a curious and striking line of text: “one might be better.”

What looms in both of these films is the feeling that there is a vastness of possibilities in AI. There are infinite decisions, but there is no inherent purpose or meaning. Sitting in front of a prompt box is a magnified version of the blank page: when there is so much freedom, how do we find meaning in AI? If I were to give this feeling a name, I’d call it computational existentialism. The computer can create and generate anything, which results in a flattening of words, ideas and concepts. Snacks, rabbits and friends all have the same ontological weight in these systems; there is no cultural, historical or anthropocentric hierarchy.

In Landscape, directed by Kyle Goodrich, an infinite pan made possible by AI blends 360° landscape videos seamlessly into each other. With every 180° pan, we move from one place to another; without even realizing the exact moment of departure, we find ourselves in a supermarket, a basketball court, an arcade and a beach. Landscape flattens geography. We can be anywhere, anytime; places can be sewn into each other; there are no spatial limitations. However, the question of how we find meaning and ground ourselves persists here as well. We seem to be able to generate everything, yet “one might be better.”

This year’s AI Film Festival has a diverse selection, and it is refreshing to see such different approaches curated together. Beyond their shared use of AI tools, there is a camaraderie among these films. We sense the feelings of filmmakers and artists as they respond to developments in AI. What is becoming of cultural work? What is the role of the artist at this time? Why keep making films or writing novels in the time of ChatGPT? How does computational culture shape our thinking, our hopes and our concerns?

AI is most interesting not when it is used to recreate existing work more cost-efficiently, but when it is used to grapple with contemporary questions: to make films about ever-changing technology, the climate crisis, our unknown future. Could the new cinematograph offered by AI be the tool most capable of articulating concepts for phenomena that are still taking form?
