
Artistic Outputs: Filmmakers and Production Designers on Using Generative AI

Concept art from Wrongdoer, Natou Fall, image from Midjourney

“An ominous slender figure in the foreground, a gay couple kissing in the distance, alleyway, two-point perspective, at night, a bar crowded, 8K, cinematic, cinematic composition, in the style of Jean-Luc Godard, rendered in the Gaspar Noe engine.”

Via Zoom, Natou Fall shares her screen with me, allowing me to look at the hundreds of images she’s created using text prompts in the generative AI program Midjourney. The image resulting from the prompt above is an eerie one of a silhouetted couple holding hands, both wearing fashionable flared jackets and standing in a sparse, neon-accented nightclub with a figure lurking in the shadows. The whole image is bathed in an orange-red aura that is indeed reminiscent of Noé’s work, particularly the lighting scheme of the famous tunnel sequence in Irréversible.

Fall scrolls down her wall of images in the Midjourney Discord, explaining that they're being created as preliminary storyboards for Wrongdoer, a film project in its beginning stages. She's using the images to explore different visual styles, character sketches and production design ideas. The "Jean-Luc Godard" style prompt results in some minimally furnished but quite lovely apartments, such as ones you might see in a Godard picture like A Woman Is a Woman.

Fall zooms in and out of individual images, explaining how she’s added suffixes to her prompts to force Midjourney to process them using its latest beta, and she shows me some hilarious misinterpretations of her instructions. “I tried referencing Agnès Varda,” she tells me, “but instead we get Agnès Varda in the space, rather than references to her film work.” (Squatting in the rear of one of those apartments is, in fact, a short woman with a bowl haircut.)
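(A quick note on mechanics for those who haven't used the program: Midjourney prompts are typed into Discord as slash commands, with parameters appended as double-dash suffixes at the end of the prompt. A hypothetical example, not one of Fall's actual prompts: "/imagine prompt: sparsely furnished Parisian apartment, two-point perspective, cinematic --v 4," where the "--v 4" suffix directs the prompt to the newer v4 model rather than the program's default.)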

For Fall, a Paris-born, Senegalese-American interdisciplinary artist and creative director currently on the faculty of architecture school SCI-Arc in Los Angeles, exploring the possibilities of programs like Midjourney has been a natural outgrowth of previous work using AI text generators. For a recent short, Drinking Seawater, she fed descriptions of her dreams into an AI text GAN (generative adversarial network) and married the resulting outputs—"really strange, kind of profound-like statements… freeform, simultaneously coherent and incomprehensible"—to imagery of the harbor at Long Beach. She discovered Midjourney while hanging out with a friend "who was obsessed with it. It has this notification sound that is very particular, and I was like, 'What are you doing on your phone all day?'" When asked to pitch a microbudget short by her production collective, she decided to use the program to help develop its story through visuals. "I've been using Midjourney to create the storyboards—kind of building the world, the characters and the environments," she says.

Accessible through Discord, Midjourney, which launched to the public as an open beta in July 2022, is one of several programs harnessing the power of AI, neural networks and large data sets that are seizing the interest of creators of all sorts. Others include DALL-E and Stable Diffusion, and collectively they've created both excitement and a bit of panic among the creative classes. There are, as expected, differences across the various platforms. Midjourney is seen as more "painterly" in its outputs and, along with DALL-E, has more restrictive content rules, limiting the creation of violent, sexual and trademark-infringing imagery, whereas the open-source Stable Diffusion does not.

In the few months since its public launch, Midjourney has been used by architects in a speculative design workshop to reimagine the island of Capri, to design book and magazine covers (including, appropriately, the cover of The Economist's AI-focused June 11, 2022, issue) and to create theater design concept art, digital art competition winners and fashion prototypes. And, as Fall demonstrates, it's already being used in very practical ways in the film, television and new media worlds.

Floating City by Tino Schaedler, created in Midjourney

Tino Schaedler is a production designer, art director and architect whose work spans feature films, music videos and commercials. "There are moments in life that we remember," he says via Zoom, "and, for me, my first experience with Midjourney is one of those moments. It was a revelation."

Schaedler began in the business as an art director, working on such films as Harry Potter and the Order of the Phoenix and Charlie and the Chocolate Factory. At the time he worked on the latter film, he remembers, “Everyone was drafting by hand. I had come from architecture and had just finished an additional post-graduate diploma in visual effects. So, I knew Maya, the high-end modeling software, and no one in the art department knew any of those tools. Back then, in 2003, sets were being fully built in 3D instead of digital matte paintings. I can sketch pretty well, but I was never the one able to do super photorealistic illustrations. So, feeling the power of 3D software, which allowed me to create stuff I never would have been able to draw by hand, was amazing.

“When I came to the States in 2008, that was when filmmakers started applying game engines and digital camera tools [in] the art department,” he continues. “So, I’ve always been into exploring how these tools in the tech area can help push the way we visualize ideas and design.”

For Schaedler, programs like Midjourney offer radical new possibilities when it comes to workflow, volume and creativity. “In film, you have an art department and three or four illustrators,” he says. “You have a dialogue with them, send them out, and hopefully they come back tomorrow with something that really adds to the [ideas] I give them. I give some notes, and they come back again—it’s kind of a back-and-forth dialogue. With Midjourney, I feel like I’ve got 300 illustrators working for me, and within 10 seconds each illustrator comes back to me with stuff that’s unexpected and that I never would have come up with myself, whether that’s in terms of scale [and] complexity or in terms of mistakes and different directions.”

Like many using these tools, Schaedler has developed his own personalized workflow and favorite prompts. “It’s almost like Midjourney is the first tool and DALL-E the second,” he says, “then I’ll either go into 3D modeling or working with an illustrator.” Midjourney is good for “that creative spark—something outside of what you might have been thinking,” whereas DALL-E allows Schaedler to iterate and create larger images. For example, from a prompt calling for “Blade Runner meets Aztec pyramids,” he’ll take two or three images produced by Midjourney and create “a whole city, a bigger-scale painting from it” in DALL-E. 

Griffin Frazen's brutalist buildings, image from Midjourney

If Schaedler is particularly interested in using generative AI for giant projects involving world-building and the metaverse, production designers working on more earthbound projects are finding these tools useful, too. Judy Becker, the Oscar-nominated production designer of Brokeback Mountain and, this year, Amsterdam, is currently working on Brady Corbet’s new film, The Brutalist, a drama about a Bauhaus-trained architect who was sent to a concentration camp during World War II. For a sequence showing a retrospective of his work, Becker says architecture consultant Griffin Frazen used Midjourney “to create three Brutalist buildings quite quickly” by using references to key figures in the movement along with other architectural terms. “Now I will have these digital prints redrawn by an illustrator to create mythical buildings.”

Another production designer working on an independent film currently in post spoke of an inspired use case for generative AI images: creating faces for set decoration photos. As any producer knows, clearing background photos can be very difficult, as copyright could rest with an entity or an individual photographer, and privacy issues could also be involved. For a set of magazine covers, this designer created perfectly realistic faces using a generative AI program and comped them into original designs.

But not everyone who has been exploring generative AI programs is sold on their use, particularly when the images are used directly and not just as concept art. Richard Toyon is a production designer whose credits include the TV shows Barry, Silicon Valley, Reservation Dogs and, most recently, Winning Time: The Rise of the Lakers Dynasty. He says he hasn't found generative AI as useful as he expected. "As a physical artist," he says, "I paint, I draw, I do all of those things, and I often feel that I can generate images as fast as, and more nuanced than, AI. I'm maybe holding out more hope for the future as more nuancing is available."

“As a production designer, one of the things that we have to do so often is get across very specific story points [with our designs],” he continues. “On the last show we were on, we just couldn’t get the [AI art] close enough to where it needed to be. Revising it, giving it the slight twist it needed was kind of out of the bounds of the AI we were using. It was just easier to create the artwork ourselves and manipulate it accordingly.”

On Winning Time, one of Toyon’s team members tried using AI to create set dressing background art in the style of a well-known ’60s artist. Using one of the generative AI programs, they created four images, “and there was a lot of buzz in our large art department about it,” admits Toyon. “We had to have a meeting, and there were several people who felt kind of guarded about it. Our graphic designer felt he could have created that same art, just better-looking and more nuanced, in similar amounts of time. So, we decided that, at this point, we’re not going to use it.” 

On Reservation Dogs, Toyon's art department had to create a logo for a potato chip truck, the Flaming Flamers. Despite the many people online using generative AI to create logos, the process would not have been productive, at least not at that time, because, again, Toyon says it was simply too difficult to achieve the level of nuance required. "I have to be very precise," he says. "I can't make something too comedic or bizarre. Sometimes, it just has to sit back and not be part of the texture of the background or the foreground. If it's too hot, odd, colorful or weirdly shaped, then it's not going to work. Our graphic designer did the design, and it was perfect."

In discussing his various attempts to incorporate generative AI works into his television series, Toyon says he's been careful to have conversations with studio legal departments, which have in some cases approved images from one generator but not another, most likely because of details in their respective terms of service. This speaks to what attorney Ryan Abbott, of Brown Neri Smith Kahn, LLP, dubbed the "wild west" aspect of generative AI use and copyright law in a November 2022 Bloomberg Law article. The U.S. Copyright Office requires human authorship to register a copyright. "The Office will not knowingly grant registration to a work that was claimed to have been created solely by machine with artificial intelligence," it said in a statement to Bloomberg. Because DALL-E, for example, includes copyrighted images in its data set, some lawyers predict that claims could target the generative AI programs themselves. Mark Lemley, director of Stanford Law School's Program in Law, Science and Technology, told Bloomberg, "If you train the AI to make Picasso-like works, or Mondrian-like works, and it makes one that is sufficiently similar, that could be a copyright infringement claim."

In an interview with Filmmaker, Steven Masur, partner and founder at Masur Griffitts Avidor, LLP, points out an irony: so-called strike-suit lawyers themselves use computer software to scour the web for unlicensed images, often by commercial photographers, over which they can threaten lawsuits. He foresees them possibly going after generative AI images as those images begin to proliferate. "These lawyers are pretty aggressive," he says. "They would say, 'This picture of a flower is an adaptation of my client's work, give me $30,000.'" A Verge headline from November 2022 summarizes the current state of things: "The scary truth about AI copyright is nobody knows what will happen next."

Given that the production designers and filmmakers quoted for this article are using Midjourney and DALL-E to create concept art, or artwork that will then be revised and transformed by human hand, many of the above questions over copyright registration and possible infringement are obviated. But what of the future? Search the web or Twitter for the phrase "Midjourney v4 vs v3" and you'll find truly astonishing side-by-sides of images from the just-released beta compared to the previous iteration. Some of the issues around nuancing and being able to specify detail are rapidly receding. Wrote Wharton professor Ethan Mollick on Twitter, "A less capable technology is developing faster than a stable dominant technology (human illustration) and starting to handle more use cases. Except it is happening very quickly…. Seriously, everyone whose job touches on writing, images, video or music should realize that the pace of improvement here is very fast & also, unlike other areas of AI, like robotics, there are not any obvious barriers to improvement."

Given the pace of AI progress, do designers need to learn to use these tools? Not necessarily, says Schaedler. “In the beginning, when I was working with 3D modeling software, I was working with Alex McDowell [Minority Report] and Dennis Gassner [Blade Runner 2049], both big mentors of mine. They each had their own way of getting into a project and communicating their vision. For example, Dennis is a very verbal man, and it’s all in the way that he describes things. He has a researcher, and it all happens through communication, looking through books. So, I don’t think that it’s necessary to know these tools to create amazing production design.”

But, he continues, "Do they empower certain people? Yes. I can say without any ego that my images, and the images of the people who work for me at my company, have a different level because of the design sensitivity we have and because we're able to tap into this." Echoing comments made by Fall, Schaedler says he gets into a flow state as he bounces prompts into Midjourney, constantly tweaking a phrase. He suggests that designers hone their textual literacy by playing around with their own prompts and searching out the websites and work of designers like director, producer and VFX artist Olaf Blomerus, who often publishes the prompts used to generate his eye-popping science-fiction imagery. (Offering one of his own favorite prompts, Schaedler says, "'Neon lights illuminating the horizon' always creates amazing, unexpected lighting.")

“If there’s a painting you like, describe that painting and see where it leads you,” recommends Schaedler. “Experiment and try things out. For me, the frustration and satisfaction that comes from working with [Midjourney] is kind of like being in art school and learning to work with clay or illustration.” 

With the image quality of generative AI leaping forward every few months, and with the law (as well as, presumably soon, the various art department guilds and unions) racing to catch up, this technology may be, at the moment, in the same sort of creatively intriguing liminal space that consumer-grade video occupied in the late '90s, when filmmakers turned the smudgy, lo-fi image of miniDV to great aesthetic effect. Even in v4, if you type a prompt into Midjourney, you may get something quite similar to what you are requesting or something quite different, and often productively so. At a moment when the AI is performing its own inscrutable calculations, there's a dreamy quality to these imprecise images, a variability that, as both Schaedler and Fall extoll, can be creatively fruitful in terms of conceptualizing not just image but story and character.

Says Fall about her work with Midjourney on Wrongdoer, "It's hard not to be influenced by it. There have been moments when I'll put in a prompt and then I'll get this strange image, and that actually changes what we understand about the character." For example, Fall wanted a killer's lair to be full of "bloody plastics, like a Dexter vibe, you know? But there're a bunch of words that are banned in Midjourney, like 'blood.' So, making images that are scary or gory, you have to go around it. 'Bloody rags' turns into 'red wet rags.' But the way it was producing [these images], I was like, 'He needs to work in a clothing place, like an industrial laundromat where they custom dye fabric.' An image of hands in red water made me think he needs to be a textile worker."

“It’s definitely a fight between me and the machine,” Fall laughs. “I always have to check in with myself to make sure I’m not just writing the story off of the images that are being produced, even though they might inspire bits and pieces. I’m really cognizant of keeping the overall story arc and the characters, their motivations and psychology the same.”

Concludes Schaedler, “I love that there are all these different [tools] that allow my creative [mind] to trigger. Each of us needs to find the tools that work for us, and I think in the future there will be new mediums where this technology is going to be truly empowering.”
