
Elements of Oz: Producing Live Video and Interactive Theater

When MGM undertook to produce a film adaptation of the book The Wonderful Wizard of Oz in 1938, the studio wanted to use all the newest technological tools — think Technicolor — and all the special effects wizardry it possibly could to bring the fantastic story to life. When the Builders Association decided to make the film the subject of their latest play last year — Elements of Oz ran Off-Broadway throughout December — they did much the same thing. But for an innovative theater company in 2016, that meant integrating live video production, online clips, and a multitasking phone app into the onstage proceedings.

New media isn’t exactly new territory for the group; they’ve been integrating video and technology into their practice for years. For instance, House/Divided, a 2011 adaptation of The Grapes of Wrath, featured documentary footage projected on the walls of an actual home that had been foreclosed upon during the 2008 mortgage crisis, which is typical of the ways the Builders use video and recorded sound in their productions. Elements of Oz, directed by Marianne Weems, uses those same elements while pushing even further into the realm of augmented reality (AR) via an app every patron downloads to their phone.

The show is aptly titled, as it presents essentially a nonlinear nonfiction staged essay through elements of Oz — the story, the book, the 1939 film’s production, and its place in society since, including excursions into thinkers like Ayn Rand — rather than a traditional retelling of the plot. Three main actors — Moe Angelos, Sean Donovan, and Hannah Heller — took turns embodying narrators as well as all the characters in the story (throwing gender to the wind). Scenes and vignettes flitted across the stage, sometimes with more than one happening at once, leaving it largely to the viewer to mentally reassemble them into a linear whole. But rarely was an actor onstage alone: just as often, a video crew recorded a performance, overhead monitors played archival and prerecorded original video, or the audience members’ phones added to the display. The AR in the latter cases is typologically akin to the huge plastic sets and matte paintings of the 1939 film: by holding up our phones we could see a forest of Munchkinland flowers, a swarm of flying monkeys, or a gentle snow sent to end the magic of the soporific poppies, all superimposed on the actors onstage or the people sitting around us.

The video segments were also strikingly interesting from a production standpoint: shots were recorded out of order and sometimes several minutes apart, including reverse angles, cutaways, and individual lines of dialogue. But at several points in the play the onstage action would pause while the hanging monitors showed the footage, now assembled in the “correct” order, that “properly” showed Dorothy’s interactions with the Scarecrow, the Wicked Witch, Glinda, or the tornado. These scenes, assembled at lightning speed (we discuss their editing below), provided humor and whiz-bang awe at how they had been created before our eyes without our even realizing it, but at a deeper level, through them the play was asking which version was more real: the disjointed live film production that occurred onstage or the reassembled scene that told — or seemed to tell — a coherent narrative?

Indeed, for me the entire production was heady stuff, calling to mind Jacques Derrida and Jean-Louis Baudry (with a heavy helping of Bertolt Brecht) as much as L. Frank Baum and Victor Fleming. I’d just seen The Wizard of Oz projected at the United Palace in Washington Heights a few days before attending the play, so the film was fresh in my mind, and the overall effect was to deconstruct the film, laying bare its parts as the product of an industrial process and exposing the instability of its meaning as we’ve interpreted and reinterpreted it in the seventy-seven years since its release. The augmented reality was entertaining, the video segments impressive and occasionally hilarious, but for me the most ideologically telling moments of the production came whenever the audience’s phones simultaneously lit up in a chorus of videos culled from YouTube: to hear dozens of versions of “Over the Rainbow” all at once was cacophonous, sonorous, and insightful about the many uses to which the public has put the film over the years, from the innocent little girl singing away on my phone to a nightclub chanteuse on a phone in front of me to the Mormon Tabernacle Choir intoning the song on the phone of the patron next to me. In an age when our entire existence is shaped by social media, this YouTube mosaic spoke as much about the cultural ramifications of the song as about the splintering of contemporary society into clips and memes. And it is in the exposing of those elements, as much as in the shots and editing that make up the elements of film production, that Elements of Oz best expressed its own voice.

Although the production has wrapped, Elements of Oz still presents interesting lessons for anyone excited by the possibilities of AR, live filmmaking, and interactive theater. I spoke individually with two of the people involved in the technological side of the production: Jesse Garrison, who oversaw the interactive design and programming of the app as well as assisted with the video, and Austin Switser, the project’s head video designer. Their comments are included below, beginning with Garrison.

Filmmaker: I’m curious about the collaborative process between the traditional theater personnel and those of you creating this new interactive technology. At what point did you come on board and how did the show and the app evolve over the course of its development? Was the finished product roughly similar to how it was originally envisioned or did it change fairly drastically?

Garrison: I came on originally as the Assistant Video Designer. I’ve been working with the Builders Association for a few years now in that role, and that’s how I was brought into this production. The first member of the AR team was John Cleater, who’d done an AR effect in a previous show (House/Divided). Since the AR was going to be much more extensive in this piece, and since I had some interest and experience in making interactive work, I slid over to the app development team for most of the process. Towards the end, however, our new Assistant Video Designer left and I moved back into the world of video.

Much of the early process was trying to figure out what was feasible, what was exciting, and where those things overlapped. The first big hurdle was coming up with a cueing system. Most AR applications have static content, usually selected by user input or triggered by a marker of some sort. If we had to rely on a system like that, it would’ve really limited our options for the app. So we developed a cueing mechanism built on the native multiplayer networking system of the engine we were using. This gave us the ability not only to trigger augmented reality effects but to trigger any number of other behaviors. Starting there, we were able to build in other triggered events, like playing video and audio and sending text. This led to possibly the biggest growth in the app, developing functionality beyond AR — it opened up a lot of possibilities.
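
To make that concrete, here is a minimal sketch of the kind of cueing broadcast Garrison describes, written as a plain WebSocket server rather than the game-engine networking the team actually used; the cue names, message shape, and port are all hypothetical.

```typescript
// Hypothetical cueing server sketch: an operator fires a named cue and every
// connected phone receives the same message at roughly the same moment.
import { WebSocketServer, WebSocket } from "ws";

type Cue =
  | { kind: "ar"; scene: string }     // e.g. { kind: "ar", scene: "tornado" }
  | { kind: "video"; url: string }
  | { kind: "audio"; group: string }
  | { kind: "text"; body: string };

const wss = new WebSocketServer({ port: 8080 });
const clients = new Set<WebSocket>();

wss.on("connection", (socket) => {
  clients.add(socket);
  socket.on("close", () => clients.delete(socket));
});

// Called from the show's cue list when the stage action reaches a trigger point.
export function fireCue(cue: Cue): void {
  const message = JSON.stringify(cue);
  for (const socket of clients) {
    if (socket.readyState === WebSocket.OPEN) socket.send(message);
  }
}
```

The point is simply that once every device in the house listens on one channel, a cue can carry any behavior, not just an AR trigger, which is the growth beyond AR that Garrison mentions.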

As the process went on, the app kept growing, as did the team. We brought on two collaborators from Carnegie Mellon, Larry Shea and Kevan Loney. Kevan took on creating original 3D models and animation and Larry worked on the network hardware. What began as an effort to reduce the scale of the show (by limiting the physical build on stage and relying more on AR for scenic visuals) ended up ballooning into one of the larger departments in the production.

Filmmaker: Given how it grew incrementally, can you explain the basic concept of the finished app, its functions, and how it interacts with the onstage performance and video?

Garrison: Sure. The app is meant to serve as another layer of media in the show. One of the initial inspirations for the app was creating an analog for the escapist nature of “Oz” in the devices we carry with us to escape reality on a daily basis. We also were interested in what the contemporary equivalent of Technicolor would be — and that’s where we think AR fit.

The app has two basic modes — one for outside the show, and one for during the performance. If the user isn’t at the show, it provides some information about the show and has some basic AR effects — a storm (to precede the cyclone) and a few simple target-based augmentations that can be used with images from the original book’s illustrations. Once the audience comes to the show, we switch the phones into what we called “active” mode, which allowed us to trigger events based on the action on stage. The phones were connected to a single cueing server on the network, which allowed us to synchronize the behavior for the entire audience. At various times, we’d play video (like our “Somewhere Over the Rainbow” YouTube chorus) and audio clips or send text and images. In several key scenes, we would trigger augmented reality scenes, layering over the view of the stage with CG elements, like a tornado, blossoming poppies, flying monkeys, etc. It allowed us to create visual elements all around the viewer, too, not just between the viewer and the performers.

Most of the time, the cues reacted to the action on stage, just adding another layer of mediation to the audience’s experience, but there are a couple sound cues that the performers respond to. We worked closely with the sound designer, Dan Dobson, to make sound files that would sound good on phone speakers, as well as sound good when played on 100 devices at once. We also made groups of sounds that would work together when played simultaneously, then had the audience’s phones play them randomly.
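
The “groups of sounds played randomly” idea can be quite small in code. Here is a rough sketch under the assumption that each group is simply a list of phone-speaker-friendly files designed to layer well, with each device independently picking one, so a hundred phones produce a shifting chord rather than a hundred copies of the same clip; the group names and file names are invented.

```typescript
// Hypothetical sound groups: files within a group are designed to layer well.
const soundGroups: Record<string, string[]> = {
  wind: ["wind_a.mp3", "wind_b.mp3", "wind_c.mp3"],
  chimes: ["chimes_low.mp3", "chimes_mid.mp3", "chimes_high.mp3"],
};

// Each phone picks its own file at random, so simultaneous playback varies per device.
function playRandomFromGroup(group: string): void {
  const files = soundGroups[group] ?? [];
  if (files.length === 0) return;
  const pick = files[Math.floor(Math.random() * files.length)];
  new Audio(pick).play();  // browser Audio API as a stand-in for the app's audio layer
}
```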

Most of the collaboration process was spent figuring out how to balance the action on stage with that on the phones. Distraction was a part of the concept, but we didn’t want to go overboard on it and lose the audience’s attention. When to use what functionality, how often, how much, etc. It took a lot of finessing, but I think we got there eventually.

Filmmaker: Was there any aspect of the app that was particularly difficult to create?

Garrison: The hardest part of the process was the different timelines on which theatrical development and software development happen. The Builders, in particular, have a process that depends on constant improvement and tinkering. Scenes and structure would evolve on a daily basis during the rehearsal process. Even once we got into our run, we were changing scenes in pretty significant ways. This is pretty normal, and is one of the most exciting parts of the process for me, but when you’re working with software, you run into some challenges: first, little changes can have big ramifications — everything needs to be tested again before publishing. Second, it’s much less obvious to collaborators what’s a little change and what’s a big one; as the functionality grows, so does the interdependence of the elements. Lastly, publishing to the app stores is a process, particularly for iOS — review times before a submitted app can go live vary from a day to a week. Any changes may not be available to users for quite some time, and when they are, you have to make sure that everyone has the most recent version, etc. All of this slows down the iteration rate — it makes it significantly harder to experiment, which is pretty fundamental to the devising process in theater.

The hardest part of the app itself was trying to orient the direction the phone is looking with respect to the stage. Sounds easy, but it was definitely what I spent most of my time on. The first version of the show relied on an image target above the stage that positioned the virtual camera in the scene to correspond to the viewer’s physical position in the audience. It was a pretty clunky system, so we did without it in the New York run. We tried using the devices’ compasses to orient the phones, but that proved difficult and unreliable. We ended up doing a couple of tricks behind the scenes to fake the orientation, but we also altered the content slightly so it didn’t depend as much on exact orientation.
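
For a sense of what the compass approach might have looked like (a sketch only; the stage bearing and the idea of applying the result as a yaw offset on the virtual camera are my assumptions, and as Garrison notes the approach proved unreliable in practice), the core is a wrapped angle difference between the device heading and the known direction of the stage.

```typescript
// Hypothetical compass-based orientation: rotate the virtual scene so that
// "toward the stage" in the room maps to "forward" in the AR scene.
const STAGE_BEARING_DEG = 350; // assumed compass bearing from the seats to the stage

// Returns how far (in degrees) the phone is turned to the right of the stage.
function yawOffsetDegrees(compassHeadingDeg: number): number {
  let offset = compassHeadingDeg - STAGE_BEARING_DEG;
  if (offset > 180) offset -= 360;   // wrap into the range (-180, 180]
  if (offset < -180) offset += 360;
  return offset;
}

// Example: a phone reporting a heading of 20 degrees is 30 degrees right of the stage.
console.log(yawOffsetDegrees(20)); // 30
```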

Filmmaker: Similarly, what was the most rewarding part of the show for you?

Garrison: By far, the best part of the show for me is the team. It’s a fantastic group of people, all extremely talented, motivated, and passionate. Their commitment to the process and product was a constant source of inspiration. Second to that, from my point of view on stage, I was able to look out and see the audience in the darkness. When the phones would activate, their faces would light up with this wonderful soft glow — I loved being able to see their reactions.

**

Filmmaker: The final project included a mix of archival, prerecorded, and live video. How did you go about mixing all of these — and the live performances — into a coherent whole?

Switser: In Elements of Oz I think the idea of a “final project” is actually the performance that happens onstage. The video content is based entirely around what happens during the performance, and the making of that content is the performance. Oftentimes when lines were dropped, confused, or flubbed it made the video content less coherent, but it made the performance more coherent. It was more interesting when we saw the process at work. If it was too perfect then we lost a certain element of liveness, which is what I believe made an abstracted idea of a film shoot interesting to watch on stage.

Filmmaker: Several times the onstage actors would act out certain lines from The Wizard of Oz, out of order, and in a matter of minutes you had them edited into completed scenes; the reveal of those scenes was usually just as satisfying for the audience as the performances themselves. Can you talk about your editing process in reconstructing these new video segments live for every performance?

Switser: It’s a bit tricky to explain, but basically each shot was recorded as a numbered video file. We developed the programming so that we could shoot each line out of order and then it would play all the files back in a different order. The editing was done automatically. Once we got the programming done it was really fun to play with. We could try different ideas and play around with lines and camera movements and then instantaneously watch the edit. It reminds me of when I first picked up a video camera and would rely wholly on in-camera edits. If you missed a line, too bad — you get one shot and then it’s on to the next. It revived a sense of playfulness in my work that I think I have been missing recently.
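
A bare-bones version of the automatic edit Switser describes could be little more than an ordered list of shot numbers played back to back; the file-naming scheme and the HTML video element below are stand-ins of my own, not the production’s actual system.

```typescript
// Hypothetical: each recorded shot is saved as a numbered file, and an "edit" is
// just the list of shot numbers in script order, regardless of shooting order.
const editOrder = [12, 3, 7, 1, 9];

async function playEdit(video: HTMLVideoElement, shots: number[]): Promise<void> {
  for (const shot of shots) {
    video.src = `shots/shot_${shot}.mp4`;
    await video.play();
    // Wait for the current shot to end before cueing the next one.
    await new Promise<void>((resolve) =>
      video.addEventListener("ended", () => resolve(), { once: true })
    );
  }
}

// Usage: playEdit(document.querySelector("video")!, editOrder);
```

In the show itself the numbered files were recorded live only minutes earlier, so the same loop amounts to an edit assembled in real time before the audience.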

Filmmaker: Filmmakers are increasingly collaborating with coders, software developers, and app designers. To what extent did your work overlap with the people creating the app? What was the collaborative process like with them — and with the theatrical artists?

Switser: This is a bit of a unique situation because the “app developers” were actually “theatrical artists.” We all collaborated very closely and worked together to figure out how to make it all work. No one really knew how to do any of it when we started. There were many times in the process when the answer was something none of us had ever tried before, and I think that’s why we were all so engaged in the work. It’s just a group of incredibly talented and curious folks who love a good challenge.
