This was my 7th show with Theater Mitu and possibly the collaboration of which I am the most proud. When I began with Mitu, the relationship was very traditional: I was a sound designer being hired to design a particular show. This mode has, over the past 10 years, slowly changed for everyone working with the company, myself included. In its place, we’ve found this methodology of creation/curation as individual artists: the company spends weeks creating these “autonomous performance installations”, centered around a certain piece of text, theme, or idea. The installations are then studied and curated by Rubén and the company into a cohesive theatrical piece.
REMNANT was a piece 3 years in the making. Centered on themes of death and the journey toward it (and back), the show was sourced from dozens of interviews with people around the world: soldiers returned from war, trauma doctors, nurses, people with terminal illnesses and their families, mental health professionals, teachers, scholars, and other artists. The piece was an emotionally charged combination of performance and art installation that studied death across centuries and cultural boundaries.
In terms of design and space, the audience was split into 3 banks of seating, each bank placed in front of a structure. At the beginning of the piece, the three performance structures, each containing 2-4 performers, would begin their individual pieces simultaneously. The audience's aural experience was delivered via headphones at each seat, corresponding to the performance structure in front of them. After 20 minutes, the audience was invited to get up and move to the next bank of seating, and then the pieces would begin again. In this manner, all audience members saw the complete piece, but in one of 3 different orders, thereby shaping their "narrative" experience.
In addition to my designer duties, I was also a performer in the piece! My track involved a performance on my modular synthesizer (audio sample below) - the first time I'd played my synth for anyone aside from my girlfriend - and singing, which I hadn't done in performance in probably 15 years. The 7-show weeks were an interesting mental shift from my usual practice of "design a piece, then move on to the next project", but the real brain-melter was that for every performance the audience saw, we as performers were running our tracks three times! On a two-show day, we'd perform, in essence, 6 times. It was a real test of focus to stay on track (even with the in-ear monitors telling you what to do) by run #4 or 5 of the day, especially for someone like me, who isn't well-practiced in performing!
Here's where I'll dig into some of the technical details of the show, so if that's your jam, read on!

As both the lighting and sound designer (and a performer), I had a heavy workload handling all the programming for the piece. The sound computer, a Mac Mini running QLab, was the show control "master" for the production. It sent OSC commands to the two projections computers, both MacBook Pros, and to the light board, an ETC Ion.

The Ion, which I programmed, ran 4 different cue lists: one per structure, plus a cue list for house lights, work lights, etc. The multiple cue lists allowed the three structures to run independently of each other, so I didn't have to worry as much about accidentally capturing a channel from one structure in a cue for another structure. Attilio Rigotti programmed the two projections computers, running Isadora, which "listened" for cue words describing the sequences we were entering or exiting; I would pass those cue words in an OSC string to the computers at the top of sequences to trigger the projections.

The QLab computer itself used two Roland Octa-Capture interfaces combined in an Aggregate Device, with 16 inputs: a guitar, a keyboard, my synth, a drum pad, and a whole bunch of microphones. It had 13 outputs: stereo mixes to each of the three audience platforms, a general house ambience that fed speakers in the ceiling for pre- and post-show, a subwoofer send, and individual in-ear mixes for the three structures, plus additional personal mixes for myself and Ada, who sang and played the drum pad and guitar during the show.

All the performers were on wireless in-ears so that we could hear the cue lines, click tracks, and countdowns for staging and movement. To the audience, it looked and felt seamless, as if the technical elements were right in step with us, responding to our actions. In truth, it took a lot of careful timing, down to the millisecond, to make sure everything lined up!
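For the curious, here's a rough sketch of what one of those cue-word messages might look like on the wire. This is not the show's actual code - QLab built and sent these messages itself via its network cues - but the OSC format is simple enough to hand-assemble: a null-padded address, a type-tag string, then the arguments, all padded to 4-byte boundaries. The address `/isadora/1`, the port 1234, the IP address, and the cue word are all illustrative assumptions.

```python
import socket

def osc_pad(b: bytes) -> bytes:
    """Pad to a 4-byte boundary with NULs; OSC strings always get at least one NUL."""
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, *args: str) -> bytes:
    """Build a minimal OSC message carrying only string arguments."""
    msg = osc_pad(address.encode())                    # address pattern, e.g. "/isadora/1"
    msg += osc_pad(("," + "s" * len(args)).encode())   # type tags: one "s" per string arg
    for a in args:
        msg += osc_pad(a.encode())
    return msg

# Hypothetical cue word for a projection machine; "/isadora/1" and port 1234
# follow Isadora's documented OSC defaults, but the IP and cue word are made up.
packet = osc_message("/isadora/1", "structure2_enter")
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(packet, ("192.168.1.20", 1234))  # uncomment with a real destination
```

One message per sequence top was all it took: Isadora watched the incoming string, matched it against its scene triggers, and ran the corresponding projection cues.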