A 5, 6, 7, 8! It's Opening Night!
I'm back at the Ordway in St. Paul, MN, for my second show - A Chorus Line! It's great to be back here with all the familiar faces on the staff and crew working on such a classic piece. James Rocco, Artistic Director of the Ordway and director of my previous Ordway show, Pirates of Penzance, is back in the directing chair for this one. The other members of the design team, lighting designer Pam Gray and costume designers Mary Beth Gagner and Andrea Gross, are new to me, as is much of the cast, but we've all become friendly quickly.
This show, from a sound perspective, is deceptively simple. It was written before wireless microphones were so prevalent on Broadway stages, so the orchestrations are careful to keep out of the way of the vocals and the staging brings actors front and center to sing or speak (the days of foot mics!).
What could be easier?
Well, for one, the choreography is so iconic and central to the piece that there are many elements that simply aren't up for discussion. The big kick-lines where everyone wears top hats? Yeah, good luck changing that! I started the cast with their mics (predominantly Countryman B6s) at the center of their foreheads, hoping not to forfeit the ideal mic position for the entire show because of two musical numbers. I managed to keep most of the cast in that position, but a few of the actors (due to hairstyle, comfort, etc.) were moved to ear rigs.
Additionally, the character of Zach, who is the choreographer of the show the dancers are auditioning for, spends 90% of the show sitting at a table in the middle of the house, on a desktop microphone, interacting with the cast - asking questions, running the audition. So what's the problem? He's sitting in one spot, with a big ole' mic right in front of his face! Sounds perfect!
Physics just isn't on my side for this one. Picture Zach sitting amongst audience members in the middle of the theater. There is a circle of audience members all around Zach who will, no matter how loud I make Zach's mic, hear his acoustic voice before the amplified sound from the speakers at the proscenium. We, as sound designers, spend a lot of time aligning the speakers so that the sound from all the different speakers arrives at the audience members at the same time. This alignment improves intelligibility and allows us to delay the sound system as a whole back to the actors speaking onstage. Since the speakers are closer to the audience than the actors are (usually), we can delay the speakers so the amplified sound arrives at the same time as the actors' acoustic, un-amplified sound coming from their mouths. This is all well and good...until you have an actor in front of the speakers. Aside from the danger of feedback, there's simply no way to speed the sound up so that the amplified sound from the speakers reaches the audience at the same time as the actor's acoustic sound.
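The alignment arithmetic above can be sketched in a few lines. This is my own toy illustration, not the actual Ordway figures, assuming sound travels roughly 1.13 feet per millisecond:

```python
# Toy sketch of the speaker-delay alignment described above.
# Sound travels roughly 1.13 ft per millisecond at room temperature,
# so a speaker is delayed by the extra time the acoustic sound needs
# to cover the actor-to-listener path.

SPEED_OF_SOUND_FT_PER_MS = 1.13  # ~1130 ft/s, an approximation

def speaker_delay_ms(actor_to_listener_ft, speaker_to_listener_ft):
    """Delay to add to a speaker so its sound arrives with the actor's
    acoustic sound. A negative result means the speaker is *farther*
    than the actor -- no amount of delay can fix that."""
    acoustic_ms = actor_to_listener_ft / SPEED_OF_SOUND_FT_PER_MS
    amplified_ms = speaker_to_listener_ft / SPEED_OF_SOUND_FT_PER_MS
    return acoustic_ms - amplified_ms

# Normal case: actor upstage (40 ft from a listener), proscenium
# speaker closer (25 ft): positive, so we delay the speaker.
print(round(speaker_delay_ms(40, 25), 1))

# Zach's case: he's 5 ft from a nearby listener, but the proscenium
# speakers are 60 ft away: negative, and physics says no.
print(round(speaker_delay_ms(5, 60), 1))
```

The negative result in the second case is exactly the problem: you would need the speaker's sound to arrive before it was produced.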
There's a branch of acoustics called "psychoacoustics" which deals with how our brains interpret sound. I find it wildly interesting and informative, and it also happens to be wildly important and influential in what I do as a sound designer. So, what does psychoacoustics have to say on the issue of an actor sitting in the middle of the theater? Well, when the brain receives the same sound twice, it can react to that event in a few different ways. If the sounds arrive within ~5 milliseconds of each other, the brain fuses the two sounds into one and determines the directionality of the sound from whichever sound is louder. Between roughly 5ms and 50ms, the brain sources the direction of the sound to whichever sound reached the ear first, even if the second sound is louder than the first. But beyond roughly 50ms (all these times are dependent on the material - our brains are more attuned to human speech than, say, the sound of an orchestra), your brain hears the two sounds as separate and distinct - you've got an echo. Echo and intelligibility aren't exactly the best of friends.

It's a little like those logic problems from high school: two trains leave their stations at the same time... Imagine a train leaving an actor's mouth travelling 1 foot per millisecond (roughly the speed of sound) and imagine another train leaving a speaker travelling the same speed. If the actor is only 5 feet from you, but the speaker is 60 feet away...well, those trains are going to arrive 55ms apart...and you'll hear an echo. OK, that was a bad metaphor.
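The train arithmetic can be written out directly. Here's a toy classifier using the approximate windows just described (and the text's round 1 ft/ms figure) - real perception depends heavily on the material, so treat the thresholds as illustrative:

```python
# Toy classifier for the arrival-time windows described above, using
# the text's approximation of 1 ft per millisecond for the speed of
# sound. Thresholds are illustrative, not hard perceptual constants.

SPEED_OF_SOUND_FT_PER_MS = 1.0  # the "train" approximation from the text

def perceived(actor_ft, speaker_ft):
    """Classify what a listener hears given the distance (in feet) to
    the actor's mouth and to the nearest speaker."""
    gap_ms = abs(actor_ft - speaker_ft) / SPEED_OF_SOUND_FT_PER_MS
    if gap_ms <= 5:
        return "fused (localize to the louder source)"
    elif gap_ms <= 50:
        return "precedence (localize to the first arrival)"
    else:
        return "echo (two distinct sounds)"

# Zach's neighbors: he's 5 ft away, the speakers are 60 ft away,
# so the two "trains" arrive 55ms apart.
print(perceived(5, 60))   # prints the echo case
```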
As there's little to be done about the arrival times for the people around Zach, my goal was to make it a clear, crisp sound so that I could keep as much intelligibility as possible, while still making interesting conceptual choices. The end result is that we bring Zach's desk mic into the console on two different channels. One channel is sent to the main proscenium vocal system, and has a crazy compressor on it, along with a high-pass EQ to make it sound "mic-y". The second channel, with identical input signal, is delayed 60ms, and then sent to a reverb and bussed out to the rear and side surrounds. The reverb (which has a sort of short room verb setting) is also bussed to the surrounds. The two channel trick allows me to quickly mix the "dry" signal with the "effect" signal, and allows me to delay the two channels differently. My aim is to create a sense that Zach's voice is coming from the middle of this cavernous, empty Broadway theater, with his voice bouncing off the walls and the empty seats.
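The core of the two-channel trick - one mic feeding a dry channel and a delayed copy - can be sketched in a few lines. This is a bare illustration only (the compressor, EQ, and reverb are left out), and the 48 kHz sample rate is my assumption, not the show's actual console settings:

```python
# Minimal sketch of splitting one mic input onto two channels, with the
# second copy delayed 60ms before heading to the effects busses.
# Assumes a 48 kHz sample rate; illustrative numbers only.

SAMPLE_RATE = 48_000   # samples per second (assumed)
DELAY_MS = 60
delay_samples = SAMPLE_RATE * DELAY_MS // 1000   # 2880 samples of delay

def split_and_delay(mic_samples):
    """Return (dry, wet): the dry copy goes to the main proscenium
    vocal system; the wet copy, padded with 60ms of silence, goes on
    to the reverb and the rear/side surrounds."""
    dry = list(mic_samples)
    wet = [0.0] * delay_samples + dry
    return dry, wet

dry, wet = split_and_delay([0.5, -0.25, 0.1])
print(len(wet) - len(dry))   # the wet channel trails by 2880 samples
```

Keeping the two copies on separate channels is what makes the dry/effect balance a single pair of faders at mix time.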
It's been a fun tech as always here in St. Paul (though a good deal colder than when I was here in July...) and I hope I have another chance to return and design here! The people are great, the hall is beautiful, and it's a supportive team. What could be better?