Can we talk about Autotune… or rather, pitch manipulation?
I saw the new live-action Beauty and the Beast, and I absolutely loved it. Alan Menken is one of my top 5 favorite composers, and this particular story has always captivated me. (When I first got into musical theatre, I only really cared about the story as it served the music; the story as an excuse to have music.) I think that Disney did a great job bringing this cartoon and story to life on all fronts. Well, except one.
Flashback to 2004, when I was finally able to purchase Pro Tools (version 5!). I was so excited to finally be able to do multitrack recording and mixing! At last, I was on my way to becoming a platinum record producer. My singing would be heard on top 40 radio for decades to come. The cover song I chose to record (Prosthetic Head by Green Day) was going to be just as great as, if not better than, the original. That is, until I heard my clear and distinguished voice, unadulterated by other muddying tracks; incontrovertible proof that I was a terrible singer. Clenched, forced, throaty, with questionable pronunciation, and all topped off by the WORST pitch I had ever heard. I was crushed. Those particular dreams died that day, but I finished the cover song anyway, because I wasn’t abandoning my pursuits in audio just for being a bad singer. (13 years of practice later, and I’m not the worst singer anymore. I might even actually be in the top 40% of the population… but don’t quote me on that.)
By 2007, I was still an impressively bad singer. My dad bought me an Mbox 2 Mini audio interface for Christmas, which came with Pro Tools 7 and a limited license to Melodyne – in my view, the absolute best pitch manipulation software. I tried it on a couple of phrases of my singing, and WOW, it sounded like… is that actually my voice? Yeah, that’s totally me, and I actually sound… dare I say it… good!!!
Why does pitch correction make someone’s voice sound like they can sing? I spent weeks theorizing about it, and decided that the smoothed-out intonation and exact center pitches are so compelling in a “performance” that the brain allows them to overshadow bad singing technique; a cognitive order of importance, perhaps. It then occurred to me that though my quality of voice would never be amazing, if I were able to sing pitches perfectly, I would actually sound like the recordings I was manipulating – I still haven’t quite been able to accomplish that one. Suffice it to say that I’ve used Melodyne on every recording of my singing since then.
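For the curious: the “exact center pitch” part of what a hard corrector does can be sketched in a few lines of math. This is my own minimal illustration, not Melodyne’s actual algorithm (which also handles timing, formants, and smooth transitions); it just snaps a detected frequency to the nearest equal-tempered semitone, the way a corrector with its tolerance cranked to zero would:

```python
import math

A4 = 440.0  # reference tuning, Hz

def nearest_center_pitch(freq_hz):
    """Snap a detected frequency to the nearest equal-tempered
    semitone center (the 'exact center pitch' a hard corrector targets)."""
    midi = 69 + 12 * math.log2(freq_hz / A4)  # fractional MIDI note number
    snapped = round(midi)                      # nearest semitone
    return A4 * 2 ** ((snapped - 69) / 12)

def cents_off(freq_hz):
    """How far (in cents) the sung pitch was from the nearest center."""
    return 1200 * math.log2(freq_hz / nearest_center_pitch(freq_hz))

# A slightly flat A (432 Hz) snaps to 440 Hz, about 32 cents sharp of where it was sung:
print(nearest_center_pitch(432.0))
print(cents_off(432.0))
```

A real corrector applies that snap gradually (a retune-speed knob), which is exactly the control that, when sledgehammered, produces the flat-lined sound I describe below.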
Back to my one reservation about Beauty and the Beast: I think they butchered Emma Watson’s voice with a dull cleaver. I can hear the music supervisor yelling, “Keep smashing her voice until it’s good enough for a McDonald’s hamburger!” It must be the most un-artful representation of the human voice to emerge in the history of Walt Disney Studios.
Was that dramatic enough?
I won’t bother trying to describe what I hear because I know you’ll easily hear it too. Her voice stood out among the cast as the most processed and least natural. There was no consistency in processing style. I would have expected a universal approach. Everyone T-Pained? Sure, as long as everyone is on the same algorithm. No processing at all? Cool, but cast singers of complementary talent. Belle’s voice is a flat-line, while Plumette has a beautiful operatic voice with plenty of expressiveness and vibrato. The disparity is egregious! Listen to the Avenue Q soundtrack (no pitch correction) and then listen to Spring Awakening (judicious pitch correction); they are both very consistent vocal experiences.
My feeling on pitch correction is that in an ideal world, it wouldn’t be used. (Newsflash! My singing is NOT ideal.) Pitch correction and all the other vocal manipulations can help to create a larger-than-life experience, which can be a significant part of storytelling. I’m okay with that. But I think that the art of music is more important than the product of music: to admire what is, above what it isn’t. I don’t think Belle’s “perfect” singing helped to tell this story. Disney has near-limitless resources: vocal coaches, studio time, incredible sound engineers, and more. So why did they choose for the story’s hero to sound more like an automaton than the deeply emotional character that she is? Manipulating pitch can be done artfully, and was for most of the other characters, so it was an artistic choice to sledgehammer the controls on our hero. I wanted to hear her sing, and I don’t feel like that’s what I got.
Emma Watson might not be a well-trained singer. I did hear a couple of strange-sounding resonant placements that could indicate less experience than some of her cast members, but I thought her vocal quality was very sweet and quite fitting for the role. I disagree that recklessly smashing her voice was necessary in order to achieve a Disney-quality product.
Even in a circumstance with very limited resources – let’s say, a first-time self-recording by a highly unskilled singer, without any coaching or a history of singing – a more natural outcome is possible.
So, I pulled out my archives and opened up that first-ever Pro Tools recording. Yep, it was bad. I Melodyned (past participle of “to Melodyne”) those original 2004 vocals as naturally as I could while still making them sound acceptable, pitch-wise. This is really bad singing, compounded by a lack of resources, that still sounds more natural than what Disney did with Emma. My point? This track is ultimately still bad, but she could absolutely have sounded more natural, more expressive; more human.
Final thought: I loved Disney’s new Beauty and the Beast… I wouldn’t be so frustrated about this if I didn’t.
Here’s the original 2004 recording, with some 2017 mixing to clear up the parts, with original heart-crushing vocals intact:
And here’s the exact same mix, but with Melodyned vocals:
And since this is my first post with audio, I’d rather not leave you with the worst of my portfolio. So here’s one of the better examples of my audio-musical progress over the last 13 years: a cover of “Genghis Khan” by the band Miike Snow: