For a documentary produced in another module, I took the job of sound editor/mixer and part-time sound recordist (I operated camera on the first interview, so couldn't record sound for it, and I wasn't able to attend the second interview, but I did record atmoses and music).
This was a relatively straightforward process, as the documentary was constructed purely of interviews and little space was left in the edit without speech. My job was mainly to tidy up the interview recordings, balance the levels and make the two interviews sound as consistent as I could - the first was recorded with a tie-clip microphone and the second with a shotgun, and each was recorded by a different person.
This meant I had a lot of work to do in automating the levels of each track containing audio that would end up in the final mix. This mostly meant crossfading between the two interviews and the atmos tracks I "attached" to each.
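The crossfading itself is simple enough to sketch in code. Below is a minimal equal-power crossfade between two tracks - an illustration of the general technique rather than what Soundtrack Pro does internally, and using constant stand-in signals rather than real atmos recordings:

```python
import numpy as np

def equal_power_crossfade(a, b, fade_len):
    """Crossfade the tail of track a into the head of track b.

    Cosine/sine fade curves keep the combined power roughly constant
    through the overlap, avoiding an audible dip in the middle.
    """
    t = np.linspace(0, np.pi / 2, fade_len)
    fade_out, fade_in = np.cos(t), np.sin(t)
    mixed = a[-fade_len:] * fade_out + b[:fade_len] * fade_in
    return np.concatenate([a[:-fade_len], mixed, b[fade_len:]])

# Stand-ins for two atmos beds, 1000 samples each at a constant level.
a = np.full(1000, 0.5)
b = np.full(1000, 0.5)
out = equal_power_crossfade(a, b, 200)   # 200-sample overlap
```

The same shape of curve is what fader automation draws when you crossfade two tracks by hand: one fades down as the other fades up, overlapping in the middle.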
I was asked by my director to either find or write and record some music that would be appropriate for the film. I began tweaking a piece of music I wrote some time ago, but was told it sounded "too sci-fi" as it was synthesised and the director wanted music that would fit the picture more naturally. In the end - as I was asked to do this whilst in university on the Friday afternoon (and the deadline was Monday morning), and I also had to sit with the editor to give a second opinion when needed - I didn't have time to write or to record new music that fit the requirements I was given, so instead I re-used a piece of music I recorded for a project I was involved in over Summer (embedded below).
I had recorded the music on an acoustic guitar and then recorded a counter-melody on an unplugged electric guitar, which resulted in a "tinny"-sounding effect. This presented a problem for the documentary, as the music was most resonant around the 900Hz-6kHz frequency range, which is shared by the human voice, meaning the music clashed with the dialogue.
I was also asked to make the music sound as though it could be a busker in the background of the recordings, which led to my decision to process the music as such:
...and to keep the music at a very low volume in the mix.
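The screenshot of my processing chain isn't reproduced here, but the general idea - dulling the high end and dropping the level so the music reads as distant - can be sketched roughly. This is an illustrative one-pole low-pass filter and gain drop, not the exact plug-in settings I used:

```python
import numpy as np

def buskerise(samples, sr=44100, cutoff_hz=3000.0, gain_db=-18.0):
    """Rough 'distant busker' treatment: dull the highs, drop the level."""
    # Simple one-pole low-pass: distance and walls dull high frequencies first.
    rc = 1.0 / (2 * np.pi * cutoff_hz)
    alpha = (1.0 / sr) / (rc + 1.0 / sr)
    out = np.empty_like(samples)
    y = 0.0
    for i, x in enumerate(samples):
        y += alpha * (x - y)
        out[i] = y
    # Sit the music well below the dialogue in the mix.
    return out * 10 ** (gain_db / 20)

sr = 44100
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 1000 * t)   # 1 kHz test tone, one second long
quiet = buskerise(tone, sr)
```

In practice you'd add some reverb as well, since a busker heard from across a street arrives mostly as reflected sound.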
The last piece of interview sound used in the documentary was of a homeless man telling the story of his attempted suicide. I felt the sound here needed to reflect the delicate subject matter, so I decided to fade out any unnecessary sound (atmos tracks and music) and leave only the interview recording playing. For this I took inspiration from a scene in David Lynch's 'Lost Highway', in which all diegetic sound is either reduced or removed completely to pull the viewer's attention to a conversation between two men. Doing so adds an uneasy level of intimacy to a film, and in the case of my documentary, hopefully had the effect of drawing the audience's attention subtly but very firmly towards the seriousness of what's being said.
This then provided a nice transition to the last thing we see on-screen in the film:
To accompany this and hopefully end the film on a poignant note, I reintroduced an atmos track of city traffic and faded in the ending bars of the music, letting the final chord fade out as the film ends.
The scene from 'Lost Highway' which inspired my decision:
Thursday, 20 December 2012
Wednesday, 12 December 2012
Why hum on your recordings can be good
Now, to ease me into discussing more technical things on this blog...
I read an article on the BBC website today that talks about a good use for the hum you get on your recordings, which I found interesting (although from a sound recording perspective, I still don't like that bloody hum).
Mains hum is a frequency that audio equipment picks up for a variety of reasons. In most of the world (the blue-coloured countries on the map below), the hum is emitted at 50Hz.
When recordings are played back as evidence in court hearings, people might claim that a recording has been edited to change the meaning of what was said. Thanks to the hum, a technique called Electric Network Frequency (ENF) analysis can now be used to determine whether a recording has been altered: because the fluctuations in frequency are unique to each day, the hum extracted from a recording can be compared against the reference recordings of the hum that the Metropolitan Police have been capturing for the last seven years. If the two match, the recording hasn't been tampered with; if they don't, that points to manipulation, and the recording can't be used as evidence in court.
However, a quote from Dr. Alan Cooper in the BBC article explains that "because the supply and demand [of electricity] is unpredictable", each day there are different fluctuations in the amount of energy distributed over the National Grid, which cause small fluctuations in the tone of the hum. Because there's one National Grid supplying electricity to the entire nation, the hum throughout Britain is exactly the same frequency at the same time, wherever you are.
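The matching principle can be sketched roughly in code: estimate the hum frequency in each short window of the recording, then see whether it wanders the same way as the reference log for that day. This is only an illustration - the drift here is exaggerated, and the real forensic method is far more sophisticated:

```python
import numpy as np

def hum_track(signal, sr, win_s=1.0, lo=49.0, hi=51.0, pad=8):
    """Estimate the mains-hum frequency in each one-second window.

    Zero-padding the FFT (pad x longer) gives a finer frequency
    read-out than the 1 Hz resolution of a plain one-second window.
    """
    n = int(sr * win_s)
    n_fft = n * pad
    freqs = np.fft.rfftfreq(n_fft, 1 / sr)
    band = (freqs >= lo) & (freqs <= hi)   # only look near 50 Hz
    track = []
    for start in range(0, len(signal) - n + 1, n):
        spectrum = np.abs(np.fft.rfft(signal[start:start + n], n_fft))
        track.append(freqs[band][np.argmax(spectrum[band])])
    return np.array(track)

# Synthetic example: the 'evidence' recording's hum should wander the
# same way as the reference log captured on the same day.
sr = 8000
reference = 50.0 + 0.4 * np.sin(np.linspace(0, 3, 10))  # per-second hum log
samples = np.concatenate(
    [np.sin(2 * np.pi * f * np.arange(sr) / sr) for f in reference]
)
measured = hum_track(samples, sr)
same_day = np.corrcoef(measured, reference)[0, 1] > 0.9  # crude match check
```

An edited recording would show the measured track jumping or repeating in a way the continuous reference log never does.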
Tuesday, 11 December 2012
File management
One issue I faced in the last project was that I often imported a file into Soundtrack Pro to test a sound out and then, without copying it into the right folder on my hard drive or naming it appropriately, used it in the final mix.
This wasted mixing time relocating files and made presenting via the network messy.
For this project, I have decided to adopt the 'category-based file name' model of media management that Ric Viers suggests in his book 'The Sound Effects Bible'.
An example of how I named files at my most organised point in the last project would be: "paper rustle 1", followed by "paper rustle 2" and so on up to "paper rustle 5". These files were in a folder I'd designated for foley recordings, while atmoses were kept in a separate folder.
Having files in different places risks more issues as there's no single place to tell the software to look if the sounds need relocating, as they did when I copied my project across to the network to present my work.
However, with the category-based system, I would save each file into the same folder and structure its name like this:
FOL_PAPER_RUSTLING_01
The "FOL" prefix will replace the dedicated folder for foley recordings, and different types of sound will be distinguished by their category prefixes. So "corridor atmos" would become:
AMB_CORRIDOR_AIR-CON
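A small helper could enforce the pattern automatically. This is my own sketch, not anything from Viers' book - and the "DIA" and "MUS" categories are guesses at how the scheme might extend; only "FOL" and "AMB" come from the examples above:

```python
# Hypothetical helper for the category-prefix naming scheme described above.
CATEGORIES = {"foley": "FOL", "ambience": "AMB", "dialogue": "DIA", "music": "MUS"}

def library_name(category, description, take=1):
    """Build a flat, sortable file name like FOL_PAPER_RUSTLING_01."""
    prefix = CATEGORIES[category]
    desc = "_".join(description.upper().split())   # spaces become underscores
    return f"{prefix}_{desc}_{take:02d}"

print(library_name("foley", "paper rustling"))        # FOL_PAPER_RUSTLING_01
print(library_name("ambience", "corridor air-con"))   # AMB_CORRIDOR_AIR-CON_01
```

Because every name starts with its category and ends with a zero-padded take number, a single folder sorts itself: all foley together, all ambiences together, takes in order.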
Hopefully this will help me be more organised and save me time in my current project, and will also mean the recordings will fit nicely into any larger sound library I might put together in the future.
I'm also planning to give myself a more strict colour-code to follow when editing and mixing the sound. This will help me group different types of sound as the mixer becomes filled with more tracks, making it easier to find any particular track when I want to.
Reading about actual sound design for No Country For Old Men
For Exercise 1, I chose a film I hadn't seen before because I wanted to approach the sound design with an open mind, rather than either imitating or trying hard not to imitate the original sound design for the film. Having finished the exercise now, I've decided to look into the actual sound design used in the Coen Brothers film - though it's worth noting I still haven't seen the film (it's on my Christmas list).
In an article for the New York Times, Dennis Lim says of the film: "It is not a popcorn movie. Which is to say, it is especially ill-suited to the crunching of snacks or the crinkling of wrappers or any of the usual forms of movie-theater noise pollution". He goes on to say "In some of the most gripping sequences what you hear mostly is a suffocating silence. By compelling audiences to listen more closely, this unnervingly quiet movie has had the effect of calling attention to an underappreciated aspect of filmmaking: the use of sound".
From this extract, I can tell that I at least took a similar approach in my own sound design for the clip - a minimalistic one.
A passage from the same New York Times article describes the clip I worked on for Exercise 1:
"There is at least one sequence in “No Country for Old Men” that could be termed Hitchcockian in its virtuosic deployment of sound. Holed up in a hotel room, Mr. Brolin’s character awaits the arrival of his pursuer, Chigurh. He hears a distant noise (meant to be the scrape of a chair, Mr. Berkey said). He calls the lobby. The rings are audible through the handset and, faintly, from downstairs. No one answers. Footsteps pad down the hall. The beeps of Chigurh’s tracking device increase in frequency. Then there is a series of soft squeaks — only when the sliver of light under the door vanishes is it clear that a light bulb has been carefully unscrewed."
This shows how sound is used to tell the story in the film (and how it can be employed to tell story in film in general): the "distant noise" prompts a response from "Mr. Brolin's character", yet we never see the chair being scraped or who is responsible for the footsteps. The sounds explain exactly what is happening, but because we are only experiencing half of it - hearing without seeing, and so never seeing who is lurking around outside - the tension builds much more effectively.
Often I feel that music is misused in film and television. Some films wouldn't work well without music - Michel Gondry's Eternal Sunshine Of The Spotless Mind wouldn't have the same tone or emotion to it; The Lord Of The Rings wouldn't have the same 'hugeness' to it - and others wouldn't work with it - Michael Haneke's Funny Games only uses music to juxtapose different types of characters, and otherwise plays on silence to build an atmosphere.
Burwell clearly took the latter approach. On his blog he writes:
"The film is the quietest I've worked on... It was unclear for a while what kind of score could possibly accompany this film without intruding on this raw quiet. I spoke with the Coens about either an all-percussion score or a melange of sustained tones which would blend in with the sound effects - seemingly emanating from the landscape. We went with the tones."
The Randy Thom-Forrest Gump video on Youtube
In this video on Youtube, Randy Thom (sound designer and re-recording mixer for Forrest Gump) talks about the process of creating the sound for the film's ping-pong sequence.
According to Thom, "all of the ping-pong sounds that you hear in the movie were done in post-production" because "none of the sounds that were recorded when the camera was rolling were useable".
As simple as this might sound, each of the ping-pong sounds was recorded individually, and the sound recordists "used various kinds of paddles", hitting the balls "off various kinds of surfaces".
When recording foley for Sound Exercise 1, I took care to make sure each sound was recorded on the same surface as seen on screen. However, Randy Thom and the sound recording team for Forrest Gump took the mindset that "we're going for what the best sound is, not necessarily literally what you're seeing on the screen" - yet all the sounds in the sequence sound appropriate for the surfaces we're seeing on-screen. This is a good reminder of how flexible sound recording can be.
Another technique I've been trying to employ for the current sound exercise has been to record sounds in sync with the picture, rather than recording sounds separately and creating sync in editing software. This is largely to save time in the edit, but also because I have a tendency to record too few sound effects and have to repeat sounds in the edit.
This wasn't the method used on Forrest Gump: because each hit was recorded separately, Thom says, "editors worked through the night to cut each one of those hits in sync", which "was a huge job".
The Youtube video:
Thursday, 6 December 2012
Lord Of The Rings' sound design
There are a series of videos on Youtube (embedded below) based around the sound design for the first two films of the Lord Of The Rings film trilogy.
The studio used to record foley and edit/mix the sound was purpose-designed for the sound team to have access to the best and most appropriate equipment available. This was done due to the size of the job of designing sound for the films - the soundtrack is responsible for making the invented world of Middle Earth seem real, but, unlike Star Wars, the sounds all have to be rooted in nature, as Middle Earth is based on Earth centuries before industry and the technology it brought about.
According to the interviews conducted with the sound team, almost every sound in the film was recorded in post - even the dialogue was almost entirely ("about 98%") recorded in ADR. This was because the studio used for filming was in close proximity to the main airport in Wellington, meaning the sound was constantly interrupted by air traffic.
Although Middle Earth is based on Earth, it has many mythical elements and many invented creatures. An example of an invented creature is the Cave Troll seen in The Fellowship Of The Ring. The creature had to sound big, strong and frightening, and in the end its voice (which we hear through growls and roars) was created by processing recordings of real animals' growls and roars. To add authenticity to the sound of the cave troll, the technique of 'worldising' was used - the recordings of its growls were played back on speakers set up in tunnels on the outskirts of Wellington, and recorded "in this natural environment that is similar to the environment he [the Cave Troll] is in in the movie".
A less realistic creature in The Fellowship Of The Ring is introduced shortly after the Cave Troll is slain. When considering the sound design for the Balrog, Peter Jackson told the sound team "This is not a physical creature. It's basically shadow and flame." This meant that the sound designers had to decide how a giant creature made of 'shadow and flame' might sound, and so to match the visual representation they decided to give it a "natural, organic, rocky feel". The growl of the Balrog was created by processing the recording of a breeze block being scraped across a wooden panel, as the sound team felt it should sound rocky and lava-like.
A new and different type of creature was introduced in The Two Towers in Treebeard, essentially a talking, moving tree. John Rhys-Davies (who also plays Gimli) was chosen to voice Treebeard, but his voice-acting talent alone obviously couldn't make him sound tree-like. A specially-designed wooden cabinet was therefore built with many ways for the sound to resonate; his voice was played through speakers inside the cabinet and re-recorded, producing a large, resonant, wooden quality, which (judging from the way it sounds in the film) was then further processed digitally.
The Two Towers introduced scenarios even more difficult than the invented characters. The sound team were faced with the challenge of creating the sound of a ten-thousand-strong Uruk-Hai army chanting and marching. One way to create these sounds would be to record a large group of people performing the sounds and multiply the recordings in the mix, but the sound designers went for a more authentic sound. During a break in a cricket match in New Zealand, Peter Jackson and the sound recordists went onto the pitch and recorded the crowd chanting phrases in Black Speech (the language spoken in Mordor, home to the 'baddies' of Middle Earth). The sound of marching was created by David Farmer, the sound designer for the film trilogy, who layered "tracks of volcano rumbles and things like this" and created the rhythm of the army marching by performing volume swells with the faders on the mixing desk. This created a good weight to be enhanced with foley sounds of metal armour clanking and feet stamping on soil.
The five videos I've taken this information from:
*** This video has embedding disabled, but the link is: http://youtu.be/RMNwotOm27g
Wednesday, 5 December 2012
Reflecting on Exercise 1
I learned a few things from presenting my work on Thursday. Having the opportunity to watch other people's projects and see how they compared with my own, it was very clear to me that I didn't mix loudly enough (although the air-con in the edit suite didn't help much).
This came up in the feedback I was given, and I was given a guide for levels to aim for when mixing for film:
- Dialogue should average somewhere between -20dB and -12dB (giving a lot of dynamic range, since voice is a very dynamic element)
- Atmoses, etc., should average a level below -20dB
- Anything below -30dB is generally too quiet, as it won't be heard during playback
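These targets can be checked against a rough RMS meter. A minimal sketch, assuming floating-point samples in the range -1 to 1 so the figures come out in dBFS (this is my own illustration of the arithmetic, not how any particular DAW meters its tracks):

```python
import numpy as np

def average_dbfs(samples):
    """Average level of a track in dBFS, from its RMS amplitude."""
    rms = np.sqrt(np.mean(np.square(samples)))
    return 20 * np.log10(rms) if rms > 0 else float("-inf")

# A 220 Hz tone scaled so its RMS sits at -16 dBFS: a sine wave's RMS is
# its peak divided by sqrt(2), so we multiply the target amplitude back up.
t = np.arange(48000) / 48000
dialogue = 10 ** (-16 / 20) * np.sqrt(2) * np.sin(2 * np.pi * 220 * t)

level = average_dbfs(dialogue)
dialogue_ok = -20 <= level <= -12   # inside the dialogue band above
```

Real dialogue is far more dynamic than a steady tone, which is why the guide is an average to aim for rather than a level to pin every syllable to.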
Ron also explained how to 'calibrate' a mixer and recorder set-up. When using the Wendt mixer with a Marantz recorder, the '0dB' reference tone from the Wendt should measure at -18dB on the Marantz's meters. This is because of the difference between digital and analogue meters, which is a topic I plan to research in more depth.
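In practice, that alignment just means readings on the two meters differ by a fixed 18dB offset. A trivial sketch of the conversion - the function names are mine, and this assumes the -18dBFS alignment described above:

```python
# Assumed alignment: a 0 dB tone on the analogue (Wendt) meter
# reads -18 dBFS on the digital (Marantz) meter.
ALIGNMENT_OFFSET_DB = -18.0

def analogue_to_dbfs(analogue_reading):
    """What the digital meter should show for a given analogue reading."""
    return analogue_reading + ALIGNMENT_OFFSET_DB

def dbfs_to_analogue(dbfs_reading):
    """The analogue reading implied by a digital meter value."""
    return dbfs_reading - ALIGNMENT_OFFSET_DB

print(analogue_to_dbfs(0.0))   # -18.0, the reference tone
print(analogue_to_dbfs(6.0))   # -12.0, six dB above reference
```

The headroom above the analogue 0dB mark is exactly why the offset exists: the digital meter's 0dBFS is an absolute ceiling, so the reference tone has to sit well below it.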
The feedback given was helpful in giving an idea both of what worked and what else I could have done for this project. For example, Ron noted that there was an opportunity, and perhaps a need, to use a musical element at the point in the clip where the camera zooms into a close-up on the character's face, as this is a turning point within the context of the scene and film.
Aside from the level issue, the feedback told me the mix itself was well-balanced, and Neil said that sounds such as footsteps and props being placed down sounded as though they were recorded on the same types of surfaces as we see in the picture. I recorded things such as footsteps and the case being put down on the floor on both carpet and wood surfaces to make sure I had the right sounds in the edit.
Overall, I have taken into account the feedback I was given, and also feel that I could have included more detail in the soundtrack - in particular, I think I should have recorded more for the 'cloth' track, as this is an important and often overlooked detail of foley work. In my research, I've learned how important professional sound recordists consider this to be.