Friday 21 December 2012

Final Chapter: Evaluation

Since I became interested in film-making, I’ve had a particular interest in sound for film. This is because I had played music for some years and in recent years have become increasingly interested in the more tech-based recording and mixing side of music-making. I have always enjoyed “getting lost” in editing and mixing recorded and synthesised music, because of the freedom to be creative and the fact that there is no “right” or “wrong”.
    Despite this, before this module I had very little experience and no confidence in recording audio. The lack of confidence was probably a consequence of the lack of experience. As a beginner, being aware of the various issues sound recordists face - unwanted noise and noisy environments, the art of microphone placement, mic/pre-amp noise, etc. - can give the impression that sound recording is a very scientific, precise and ultimately difficult thing to do.

Over this module, my impressions of sound recording have changed as I’ve learned more and gained more experience. I have learned that a certain joy can be had from experimenting with different recording techniques and equipment, and that the best results are often the most unexpected ones, tending to occur the more creative you get in your recording - and the more fun you have while doing it.
    I learned this during a workshop early on in the module, in which we were given a selection of microphones and told to choose two different types and spend a set amount of time collecting sounds with our choices. I chose the Rode NT4 and a pair of Aquarian Audio hydrophone mics. I decided to attach the two hydrophones to the sides of an escalator using the rubber contact-mic converters. This produced an interesting low-frequency, rhythmic, pulsating sound. It made me realise that these contact mics can capture sounds that our ears would never pick up, because they sense vibration in a different way to human hearing. However, I was brought back to the real world when a member of staff told me I had to stop because, understandably, the microphone cable trailing across the top of an escalator was a health-and-safety concern.
    I came to appreciate real-world recording with the Rode NT4 shortly after, when collecting atmos recordings for my documentary module. This required me to find somewhat remote locations and record, which I did at night-time to minimise disruptions. I have found that atmos-track recording can be quite relaxing, as it forces you to pay attention to self-made sounds and the sounds around you - something people often forget due to the distracting nature of many modern technologies, such as MP3 players.

I chose the Coen Brothers’ ‘No Country For Old Men’ for the sound-to-picture Exercise 1, because it appeared to be the most straightforward and the least “daunting” choice for my first attempt at a sound-to-picture exercise. I feel that this was both a good choice and a bad one. I gained experience and confidence from recording atmoses and foley sounds for the piece, and gave consideration when preparing the atmos recordings - to create the sound of an American hotel room, I recorded the sound of an American sitcom playing in a different room to the microphone, using the technique of worldising, which is the process of ‘playing back existing recordings through a speaker or speakers in real-world acoustic situations, and recording that playback with microphones so that the new recording takes on the acoustic characteristics of the place it was “re-recorded”’ (FilmSound.org). However, due to the minimalistic nature of the film in relation to its sound requirements, I feel I didn’t push myself very much to develop wider sound design skills such as creating and making use of designed sound or music/musical elements.
    Nevertheless, I learned from this exercise and from the feedback given both by other students and by my lecturers. I had mixed most of the sound in the Pro Tools studio on fairly loud speakers. This led to the levels in my mix peaking well below -20dB, which meant the playback was very quiet when I presented the work (although this at least shows that I was mixing with my ears according to what the speakers were feeding me, rather than with my eyes fixed on Soundtrack Pro’s VU meters). From the feedback, I have learned the correct levels at which to mix sound for film: between -20dB and -12dB for dialogue, and below -20dB for atmoses and ambient sounds.
    I also learned from the feedback I was given that most of my sounds underwent very little processing to enhance the tone of the sound design. This is largely because of the minimalistic approach I took, and while this was deliberate, it probably didn’t enhance my skills as much as it would have had I been more creative in my approach. After completing Exercise 1, I watched the scene from ‘No Country For Old Men’ online and was able to reflect on the storytelling-related details in the original sound - such as the sound of someone offscreen unscrewing a lightbulb, which in the film is a significant, but subtle, touch. However, being unfamiliar with the film, I wasn’t able or required to have this level of input in telling the story of the scene. Overall, this exercise helped me improve my skills and develop a better and deeper understanding of the processes that go into film sound.

After lots of consideration, taking into account what I had learned from Exercise 1, I chose the clip from Jean-Pierre Jeunet and Marc Caro’s ‘Delicatessen’ for my Major Project. As well as what I’d learned from Exercise 1, I was inspired to make this choice from a workshop during the module in which we collectively went over each of the choices, comparing the sound requirements of each, and also by the strong and moody visual style of the clip. I felt that this clip would give me an opportunity to combine the foley techniques I developed earlier in the module with new approaches to sound, as the opening sequence clearly required a strong designed soundtrack to match the picture.
    To match the post-apocalyptic wasteland seen in the opening sequence and its strongly orange-coloured visuals, I used a sound I had originally recorded for, but ended up not using in, a poetic documentary made for another module. The sound came from recordings I made while holding a pair of tie-clip microphones down roadside grates on a windy night. The wind resonated and reverberated within the drains, and the recordings alone produced a powerful low-frequency “rumble”, which I later processed in sound editing software - a process I outlined on my blog for this module. Taking feedback from Exercise 1 into account, I processed this “rumble” to sound something like wind: I boosted a particular frequency using Soundtrack Pro’s channel EQ and used automation to raise the gain of that frequency over time, and also to change the frequency itself to give a feeling of movement within the sound (evidenced in the following screenshot).
    In order to set the tone of the clip immediately, I combined this wind sound with a series of chords played on a synthesised organ in the music software Reason. Having only a basic understanding of chord structure, I chose to progress from a minor-sounding chord to a major chord and then to a dissonant-sounding chord, in part to create a sense of drama: major chords tend to sound “happy”, which is then contrasted by the “scary-sounding” dissonant chord, shortly followed by the sound of a knife being sharpened. The “cheesiness” of this music was also intended to add a slightly humorous element to the overall mood, matching the dark humour seen in the clip.
    The sound of the knife being sharpened precedes the introduction of the Chef, or even the kitchen setting, in order to add a sense of unease: hearing before seeing typically evokes curiosity in the audience, which in this case is intended to make them question why a knife is being sharpened in that desolate landscape. Again, learning from feedback from Exercise 1, the knife sound is “drenched” in reverb, largely to add the sense of horror that comes from exaggerated and somewhat unrealistic sounds.
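As a rough sketch of what that automation is doing (this is a stand-in for illustration only, not the actual Soundtrack Pro processing - the filter design, frequencies and gain values here are all my own placeholders):

```python
import math
import random

SR = 48000

def sweep_boost(x, f_start, f_end, boost_db_start, boost_db_end, r=0.95):
    """Rough stand-in for EQ automation: a two-pole resonator whose centre
    frequency and boost amount are interpolated across the clip, so one
    band of the 'rumble' swells and moves over time."""
    y1 = y2 = 0.0
    out = []
    n_total = len(x)
    for n, s in enumerate(x):
        t = n / n_total                                   # 0 -> 1 across the clip
        freq = f_start + (f_end - f_start) * t            # automated frequency
        boost = 10 ** ((boost_db_start + (boost_db_end - boost_db_start) * t) / 20)
        w = 2 * math.pi * freq / SR
        band = s + 2 * r * math.cos(w) * y1 - r * r * y2  # resonator state update
        y2, y1 = y1, band
        out.append(s + boost * band * (1 - r))            # dry + automated band
    return out

# White noise as a stand-in for the drain 'rumble'; the boosted band sweeps
# from 80 Hz to 200 Hz while its gain rises from 0 dB to 12 dB.
random.seed(1)
rumble = [random.uniform(-0.1, 0.1) for _ in range(SR)]
wind = sweep_boost(rumble, 80.0, 200.0, 0.0, 12.0)
```

The moving resonant band is what gives the static noise a sense of motion, much like the automated EQ sweep did for the real recording.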
    In contrast with Exercise 1, for which I chose to explore the sound from the main character’s point-of-view, I decided to follow the camera’s point-of-view for this task, and so added details such as being able to hear the Chef breathing heavily as the camera passes over his shoulder.
    Because the knife sound has to be heard when we enter the (for lack of knowing the character’s real name) Victim’s bedroom, the reverb presented a potential problem as the camera moves through the pipes, as this would typically be where reverb naturally begins to overwhelm the original sound. To compensate, I added a stereo delay effect and automated the mix balance (as evidenced in the screenshot to the left), adding more delay the further the camera progresses through the pipes. This also served to speed up the pace at which the knife appears to be sharpened: when, later in the scene, there is a cut back to the kitchen, the Chef is sharpening much more quickly than when we last saw him. The feedback given when the work was presented suggested that the reverb should have been introduced as the camera approaches the pipes, but I feel the reverb is effective in keeping the overall tone of the opening sequence consistent with the intended emotional response, and that the delay is effective in adding the sense of hearing the sound from inside the pipes.
    I also experimented when creating the sound of the bin lorry. This was composed of three different recordings, each EQed to fit together without “clashing” through overlap in sensitive frequency ranges. The first sound introduced is taken from a recording made using two contact mics on the sides of an escalator, intended to act as the low-frequency engine sound of the lorry. However, because of the rhythm the escalator gave to the recording, it also worked well as a musical element, helping the sequence (beginning in the bedroom and ending when the Victim is shown to be hiding in the bin) flow, and adding a sense of suspense to the soundtrack.
    I then added recordings of boiler rooms, collected when I was allowed access to plant rooms inside Sheffield Children’s Hospital, to serve both as a piercing high-frequency sound and as a mid-range frequency used to convey the movement of the lorry, which cuts out when the driver brakes. The high-frequency sound is introduced before the scene moves outside, largely because, when mixing, I realised I hadn’t recorded any sound to use when the Chef runs his fingers along the knife. However, I feel this worked in my favour, as I was able to use this sound both to represent that moment and to introduce the bin lorry before it appears on screen. Again, my approach to creating the sound of the bin lorry was centred on creating something that sounded unreal and, hopefully, scary, rather than using the sound of a real engine, as I felt this would be too mundane and wouldn’t keep the soundtrack consistent.
    The last new sound effect (“new” meaning a sound that hadn’t already been introduced in the project) used for my Major Project was the “thud” as the Chef swings the knife towards the Victim, just as the title appears on-screen. This was created using a pitched-down recording of a bag of compost being dropped onto the floor of a shed. The sound alone, even when pitched down and EQed, didn’t have enough impact. To solve this, I applied a compressor to add “punch”. I began by setting the threshold low and using the attack and release controls to time the compression appropriately. Following a guide on setting up a compressor for a snare drum (Owsinski 2006, p.62), I shortened the attack until the sound began to dull, then eased the threshold back until the thud had the right level of impact.
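A stripped-down version of that threshold/attack/release interaction can be sketched in code. This is only an illustration of the compression behaviour described above, not the plug-in I actually used, and every parameter value is a placeholder:

```python
import math

def compress(x, sr, threshold_db=-30.0, ratio=4.0, attack_ms=5.0, release_ms=80.0):
    """Bare-bones peak compressor: an envelope follower with attack/release
    ballistics drives gain reduction on signal above the threshold."""
    atk = math.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = math.exp(-1.0 / (sr * release_ms / 1000.0))
    env = 0.0
    out = []
    for s in x:
        level = abs(s)
        coeff = atk if level > env else rel       # fast rise, slow fall
        env = coeff * env + (1 - coeff) * level   # smoothed signal level
        env_db = 20 * math.log10(max(env, 1e-9))
        over = env_db - threshold_db
        gain_db = -over * (1 - 1 / ratio) if over > 0 else 0.0  # 4:1 above threshold
        out.append(s * 10 ** (gain_db / 20))
    return out

# A dull 'thud' stand-in: a decaying 100 Hz burst. Shortening attack_ms clamps
# the transient (dulling it); raising threshold_db lets the initial punch back through.
thud = [0.9 * math.exp(-n / 2000.0) * math.sin(2 * math.pi * 100 * n / 48000)
        for n in range(24000)]
punchy = compress(thud, 48000)
```

The tuning process in the text maps directly onto the two parameters: `attack_ms` controls how much of the initial hit escapes reduction, and `threshold_db` sets how much of the sound the compressor grabs at all.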

Overall, I feel I have progressed a lot throughout this module and have used the skills taught to good effect on my Major Project.



References
- FilmSound.org. Worldizing: A Sound Design Concept Created By Walter Murch. [online] Available at: http://filmsound.org/terminology/worldizing.htm [Accessed 20 December 2012].

- Owsinski, B., 2006. The Mixing Engineer’s Handbook. 2nd ed. Boston: Thomson Course Technology.

Thursday 20 December 2012

Metering: Digital and Analogue

I noticed that the meters on two pieces of equipment I have used on this module are quite different to each other, and so I decided to research into metering to find out what the difference is, and what the need for the differences might be.

An article on the website Sound On Sound gives a good introduction to the two types of audio meters and what their differences are.

The simpler meter I have encountered is called a VU (Volume Unit) meter on the Wendt X3 mixer. The article explains that this was an early piece of metering technology, and the following quote has helped me understand how a VU meter works and why it might not be the most reliable way to check your levels when recording in the field:
"Because the VU meter measures 'average' levels, a sustained sound reads much higher than a brief percussive one, even when both sounds have the same maximum voltage level: the reading is dependent on both the amplitude and the duration of peaks in the signal. In addition, the standard VU response and fallback times (around 300 milliseconds each) exaggerate this effect, so transients and percussive sounds barely register at all and can cause unexpected overloads."
The other type of audio meter is that found on the Marantz PMD661 recorder. I've looked at the specs online but can't find what type of meter this recorder uses. However, the Sound On Sound article tells me that:
"The majority of digital recorders, mixers and converters therefore use true peak-reading meters whose displays are derived from the digital data stream. As these don't rely on analogue level-sensing electronics they can be extremely accurate."
In a sound & camera workshop during this module, Ron explained that the 0dB reference tone the Wendt mixer can send to a recorder should meter at around -18dB on the Marantz. This is because 0dB measures very differently on the Wendt's VU meter from how it does on the Marantz's "true peak-reading" meter. This is important to know because, as the article explains, with digital recorders such as the Marantz there is no headroom to rely on: if a sound peaks at or above 0dB for any length of time, digital clipping immediately distorts the signal. Unlike the various types of analogue distortion often said to add a pleasing character to music, such as valve overdrive or tape compression, digital clipping is a harsh-sounding distortion that is never desirable on a recording; it cannot be removed because it is, in a sense, the sound of missing digital data.
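The difference between the two meter types can be demonstrated with a small simulation. This is a crude approximation - a real VU meter's ballistics are defined electrically, not by a simple one-pole follower - but it shows why a short transient and a sustained tone with the same peak voltage read so differently:

```python
import math

SR = 48000

def true_peak_db(samples):
    """Digital peak meter: the highest absolute sample value, in dBFS."""
    return 20 * math.log10(max(abs(s) for s in samples))

def vu_style_db(samples, integration_ms=300.0):
    """Crude VU-style reading: a slow (~300 ms) averaging follower, so a
    short transient barely moves it even though its peak level is high."""
    coeff = math.exp(-1.0 / (SR * integration_ms / 1000.0))
    env = peak_reading = 0.0
    for s in samples:
        env = coeff * env + (1 - coeff) * abs(s)  # slow average of the level
        peak_reading = max(peak_reading, env)     # highest needle position
    return 20 * math.log10(max(peak_reading, 1e-9))

# Same maximum voltage: a sustained tone vs a 2 ms click, both peaking at 0.5.
tone = [0.5 * math.sin(2 * math.pi * 440 * n / SR) for n in range(SR)]
click = [0.5 if n < int(0.002 * SR) else 0.0 for n in range(SR)]
```

Both signals read roughly the same on the peak meter, but the click reads tens of dB lower on the averaging meter - which is exactly how a transient can clip a digital recorder while a VU needle barely moves.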

Editing and mixing sound for a documentary

For a documentary produced in another module, I took the job of sound editor/mixer and part-sound recordist (I operated camera on the first interview conducted, and so wasn't able to record sound, and I wasn't able to attend the second interview, but I did record atmoses and music).

This was a relatively straightforward process, as the documentary was constructed purely of interviews and little space was left in the edit without speech. My job was mainly to tidy up the interview recordings, balance the levels and do as good a job as I could of making the two interviews sound consistent (the first was recorded with a tie-clip microphone and the second with a shotgun, each by a different person).

This meant I had a lot of work to do in automating the levels of each track containing audio that would end up in the final mix. This mostly meant crossfading between the two interviews and the atmos tracks I "attached" to each.

I was asked by my director to either find or write and record some music that would be appropriate for the film. I began tweaking a piece of music I wrote some time ago, but was told it sounded "too sci-fi" as it was synthesised and the director wanted music that would fit the picture more naturally. In the end - as I was asked to do this whilst in university on the Friday afternoon (and the deadline was Monday morning), and I also had to sit with the editor to give a second opinion when needed - I didn't have time to write or to record new music that fit the requirements I was given, so instead I re-used a piece of music I recorded for a project I was involved in over Summer (embedded below).

I had recorded the music on an acoustic guitar and then recorded a counter-melody on an unplugged electric guitar, which resulted in a "tinny"-sounding effect. This presented a problem for the documentary, as the music was most resonant around the 900Hz-6kHz frequency range, which is shared by the human voice, meaning the music clashed with the dialogue.

I was also asked to make the music sound as though it could be a busker in the background of the recordings, which led to my decision to process the music as such:


...and to keep the music at a very low volume in the mix.

The last piece of interview sound used in the documentary was of a homeless man telling the story of his attempted suicide. For this part, I felt the sound needed to reflect the delicate subject matter, and so I decided to fade out any unnecessary sound (atmos tracks and music) and leave only the interview recording playing. For this I took inspiration from a scene in David Lynch's 'Lost Highway', in which all diegetic sound is either reduced or removed completely to pull the viewer's attention towards a conversation between two men. The effect adds an uneasy level of intimacy to a film, and in the case of my documentary, hopefully drew the audience's attention subtly but very firmly towards the seriousness of what's being said.

This then provided a nice transition to the last thing we see on-screen in the film:


To accompany this and hopefully end the film on a poignant note, I reintroduced an atmos track of city traffic and faded in the ending bars of the music, letting the final chord fade out as the film ends.


The scene from 'Lost Highway' which inspired my decision:

Wednesday 12 December 2012

Why hum on your recordings can be good

Now, to ease me into discussing more technical things on this blog...

I read an article on the BBC website today that talks about a good use for the hum you get on your recordings, which I found interesting (although from a sound recording perspective, I still don't like that bloody hum).

Mains hum is a frequency that audio equipment picks up for a variety of reasons. In most of the world (the blue-coloured countries on the map below), the hum is emitted at 50Hz.



However, a quote from Dr. Alan Cooper in the BBC article explains "because the supply and demand [of electricity] is unpredictable", each day there are different fluctuations in the amount of energy distributed over the National Grid which cause small fluctuations in the tone of the hum. Because there's one National Grid to supply electricity to the entire nation, the hum throughout Britain is exactly the same frequency at the same time, wherever you are.

When recordings are played back as evidence in court hearings, people might claim that the recording has been edited to change the meaning of what's been said. Thanks to the hum, a technique called Electric Network Frequency (ENF) analysis can now be used to determine whether or not recordings have been altered because, as the fluctuations in frequency are unique to each day, the hum extracted from the recordings can be compared to the recordings of the hum that the Metropolitan Police have been capturing for the last seven years. If the hum in both recordings matches, the recording hasn't been tampered with. If it doesn't match, it's evidence of manipulation and can't be used as evidence in court.
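Out of curiosity, the core idea can be sketched in code. This toy version estimates the hum frequency by counting zero crossings of an isolated, synthetic hum - real ENF analysis has to extract the hum from a full recording first, and uses far more robust frequency estimation, so treat every detail here as illustrative:

```python
import math

SR = 1000  # hum is low-frequency, so a low sample rate is fine for this sketch

def enf_track(samples, window_s=1.0):
    """Estimate the hum frequency in each window by counting upward zero
    crossings - workable here only because the input is an isolated hum."""
    win = int(SR * window_s)
    track = []
    for start in range(0, len(samples) - win + 1, win):
        chunk = samples[start:start + win]
        crossings = sum(1 for a, b in zip(chunk, chunk[1:]) if a < 0.0 <= b)
        track.append(crossings / window_s)  # cycles per second ~= frequency
    return track

def matches(track_a, track_b, tol_hz=0.5):
    """Compare a recording's hum track against a reference archive track."""
    return all(abs(a - b) <= tol_hz for a, b in zip(track_a, track_b))

# A synthetic mains hum drifting slightly above 50 Hz over ten seconds.
hum = [math.sin(2 * math.pi * (50.0 + 0.02 * (n / SR)) * (n / SR))
       for n in range(10 * SR)]
```

The per-window frequency estimates form the "fingerprint" that gets compared against the archived hum: an edit splices in hum from a different day, so the fluctuation pattern stops lining up.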

Tuesday 11 December 2012

File management

One issue I faced in the last project was that I often imported a file into Soundtrack Pro to test a sound out and then, without copying it into the right folder on my hard drive or naming it appropriately, used it in the final mix.

This meant I wasted some mixing time relocating files and meant presenting via the network was messy.

For this project, I have decided to adopt the 'category-based file name' model of media management that Ric Viers suggests in his book 'The Sound Effects Bible'.

An example of how I named files at my most organised point in the last project would be: "paper rustle 1", which was followed by "paper rustle 2" all the way up to "paper rustle 5". These files were in a folder I'd designated to foley recordings, and atmoses I kept in a separate folder.

Having files in different places risks more issues as there's no single place to tell the software to look if the sounds need relocating, as they did when I copied my project across to the network to present my work.

However, with the category-based system, I would save each file into the same folder and structure its name like this:
FOL_PAPER_RUSTLING_01
The "FOL" prefix replaces the dedicated folder for foley recordings, and different types of sound are differentiated by category. So "corridor atmos" would become:
AMB_CORRIDOR_AIR-CON
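The naming convention itself is simple enough to automate. A minimal sketch (the FOL and AMB codes are from the scheme above; any other category codes here are my own invention):

```python
import re

# Category prefixes: "foley" and "atmos" follow the scheme described above;
# "design" is a hypothetical extra category for designed sounds.
CATEGORY = {"foley": "FOL", "atmos": "AMB", "design": "DSN"}

def library_name(category, description, take):
    """Build a category-based file name like FOL_PAPER_RUSTLING_01."""
    # Collapse spaces and punctuation (keeping hyphens) into underscores.
    desc = re.sub(r"[^A-Za-z0-9-]+", "_", description.strip()).upper().strip("_")
    return f"{CATEGORY[category]}_{desc}_{take:02d}"

# Converting old-style names from the last project:
name_a = library_name("foley", "paper rustling", 1)    # -> FOL_PAPER_RUSTLING_01
name_b = library_name("atmos", "corridor air-con", 3)  # -> AMB_CORRIDOR_AIR-CON_03
```

Batch-renaming existing recordings this way would also make them sort cleanly by category and take number in any file browser.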

Hopefully this will help me be more organised and save me time in my current project, and will also mean the recordings will fit nicely into any larger sound library I might put together in the future.

I'm also planning to give myself a more strict colour-code to follow when editing and mixing the sound. This will help me group different types of sound as the mixer becomes filled with more tracks, making it easier to find any particular track when I want to.

Reading about actual sound design for No Country For Old Men

For Exercise 1, I chose a film I hadn't seen before because I wanted to approach the sound design with an open mind, rather than either imitating or trying hard not to imitate the original sound design for the film. Having finished the exercise now, I've decided to look into the actual sound design used in the Coen Brothers film - though it's worth noting I still haven't seen the film (it's on my Christmas list).

In an article for the New York Times, Dennis Lim says of the film: "It is not a popcorn movie. Which is to say, it is especially ill-suited to the crunching of snacks or the crinkling of wrappers or any of the usual forms of movie-theater noise pollution". He goes on to say "In some of the most gripping sequences what you hear mostly is a suffocating silence. By compelling audiences to listen more closely, this unnervingly quiet movie has had the effect of calling attention to an underappreciated aspect of filmmaking: the use of sound".

From this extract, I can at least take that my own sound design for the clip followed a similar, minimalistic approach.

A block quote from the film's composer Carter Burwell's blog discusses the clip I worked on for Exercise 1:

"There is at least one sequence in “No Country for Old Men” that could be termed Hitchcockian in its virtuosic deployment of sound. Holed up in a hotel room, Mr. Brolin’s character awaits the arrival of his pursuer, Chigurh. He hears a distant noise (meant to be the scrape of a chair, Mr. Berkey said). He calls the lobby. The rings are audible through the handset and, faintly, from downstairs. No one answers. Footsteps pad down the hall. The beeps of Chigurh’s tracking device increase in frequency. Then there is a series of soft squeaks — only when the sliver of light under the door vanishes is it clear that a light bulb has been carefully unscrewed."

This shows how sound is used to tell the story in the film (and how it can be employed to tell story in film in general): the "distant noise" prompts a response from "Mr. Brolin's character", yet we never see the chair being scraped or who is responsible for the footsteps. The sounds explain exactly what is happening, but because we are only experiencing half of it - hearing without seeing, and so not seeing who is lurking around outside - the tension builds much more effectively.

Often I feel that music is misused in film and television. Some films wouldn't work well without music - Michel Gondry's Eternal Sunshine Of The Spotless Mind wouldn't have the same tone or emotion to it; The Lord Of The Rings wouldn't have the same 'hugeness' to it - and others wouldn't work with it - Michael Haneke's Funny Games only uses music to juxtapose different types of characters, and otherwise plays on silence to build an atmosphere.

The film's composer, Carter Burwell, clearly took the latter approach. On his blog he writes:

"The film is the quietest I've worked on... It was unclear for a while what kind of score could possibly accompany this film without intruding on this raw quiet. I spoke with the Coens about either an all-percussion score or a melange of sustained tones which would blend in with the sound effects - seemingly emanating from the landscape. We went with the tones."

The Randy Thom-Forrest Gump video on Youtube

In this video on Youtube, Randy Thom (sound designer and re-recording mixer for Forrest Gump) talks about the process of creating sound for the ping-pong sequence in the film Forrest Gump.

According to Thom, "all of the ping-pong sounds that you hear in the movie were done in post-production" because "none of the sounds that were recorded when the camera was rolling were useable".

As simple as this might sound, each of the ping-pong sounds was recorded individually, and the sound recordists "used various kinds of paddles", hitting the balls "off various kinds of surfaces".

When recording foley for Sound Exercise 1, I took care to make sure each sound was recorded on the same surface as seen on screen. However, Randy Thom and the sound recording team for Forrest Gump took the mindset that "we're going for what the best sound is, not necessarily literally what you're seeing on the screen" - yet all the sounds in the sequence sound appropriate for the surfaces we're seeing on-screen. This is a good reminder of how flexible sound recording can be.

Another technique I've been trying to employ for the current sound exercise has been to record sounds in sync with the picture, rather than recording sounds separately and creating sync in editing software. This is largely to save time in the edit, but also because I have a tendency to record too few sound effects and have to repeat sounds in the edit.

This wasn't the method used on Forrest Gump: because each hit was recorded separately, Thom says, "editors worked through the night to cut each one of those hits in sync", which "was a huge job".

The Youtube video:

Thursday 6 December 2012

Lord Of The Rings' sound design

There is a series of videos on Youtube (embedded below) based around the sound design for the first two films of the Lord Of The Rings trilogy.

The studio used to record foley and edit/mix the sound was purpose-designed for the sound team to have access to the best and most appropriate equipment available. This was done due to the size of the job of designing sound for the films - the soundtrack is responsible for making the invented world of Middle Earth seem real, but, unlike Star Wars, the sounds all have to be rooted in nature, as Middle Earth is based on Earth centuries before industry and the technology it brought about.

According to the interviews conducted with the sound team, almost every sound in the film was recorded in post - even the dialogue was almost entirely ("about 98%") recorded in ADR. This was because the studio used for filming was in close proximity to the main airport in Wellington, meaning the sound was constantly interrupted by air traffic.

Although Middle Earth is based on Earth, it has many mythical elements and many invented creatures. An example of an invented creature is the Cave Troll seen in The Fellowship Of The Ring. The creature had to sound big, strong and frightening, and in the end its voice (which we hear through growls and roars) was created by processing recordings of real animals' growls and roars. To add authenticity to the sound of the Cave Troll, the technique of 'worldising' was used: the recordings of its growls were played back on speakers set up in tunnels on the outskirts of Wellington and recorded "in this natural environment that is similar to the environment he [the Cave Troll] is in in the movie".

A less realistic creature in The Fellowship Of The Ring is introduced shortly after the Cave Troll is slain. When considering the sound design for the Balrog, Peter Jackson told the sound team, "This is not a physical creature. It's basically shadow and flame." This meant the sound designers had to decide how a giant creature made of 'shadow and flame' might sound, and to match the visual representation they decided to give it a "natural, organic, rocky feel". The growl of the Balrog was created by processing the recording of a breeze block being scraped across a wooden panel, as the sound team felt it should sound rocky and lava-like.

A new and different type of creature was introduced in The Two Towers in Treebeard, essentially a talking, moving tree. John Rhys-Davies (who also plays the character of Gimli) was chosen to voice Treebeard, but his voice-acting talent alone obviously couldn't make him sound tree-like, so a specially-designed wooden cabinet was built with many ways for the sound to resonate. His voice was played through speakers inside the cabinet and re-recorded, producing a large, resonant, wooden quality, which (judging from how it sounds in the film) was further processed digitally.

The Two Towers introduced scenarios even more difficult than the invented characters. The sound team were faced with the challenge of creating the sound of a ten-thousand-strong Uruk-hai army chanting and marching. One way to create these sounds would be to record a large group of people performing them and multiply the recordings in the mix, but the sound designers went for a more authentic sound. During the half-time of a cricket match in New Zealand, Peter Jackson and the sound recordists went onto the pitch and recorded the crowd chanting phrases in Black Speech (the language spoken in Mordor, home to the 'baddies' of Middle Earth). The sound of marching was created by David Farmer, the sound designer for the trilogy, who layered "tracks of volcano rumbles and things like this" and created the rhythm of the army marching by performing volume swells with the faders on the mixing desk. This created a good weight, which was then enhanced with foley sounds of metal armour clanking and feet stamping on soil.

The five videos I've taken this information from:


*** This video has embedding disabled, but the link is: http://youtu.be/RMNwotOm27g

Wednesday 5 December 2012

Reflecting on Exercise 1

I learned a few things from presenting my work on Thursday. Having the opportunity to watch other people's projects and see how they compared with my own made it very clear to me that I hadn't mixed loudly enough (although the air-con in the edit suite didn't help much).

This came up in the feedback I was given, and I was given a guide for levels to aim for when mixing for film:

- Dialogue should average somewhere between -20dB and -12dB (giving a lot of dynamic range, since voice is a very dynamic element)
- Atmoses, etc., should average a level below -20dB
- Anything below -30dB is generally too quiet, as it won't be heard during playback
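As a rough way of checking a stem against these targets, here's a small Python sketch (the helper names are my own, not any real tool's API) that measures a signal's average RMS level in dBFS and flags anything below -30dB:

```python
import numpy as np

def average_dbfs(samples: np.ndarray) -> float:
    """RMS level of a float signal (full scale = 1.0), in dBFS."""
    rms = np.sqrt(np.mean(np.square(samples)))
    return 20.0 * np.log10(max(rms, 1e-12))

def check_stem(name: str, samples: np.ndarray) -> str:
    """Compare a stem's average level against the mixing guide above."""
    level = average_dbfs(samples)
    if level < -30.0:
        return f"{name}: {level:.1f} dB - too quiet to be heard in playback"
    if name == "dialogue" and -20.0 <= level <= -12.0:
        return f"{name}: {level:.1f} dB - within the dialogue target"
    return f"{name}: {level:.1f} dB"
```

For example, a test tone whose RMS works out around -16 dBFS would land inside the dialogue range, while one averaging -43 dBFS would be flagged as too quiet.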

Ron also explained how to 'calibrate' a mixer and recorder set-up. When using the Wendt mixer with a Marantz recorder, the '0dB' reference tone from the Wendt should measure at -18dB on the Marantz's meters. This is because of the difference between digital and analogue meters, which is a topic I plan to research in more depth.
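The calibration amounts to a fixed offset between the two meter scales. A minimal sketch, assuming the -18dB figure above (the function name is my own, purely illustrative):

```python
# The Wendt's analogue scale reads 0 dB at reference tone; on the
# Marantz's digital (dBFS) meter the same tone should sit at the
# calibration offset, here assumed to be -18 dBFS.
CALIBRATION_OFFSET_DBFS = -18.0

def analogue_to_digital_reading(analogue_db: float) -> float:
    """Expected dBFS meter reading for a given analogue-scale level."""
    return analogue_db + CALIBRATION_OFFSET_DBFS
```

So a tone peaking at +6 dB on the analogue meter should read around -12 dBFS digitally, leaving headroom before digital clipping at 0 dBFS.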

The feedback given was helpful in giving an idea both of what worked and what else I could have done for this project. For example, Ron noted that there was an opportunity, and perhaps a need, to use a musical element at the point in the clip where the camera zooms into a close-up on the character's face, as this is a turning point within the context of the scene and film.

Aside from the issues with the mix levels, the feedback told me that the mix itself, while too quiet, was well-balanced, and Neil said that sounds such as footsteps and props being placed down sounded as though they were recorded on the same types of surface as we see in the picture. I recorded things such as footsteps and the case being put down on both carpet and wood surfaces to make sure I had the right sounds for the edit.

Overall, I have taken into account the feedback I was given, and also feel that I could have included more detail in the soundtrack - in particular, I think I should have recorded more for the 'cloth' track, as this is an important and often overlooked detail of foley work. In my research, I've learned how important professional sound recordists consider this to be.

Thursday 29 November 2012

Recording in awkward places

While eating my breakfast yesterday, I was watching some of the 'Rode University' videos on Youtube by Ric Viers, who wrote the Sound Effects Bible and the Location Sound Bible. On the 'Indoor Location Sound Recording' video, Ric Viers shows us how the Rode NT6 can be the ideal microphone for recording in awkward places, such as inside a car, where a shotgun isn't practical and wouldn't necessarily produce the best sound.

According to Rode (and Ric Viers in the video, who's trying to sell us this microphone for Rode):
"The RØDE NT6 compact microphone is specifically designed for difficult mounting applications where the highest quality audio is required."

This microphone is well-suited to recording car interiors mostly because it's small and light (the specifications on Rode's website list the capsule head at only 45mm in length and 42g in weight), and therefore fairly easy to mount in small places. Another reason it suits car interiors better than a shotgun microphone is its cardioid pickup pattern (the picture below comes from the specifications on Rode's website), which is much less directional than a shotgun's. Placement, while still important, isn't as tricky to get right - with a shotgun, you would have to turn and move the microphone to point directly at whoever is speaking in order to get good, consistent sound.
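To illustrate why the cardioid is more forgiving of placement, here's a small Python comparison using the ideal cardioid sensitivity formula, (1 + cos θ)/2, alongside a crude sharpened-lobe stand-in for a shotgun (a real shotgun uses an interference tube, which this doesn't model - the lobe shape here is purely illustrative):

```python
import math

def cardioid_gain(theta_deg: float) -> float:
    """Relative sensitivity of an ideal cardioid at angle theta off-axis."""
    theta = math.radians(theta_deg)
    return 0.5 * (1.0 + math.cos(theta))

def narrow_lobe_gain(theta_deg: float, order: int = 6) -> float:
    """Crude stand-in for a shotgun's tighter pickup: the cardioid lobe
    raised to a power to sharpen it (not a real interference-tube model)."""
    return cardioid_gain(theta_deg) ** order
```

At 45 degrees off-axis the cardioid still picks up roughly 85% of its on-axis sensitivity, while the sharpened lobe has already fallen to well under half - which is why a shotgun must be kept pointed precisely at the speaker while a cardioid can simply be mounted nearby.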


Finishing up No Country For Old Men

When I added the last foley sounds yesterday, I mixed them on headphones - not ideal by itself, but the edit room I was working in had air conditioning humming the whole time, so even at full volume it was hard to hear the dynamics of the mix.

Luckily, I was able to spend time in the Protools studio this morning to do a final mix on studio monitors. Some of the sounds I added were too loud, because I'd boosted them to hear them on headphones.

I added footsteps for the character to respond to at the end of the scene, and used a reverb to give them a rough acoustic space of being outside in the hallway, and used EQ to filter out the high frequencies for the same effect. When listening on speakers, this reverb sounded too 'big' for such a small place, so I had to rework the settings and turn down the level of the track to make it sound more believable and subtle.






I also used automation to boost the atmos towards the end of the scene for two reasons:
1) Since the sound is heard from the character's POV, boosting the ambient sounds shows how his attention shifts to his surroundings after he discovers the tracker in the briefcase.
2) With not much else going on in the soundtrack, the sound seemed 'dead' and needed something to fill the space.

Wednesday 28 November 2012

Overall approach to sound for 'No Country For Old Men'

I tried, as I'm sure a sound designer is supposed to do, to match the tone of the picture and scene in the sounds I used for this exercise. I decided to use to my advantage the fact that I haven't seen the film yet, and to approach the soundtrack in the way I thought appropriate, rather than carrying preconceptions about which sounds needed to be included from how someone else approached the scene.

I decided my approach would be to tell the story from the character's POV. This meant I didn't use as extensive a range of foley as I would have normally (the main sound from the character is footsteps, as this, according to what I've been told about the film by others, is what the character would focus on most).


One way I tried to tell the story from the character's POV was to automate effects such as reverb to reduce in intensity as they would for the character in the film-world. This screenshot shows how I automated the reverb on the key in the door to the motel room to reduce sharply as the character steps from the reverberant hallway into the more acoustically-dead bedroom.


One of my lecturers, Ron, told me that the film relies a lot on silence - on what sounds aren't included as much as what sounds are - and this seemed to fit the approach I was already taking (which gave me confidence that I was on the right track). It also ties in with telling the story from the character's point of view: from what people have told me about the film, the character suspects he's being tracked somehow (how, we find out in this scene) and so is listening closely to every sound.

To contrast this, I tried to add a sense of realism through the atmoses. In particular, I recorded the first atmos used in the bedroom (before the character wakes up and opens the case of money) using the 'worldising' technique: I placed a Rode NT4 in one room of my house and recorded the sound of an American sitcom playing on a television in another room. However, this didn't sound convincing enough as someone in another motel room staying up watching television (which was my intention), so the recording still needed processing before the mix - I used EQ to ease off the higher frequencies, making it sound more muffled, as though heard through a wall.
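The 'ease off the higher frequencies' step can be illustrated with the simplest possible filter - a one-pole low-pass, sketched here in Python as a rough stand-in for an EQ's high-cut (this is not what Soundtrack Pro actually does internally, and the cutoff value is hypothetical):

```python
import numpy as np

def one_pole_lowpass(x: np.ndarray, cutoff_hz: float, sr: int) -> np.ndarray:
    """Simple one-pole low-pass: eases off high frequencies, a crude
    approximation of the muffling a wall applies to sound through it."""
    # Coefficient from the standard RC-filter discretisation.
    a = np.exp(-2.0 * np.pi * cutoff_hz / sr)
    y = np.empty_like(x)
    acc = 0.0
    for i, sample in enumerate(x):
        acc = (1.0 - a) * sample + a * acc  # smooth towards each new sample
        y[i] = acc
    return y
```

Run on a bright recording with a low cutoff (say 500 Hz), the output keeps the low rumble of the television but loses the crisp detail, which is exactly the 'through a wall' impression.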

Aside from my approach to the technical side of things, I decided to record all the sounds for this exercise myself for two reasons:
1) I have experience of spending more time searching through soundbanks for specific sounds than it would generally take to record my own sounds;
2) I wanted to develop my recording skills, as this is something I've typically avoided in the past.

Mixing for 'No Country For Old Men'

Most of the recordings I made for this exercise I had planned beforehand, and considered how to record each sound in terms of location and microphone placement, in order to get recordings that required as little editing as possible before they could be mixed together. However, as I mentioned in my last post, some sounds needed editing before they could be used, as with the paper rustling sound, and some needed layering together before mixing to produce a stronger sound, as with two of the atmoses (the corridor atmos and the outside/window atmos).

Most of the sounds I used have reverb on them, in order to make the different sounds and recordings fit together believably in a consistent acoustic space. I used two different-sounding reverbs, one for the corridor shots and the other for inside the bedroom, copying the settings in Space Designer as closely as I could for each sound that needed reverb. In hindsight, this would have been easier and more CPU-efficient to achieve with send effects; however, as I'm not as familiar with Soundtrack Pro as I ought to be, I wasn't sure how to set them up, and decided not to spend time learning how if I didn't need to (which I realise is not the best way to learn, especially with technology). Another problem caused by my lack of familiarity with the software was that I struggled to export a mixdown of the finished sound, despite having done so many times in the past with no problems.

Recording for 'No Country For Old Men'

As well as recording sounds specifically for the sound exercise, for which I've been putting sound to a clip from the Coen Brothers' 'No Country For Old Men', I made use of other recordings I've collected recently, including one I mentioned in an earlier blog post ('Designing sound for a poetic documentary'). I used this recording as the atmos of outside/through the window when the character (whose name I don't know, having not seen the film before) looks outside from his (presumably) motel room, in order to keep consistency with the rest of the scene, in which I tell the story through sound from his perspective.

Another recording I've discussed on this blog that made it into the mix was the vending machine recording I wrote about in the blog post 'Atmos recording'. I edited this recording at home using Adobe Audition, and tried to make the sound as realistic and as flexible as I could. However, because I wasn't editing the sound to the picture I'd be using, when I came to mix it as an atmos, the recording was far too loud and sounded very aggressive, as though I'd recorded from very close to the machines (when, in fact, the track is layered with a recording made close to the machines and another made at the other side of the room to the machines).


To compensate for this in the edit, I reduced the amplitude quite a lot, and added a reverb using Space Designer within Soundtrack Pro, which made the recording sound much more like it was in the acoustic space we see on-screen.

Although I did some work in the Harmer Protools studio, almost all the foley and atmos recordings I used were recorded at home. This is partly down to having more time to record when at home, and also because the mic stand I have at home is sturdier than the two different ones I booked out from Stores at university - most likely because those get a lot more, and much heavier, use than mine does.

Recording foley at home added complications, however, because I had to work around unwanted noise: traffic from outside (an issue in every room of the house at almost any time of day, the exception being late at night, which wasn't always practical given getting up for university or work), neighbours and passers-by when recording near the front of the house, and wildlife at the back.

I compiled a list of foley that needed recording, which I followed for most of the sounds, recording each sound separately and then checking that it matched up with the picture, although some sounds, in particular the rustling of paper towards the end of the scene, I recorded whilst playing back the video in order to help synchronise sound and picture.


As this screengrab shows, though, the recordings didn't always synchronise properly with the picture anyway, and had to be rearranged in Soundtrack Pro to sound right.

Sunday 11 November 2012

Atmos recording

For the sound-to-picture exercise, I've been recording atmoses to put to the 'No Country For Old Men' clip. From what I've learned in workshops, and also from reading Ric Viers' 'Sound Effects Bible' religiously (get it?), I decided to use the technique of 'worldising' for one atmos. I set a Rode NT4 up on a mic stand, placed at roughly the head height of someone sitting down, and recorded the sound of a television playing from another room. This, I thought, would fit the apartment (having not seen the film, I don't know exactly where the clip is set), as it's likely that late at night, someone nearby will still be awake and watching television. For authenticity, I put The Big Bang Theory on, it being one of the only American programmes on at that time.

Later, I took the Rode to the Adsetts building to record an atmos for the corridor. I placed the Rode in two different places, one at one end of the entrance to the building, and the other near to the vending machines, again as this is something I'd expect from an apartment block/hotel in the USA. The recording sounds good, but has had to be edited quite a lot because, even though I was recording at 10pm on a Wednesday night, a lot of people were still walking to and from the Learning Centre - I'll try to find less-busy locations from now on.

I bumped into a friend while I was setting up, and he found it funny enough to take a picture and upload it to Facebook:


Luckily, though, he's also given me a production photo. I had the microphone facing away from the machines because it gave the sound closest to what I was looking for. Facing the machines didn't sound ambient enough, and in the clip, the camera is facing away from what I imagine to be a corridor with vending machines and/or air-con.

Designing sound for a poetic documentary

For another module, I'm doing the sound for a 'poetic documentary' exercise. Our film is themed around isolation, and so I've decided to try to come up with the closest thing to a 'lonely' soundscape I can.

For one atmos, I went out on two separate nights with a pair of tie-clip mics and hung them inside grates on different roads. The first night there was a lot of wind, resulting in quite a loud, deep rumbling sound, but the microphone wires were also blown about, meaning there was also the sound of the wires hitting the metal grates. The second night there was less wind, but to be safe I went to a road better sheltered from it, and recorded the rumbling (and occasional splash) from inside the grates.


In Adobe Audition, I've been layering different recordings together and editing some to produce a wind-like sound that has a strong low-frequency rumble.


I added some fairly extreme EQ to one track to produce a wind-like 'howling' sound.


I then added a delay to add some ambience to make the track fit better with the other sounds, and also to add more to the 'howl' sound. This delay also adds a certain rhythm and otherworldly feel to the overall sound, but is for the most part masked by the other sounds, making this effect both subtle and, in a way, stronger.
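A feedback delay of the kind described above can be sketched in a few lines of Python (parameter values are hypothetical, and a real plugin's delay will be more sophisticated than this):

```python
import numpy as np

def feedback_delay(x: np.ndarray, delay_samples: int,
                   feedback: float = 0.4, mix: float = 0.5) -> np.ndarray:
    """Feedback delay line: each echo is fed back at a reduced level,
    adding ambience and a repeating rhythm to the input sound."""
    out = np.copy(x).astype(float)
    buf = np.zeros(delay_samples)          # circular delay buffer
    for i in range(len(x)):
        delayed = buf[i % delay_samples]   # read the echo
        out[i] = (1.0 - mix) * x[i] + mix * delayed
        buf[i % delay_samples] = x[i] + feedback * delayed  # write back
    return out
```

Fed an impulse, this produces a train of echoes that halve in level each repeat (at these settings), which is the regular pulse that gives the 'howl' its rhythm while sitting under the other layers.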

Saturday 3 November 2012

A film designed for sound

I recently watched 'A Fantastic Fear Of Everything'. Having already seen the film when it was on at the cinema, I paid more attention to the technical side of the film, and the sound design in particular.

For the first half of the film, the story is told mainly through narration by Simon Pegg's character, Jack. The narration is told as though Jack is narrating for his own entertainment. Similarly, the sound in this first half - partly because most of the time, Jack is the only character in the scene, and partly for dramatic effect - is heard from Jack's perspective.

In these scenes, there tends to be a focus on individual sounds, such as creaking as Jack, paranoid, becomes convinced someone is creeping around his flat. These sounds get louder and more layered to build the emotional tension of the scene. Just as Jack eventually snaps out of it and is brought back to his senses, there is usually a disruption in the soundtrack before we return to real-world sounds, such as the rain outside Jack's window.

Having read Randy Thom's article, 'Designing A Movie For Sound', I noticed that the script was clearly written with sound strongly in mind. The most obvious example is a scene in which Jack walks into his bedroom as psychedelic music begins to play, and he screams (diegetic sound is removed, leaving only the music in the soundtrack) in time with the vocals. It's also clear from how well-constructed the soundtrack is, especially in the first half of the film, and how much the picture relies on sound to form a strong and coherent mood.

Thursday 18 October 2012

Eraserhead and Star Wars

After watching parts of David Lynch's Eraserhead (1977) again online, I've noticed how different the approach to sound design is in that film compared to another film from the same year - George Lucas' Star Wars. Being two totally different types of film, they make a good example of how varied sound in film can be. While Ben Burtt (Star Wars) focuses on making the various invented sound effects fit naturally into the filmworld to give the impression that the filmworld is real, David Lynch focuses on abstract sounds throughout Eraserhead to do exactly the opposite - the intensification of sounds that would normally sit fairly low in the atmos track not only symbolises the dullness of the filmworld, but also adds to the film's surreal sense of otherworldliness.

Sunday 14 October 2012

How the sound of music has changed through the years

Earlier today, after listening to Led Zeppelin III (1970) in its entirety, I started listening to The Resistance (2009) by Muse. The difference between the sound of these two albums, despite both being by rock bands, made me realise just how much the processing of audio transformed in the 39 years between the two albums' releases.

Led Zeppelin III manages to sound rock enough for the likes of Led Zeppelin while remaining easy enough on the ears that I wasn't tired of it by the time the album started to repeat (it played one and a half times through before I changed to Muse). The Resistance, while sounding as good as any other album released in the last few years, is aggressive not just musically but in its production: there are more harsh high frequencies, and overall it's compressed in such a way as to sound 'loud' and 'punchy' on playback. This is a complaint often made about modern music versus older music.
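One simple way to quantify this kind of heavy compression is the crest factor - the peak-to-RMS ratio of the audio, which limiting pushes down. A Python sketch (purely illustrative - I haven't measured either album):

```python
import numpy as np

def crest_factor_db(samples: np.ndarray) -> float:
    """Peak-to-RMS ratio in dB. Heavily limited 'loudness war' masters
    tend to show a lower crest factor than more dynamic older mixes."""
    peak = np.max(np.abs(samples))
    rms = np.sqrt(np.mean(np.square(samples)))
    return 20.0 * np.log10(peak / max(rms, 1e-12))
```

A pure sine tone has a crest factor of about 3 dB; pushing it into hard clipping (a crude stand-in for aggressive limiting) squashes the peaks towards the average level and lowers the figure, which is exactly the 'loud and punchy but fatiguing' character described above.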

I listened to both albums digitally, in the same file format, which suggests that contemporary music goes through the mixing and mastering processes in a very different way to music from the 1970s.