Friday 21 December 2012

Final Chapter: Evaluation

Since I became interested in film-making, I’ve had a particularly strong interest in sound for film. This is because I had played music for some years and in recent years have become increasingly interested in the more technical recording and mixing side of music-making. I have always enjoyed “getting lost” in editing and mixing recorded and synthesised music, because of the freedom to be creative and the fact that there is no “right” or “wrong”.
    Despite this, before this module I had very little experience and no confidence in recording audio. The lack of confidence was probably a consequence of the lack of experience. As a beginner, being aware of the various issues sound recordists face - unwanted noise and noisy environments, the art of microphone placement, mic/pre-amp noise, etc. - can give the impression that sound recording is a very scientific, precise and ultimately difficult thing to do.

Over this module, my impressions of sound recording have changed as I’ve learned more and gained more experience. I have learned that a certain joy can be had from experimenting with different recording techniques and equipment, and that the best results are often the most unexpected ones, tending to occur the more creative you get in your recording - and the more fun you have while doing it.
    I learned this during a workshop early on in the module, in which we were given a selection of microphones and told to choose two different types, then spend a set amount of time collecting sounds with our choices. I chose the Rode NT4 and a pair of Aquarian Audio hydrophone mics. I decided to place the two hydrophones onto the sides of an escalator using the rubber contact-mic converters. This produced an interesting low-frequency rhythmic, pulsating sound. It made me realise that these contact mics were able to capture sounds our ears would never pick up, because they sense sound in a completely different way to human hearing. However, I was brought back to the real world when a member of staff in the place I was recording told me I had to stop because, understandably, the microphone cable trailing across the top of an escalator was a health-and-safety concern.
    I came to appreciate real-world recording with the Rode NT4 shortly after, when collecting atmos recordings for my documentary module. This required me to find somewhat remote locations and record there, which I did at night to minimise disruptions. I have found that atmos-track recording can be quite relaxing, as it forces you to pay attention to self-made sounds and the sounds around you - something people often forget to do because of the distracting nature of many modern technologies, such as MP3 players.

I chose the Coen Brothers’ ‘No Country For Old Men’ for the sound-to-picture Exercise 1, because it appeared to be the most straightforward and the least “daunting” choice for my first attempt at a sound-to-picture exercise. I feel that this was both a good choice and a bad one. I gained experience and confidence from recording atmoses and foley sounds for the piece, and gave careful consideration to preparing the atmos recordings - to create the sound of an American hotel room, I recorded an American sitcom playing in a different room to the microphone, using the technique of worldising: the process of ‘playing back existing recordings through a speaker or speakers in real-world acoustic situations, and recording that playback with microphones so that the new recording takes on the acoustic characteristics of the place it was “re-recorded”’ (FilmSound.org). However, due to the minimalistic nature of the film in relation to its sound requirements, I feel I didn’t push myself very hard to develop wider sound design skills, such as creating and making use of designed sound or musical elements.
    Nevertheless, I feel I learned from this exercise and from the feedback given both by other students and by my lecturers. I had mixed most of the sound in the Pro Tools studio on fairly loud speakers. This led to the levels of the sounds in my mix peaking well below -20dB, which meant the playback was very quiet when I presented the work (although this at least shows that I was mixing with my ears according to what the speakers were feeding me, rather than mixing with my eyes fixed to Soundtrack Pro’s VU meters). From the feedback given, I have learned the correct levels to mix sound for film at: between -20dB and -12dB for dialogue, and below -20dB for atmoses and ambient sounds.
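To make those target numbers concrete for myself, I sketched the dB maths in Python (my own illustration of the feedback's level ranges, nothing to do with how Pro Tools or Soundtrack Pro meter internally):

```python
import math

def dbfs(sample_peak):
    """Convert a linear sample peak (0.0 to 1.0) to dB full scale."""
    return 20 * math.log10(sample_peak)

def in_dialogue_range(level_db):
    """Dialogue target from the feedback: between -20dB and -12dB."""
    return -20 <= level_db <= -12

def in_atmos_range(level_db):
    """Atmos/ambience target from the feedback: below -20dB."""
    return level_db < -20

# A dialogue peak at 0.2 linear is about -14dBFS: inside the target window.
print(round(dbfs(0.2), 1))           # -14.0
print(in_dialogue_range(dbfs(0.2)))  # True
# A quiet atmos bed at 0.05 linear is about -26dBFS: below -20dB, as wanted.
print(in_atmos_range(dbfs(0.05)))    # True
```

Had I run a check like this on my Exercise 1 mix, the well-below--20dB dialogue peaks would have flagged up straight away.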
    I also learned from the feedback that most of my sounds underwent very little processing to enhance the tone of the sound design. This is largely because of the minimalistic approach I took, and while this was deliberate, it probably didn’t develop my skills as much as a more creative approach would have. After completing Exercise 1, I watched the scene from ‘No Country For Old Men’ online and was able to reflect on the storytelling details in the original sound - such as the sound of someone offscreen unscrewing a lightbulb, which in the film is a significant, but subtle, touch. Being unfamiliar with the film, however, I was neither able nor required to have this level of input in telling the story of the scene. Overall, this exercise helped me improve my skills and develop a deeper understanding of the processes that go into film sound.

After lots of consideration, taking into account what I had learned from Exercise 1, I chose the clip from Jean-Pierre Jeunet and Marc Caro’s ‘Delicatessen’ for my Major Project. As well as what I’d learned from Exercise 1, I was inspired to make this choice from a workshop during the module in which we collectively went over each of the choices, comparing the sound requirements of each, and also by the strong and moody visual style of the clip. I felt that this clip would give me an opportunity to combine the foley techniques I developed earlier in the module with new approaches to sound, as the opening sequence clearly required a strong designed soundtrack to match the picture.
    To match the post-apocalyptic wasteland seen in the opening sequence and the strongly orange-coloured visuals, I used a sound I had originally recorded for, but ended up not using in, a poetic documentary I made for another module. The sound came from recordings I made while holding a pair of tie-clip microphones down roadside grates on a windy night. The wind resonated and reverberated within the drains, and the recordings alone produced a powerful low-frequency “rumble”, which I later processed in sound editing software - a process I outlined on my blog for this module. Taking feedback from Exercise 1 into account, I processed this “rumble” to sound something like wind. I achieved this by boosting a particular frequency using Soundtrack Pro’s channel EQ and using automation to raise the gain of this frequency over time, and also to change the frequency itself to give a feeling of movement within the sound (evidenced in the following screenshot).
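The automation idea itself - a value drawn to change over time - can be sketched outside any particular editor. This is my own simplified illustration of a gain ramp applied sample by sample, not what Soundtrack Pro actually does internally:

```python
def automate_gain(samples, start_db, end_db):
    """Apply a linear-in-dB gain ramp across the samples,
    mimicking drawn gain automation rising (or falling) over time."""
    out = []
    n = len(samples)
    for i, s in enumerate(samples):
        # Interpolate the automation value for this point in time.
        db = start_db + (end_db - start_db) * i / (n - 1)
        out.append(s * 10 ** (db / 20))  # dB to linear gain
    return out

# A constant tone swells from -12dB up to full level over its duration.
ramped = automate_gain([1.0] * 5, -12.0, 0.0)
print([round(x, 2) for x in ramped])  # rises from about 0.25 up to 1.0
```

The same interpolate-then-apply idea is what the drawn automation curve does to the EQ band's gain (and centre frequency) in the screenshot.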
    To set the tone of the clip immediately, I combined this wind sound with a series of chords played on a synthesised organ in the music software Reason. Having only a basic understanding of chord structure, I chose to progress from a minor-sounding chord to a major chord and then to a dissonant-sounding chord, in part to create a sense of drama: major chords tend to sound “happy”, which is then contrasted by the “scary-sounding” dissonant chord, shortly followed by the sound of a knife being sharpened. The “cheesiness” of this music was also intended to add a slightly humorous element to the overall mood, matching the dark humour seen in the clip.
    The sound of the knife being sharpened precedes the introduction of the Chef, or even the kitchen setting, in order to add a sense of unease: hearing before seeing typically evokes curiosity in the audience, which in this case is intended to make the audience question why a knife is being sharpened in that desolate landscape. Again learning from the Exercise 1 feedback, the knife sound is “drenched” in reverb, largely to add the sense of horror that comes from exaggerated and somewhat unrealistic sounds.
    In contrast with Exercise 1, for which I chose to explore the sound from the main character’s point-of-view, I decided to follow the camera’s point-of-view for this task, and so added details such as being able to hear the Chef breathing heavily as the camera passes over his shoulder.
    Because the knife sound has to be heard when we enter the (for lack of knowing the character’s real name) Victim’s bedroom, the reverb presented a potential problem as the camera moves through the pipes, as this is where the reverb would naturally begin to overwhelm the original sound. To compensate, I added a stereo delay effect and automated the mix balance (as evidenced in the screenshot to the left) to add more delay the further the camera progresses through the pipes. This also served to speed up the pace at which the knife appears to be sharpened: when, later in the scene, there is a cut back to the kitchen, the Chef is sharpening much more quickly than when we last saw him. The feedback I was given when the work was presented suggested that the reverb should have been introduced as the camera approaches the pipes, but I feel the reverb is effective in keeping the tone of the opening sequence consistent with the intended emotional response, and that the delay is effective in adding the sense of hearing the sound from inside the pipes.
    I also experimented when creating the sound of the bin lorry. This was composed of three different recordings, each EQed to fit together without “clashing” through overlaps in sensitive frequency ranges. The first sound introduced is taken from a recording made using two contact mics on the sides of an escalator, intended to act as the low-frequency engine sound of the lorry. However, because of the rhythm the escalator gave to the recording, this also worked well as a musical element, helping the sequence - beginning in the bedroom and ending when the Victim is shown to be hiding in the bin - flow, and adding a sense of suspense to the soundtrack.
    I then added recordings of boiler rooms, collected when I was allowed access to plant rooms inside Sheffield Children’s Hospital, to serve both as a piercing high-frequency sound and as a mid-range frequency used to show the movement of the lorry, which cuts out when the driver brakes. The high-frequency sound is introduced before the scene moves outside, largely because, when mixing, I realised I hadn’t recorded any sound to use when the Chef runs his fingers along the knife. However, I feel this worked in my favour, as I was able to use this sound both to represent that moment and to introduce the bin lorry before it appears on screen. Again, my approach to creating the sound of the bin lorry was centred on creating something that sounded unreal and, hopefully, scary, rather than using the sound of a real engine, which I felt would be too mundane and wouldn’t keep consistency with the soundtrack.
    The last new sound effect (“new” meaning a sound that hadn’t already been introduced in the project) used for my Major Project was the “thud” as the Chef swings the knife towards the Victim, just as the title appears on-screen. This was created using a pitched-down recording of a bag of compost being dropped onto the floor of a shed. The sound alone, even when pitched down and EQed, didn’t have enough impact. To solve this, I applied a compressor to add “punch” to the sound. I began by setting the threshold low and using the attack and release controls to time the compression appropriately. Following a guide written to explain how to set up a compressor for a snare drum, I shortened the attack setting until the sound began to dull (Owsinski 2006, p.62) and then eased the threshold back until the thud had the right level of impact.
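The threshold/ratio behaviour I was adjusting can be sketched as a bare-bones hard-knee compressor. This is my own simplified model: real compressors, including the attack and release envelopes I used, smooth the gain change over time rather than applying it per sample.

```python
def compress(samples, threshold, ratio):
    """Hard-knee compressor sketch: any level above the threshold
    (linear, 0.0 to 1.0) is reduced by the ratio. Attack/release
    envelopes are deliberately omitted to keep the core idea visible."""
    out = []
    for s in samples:
        level = abs(s)
        if level > threshold:
            # Only the portion above the threshold is divided down.
            level = threshold + (level - threshold) / ratio
        out.append(level if s >= 0 else -level)
    return out

# A 4:1 ratio with a low threshold squashes the loud transient peaks,
# so the whole "thud" can then be turned up (make-up gain) for punch.
print(compress([0.1, 0.9, -0.8, 0.2], threshold=0.3, ratio=4.0))
```

Lowering the threshold pulls more of the sound into the squashed region, which is why easing it back let me find the point where the thud kept its impact.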

Overall, I feel I have progressed a lot throughout this module and have used the skills taught to good effect on my Major Project.



References
- FilmSound.org. Worldizing: A Sound Design Concept Created By Walter Murch. [online] Available at: http://filmsound.org/terminology/worldizing.htm [Accessed 20 December 2012].

- Owsinski, B., 2006. The Mixing Engineer’s Handbook. 2nd ed. Boston: Thomson Course Technology.

Thursday 20 December 2012

Metering: Digital and Analogue

I noticed that the meters on two pieces of equipment I have used on this module are quite different from each other, so I decided to research metering to find out what the difference is, and why it might be needed.

An article on the website Sound On Sound gives a good introduction to the two types of audio meters and what their differences are.

The simpler meter I have encountered is the VU (Volume Unit) meter on the Wendt X3 mixer. The article explains that this was an early piece of metering technology, and the following quote helped me understand how a VU meter works and why it might not be the most reliable way to check your levels when recording in the field:
"Because the VU meter measures 'average' levels, a sustained sound reads much higher than a brief percussive one, even when both sounds have the same maximum voltage level: the reading is dependent on both the amplitude and the duration of peaks in the signal. In addition, the standard VU response and fallback times (around 300 milliseconds each) exaggerate this effect, so transients and percussive sounds barely register at all and can cause unexpected overloads."
The other type of audio meter is that found on the Marantz PMD661 recorder. I've looked at the specs online, but can't seem to find what type of audio meter this recorder has. However, the Sound On Sound article tells me that:
"The majority of digital recorders, mixers and converters therefore use true peak-reading meters whose displays are derived from the digital data stream. As these don't rely on analogue level-sensing electronics they can be extremely accurate."
In a sound & camera workshop during this module, Ron explained that the 0dB reference tone the Wendt mixer can send to a recorder should meter at around -18dB on the Marantz. This is because 0dB measures very differently on the Wendt's VU meter to how it does on the Marantz's "true peak-reading" meter. This is important to know because, as the article explains, with digital recorders such as the Marantz there is no headroom to rely on - if a sound peaks at or above 0dB for any length of time, digital clipping immediately distorts the signal for as long as the peak lasts. Unlike the various types of analogue distortion people often claim add a nice character to music, such as valve overdrive or tape compression, digital clipping is a harsh-sounding distortion that is never desirable on a recording; it cannot be removed, as it is, in a sense, the sound of missing digital data.
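To convince myself of the difference between the two meter types, I sketched both in Python. This is a rough model of my own (real VU ballistics are analogue and more subtle than a plain 300ms RMS window), but it shows why a short transient barely registers on a VU meter while a true peak meter catches it:

```python
import math

def true_peak_db(samples):
    """Digital peak meter: the single largest sample, in dBFS."""
    return 20 * math.log10(max(abs(s) for s in samples))

def vu_style_db(samples, sr=1000, window_s=0.3):
    """Rough VU-style reading: RMS averaged over a ~300ms window,
    so short transients barely move the needle."""
    n = int(sr * window_s)
    window = samples[:n] + [0.0] * max(0, n - len(samples))
    rms = math.sqrt(sum(s * s for s in window) / n)
    return 20 * math.log10(rms)

sr = 1000
# A 300ms sustained burst and a 10ms click, both peaking at 0.5 linear.
sustained = [0.5] * int(0.3 * sr)
transient = [0.5] * int(0.01 * sr) + [0.0] * int(0.29 * sr)

# Same maximum level on the peak meter...
print(round(true_peak_db(sustained), 1), round(true_peak_db(transient), 1))
# ...but the VU-style reading is far lower for the transient.
print(round(vu_style_db(sustained), 1), round(vu_style_db(transient), 1))
```

This is exactly the behaviour the Sound On Sound quote describes: the averaging makes percussive sounds read low, so they can clip a digital recorder without the VU meter ever warning you.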

Editing and mixing sound for a documentary

For a documentary produced in another module, I took the job of sound editor/mixer and part-sound recordist (I operated camera on the first interview conducted, and so wasn't able to record sound, and I wasn't able to attend the second interview, but I did record atmoses and music).

This was a relatively straightforward process, as the documentary was constructed purely of interviews, and little space was left in the edit without speech. My job was mainly to tidy up the interview recordings, balance the levels and do as good a job as I could of making the two interviews sound consistent (the first was recorded with a tie-clip microphone and the second with a shotgun - and each was recorded by a different person).

This meant I had a lot of work to do in automating the levels of each track containing audio that would end up in the final mix. This mostly meant crossfading between the two interviews and the atmos tracks I "attached" to each.

I was asked by my director to either find, or write and record, some music that would be appropriate for the film. I began tweaking a piece of music I wrote some time ago, but was told it sounded "too sci-fi", as it was synthesised and the director wanted music that would fit the picture more naturally. In the end, I didn't have time to write or record new music that fit the requirements I was given: I was asked to do this while at university on the Friday afternoon (with a Monday-morning deadline), and I also had to sit with the editor to give a second opinion when needed. Instead, I re-used a piece of music I recorded for a project I was involved in over the summer (embedded below).

I had recorded the music on an acoustic guitar and then recorded a counter-melody on an unplugged electric guitar, which resulted in a "tinny"-sounding effect. This presented a problem for the documentary, as the music was most resonant around the 900Hz-6kHz frequency range, which is shared by the human voice, meaning the music clashed with the dialogue.

I was also asked to make the music sound as though it could be a busker playing in the background of the recordings, which led to my decision to process the music as follows:


...and to keep the music at a very low volume in the mix.

The last piece of interview sound used in the documentary was of a homeless man telling the story of his attempted suicide. For this part, I felt the sound needed to reflect the delicate subject matter, so I made the decision to fade out any unnecessary sound (atmos tracks and music) and leave only the interview recording playing. For this I took inspiration from a scene in David Lynch's 'Lost Highway', in which all diegetic sound is either reduced or removed completely to pull the viewer's attention to a conversation between two men. Doing so adds an uneasy level of intimacy to a film, and in the case of my documentary, hopefully had the effect of drawing the audience's attention subtly but very firmly towards the seriousness of what's being said.

This then provided a nice transition to the last thing we see on-screen in the film:


To accompany this and hopefully end the film on a poignant note, I reintroduced an atmos track of city traffic and faded in the ending bars of the music, letting the final chord fade out as the film ends.


The scene from 'Lost Highway' which inspired my decision:

Wednesday 12 December 2012

Why hum on your recordings can be good

Now, to ease me into discussing more technical things on this blog...

I read an article on the BBC website today that talks about a good use for the hum you get on your recordings, which I found interesting (although from a sound recording perspective, I still don't like that bloody hum).

Mains hum is interference from the electricity supply that audio equipment picks up for a variety of reasons. In most of the world (the blue-coloured countries on the map below), the hum is emitted at 50Hz.



However, a quote from Dr. Alan Cooper in the BBC article explains that "because the supply and demand [of electricity] is unpredictable", each day there are different fluctuations in the amount of energy distributed over the National Grid, which cause small fluctuations in the tone of the hum. Because there's one National Grid supplying electricity to the entire nation, the hum throughout Britain is exactly the same frequency at the same time, wherever you are.

When recordings are played back as evidence in court hearings, people might claim that the recording has been edited to change the meaning of what's been said. Thanks to the hum, a technique called Electric Network Frequency (ENF) analysis can now be used to determine whether or not recordings have been altered because, as the fluctuations in frequency are unique to each day, the hum extracted from the recordings can be compared to the recordings of the hum that the Metropolitan Police have been capturing for the last seven years. If the hum in both recordings matches, the recording hasn't been tampered with. If it doesn't match, it's evidence of manipulation and can't be used as evidence in court.
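The core of the matching idea can be sketched as a toy frequency estimator: extract the hum, measure its exact frequency over time, and compare that wobble against the reference log. This is my own illustration (the real forensic ENF systems are far more sophisticated), using a single-bin DFT scan rather than any actual police tooling:

```python
import math

def tone(freq, seconds, sr=400):
    """Synthesise a pure mains-hum-like sine at the given frequency."""
    return [math.sin(2 * math.pi * freq * n / sr)
            for n in range(int(seconds * sr))]

def magnitude_at(samples, freq, sr=400):
    """Correlate the signal with a sine/cosine pair at one frequency
    (a single-bin DFT) and return the magnitude of the match."""
    re = sum(s * math.cos(2 * math.pi * freq * n / sr)
             for n, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * freq * n / sr)
             for n, s in enumerate(samples))
    return math.hypot(re, im)

def estimate_hum_freq(samples, sr=400, lo=49.5, hi=50.5, step=0.01):
    """Scan candidate frequencies around 50Hz and return the one the
    recording's hum correlates with most strongly - measuring the tiny
    daily drift is the heart of an ENF comparison."""
    steps = int(round((hi - lo) / step)) + 1
    candidates = [lo + k * step for k in range(steps)]
    return max(candidates, key=lambda f: magnitude_at(samples, f, sr))

# A recording made while the grid had drifted slightly sharp of 50Hz:
print(round(estimate_hum_freq(tone(50.07, 2.0)), 2))  # 50.07
```

A sequence of these per-second estimates forms a drift "fingerprint" that can be lined up against the reference recordings; a splice in the audio shows up as a jump that the reference log never made.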

Tuesday 11 December 2012

File management

One issue I faced in the last project was that I often imported a file into Soundtrack Pro to test a sound out and then, without copying it into the right folder on my hard drive or naming it appropriately, used it in the final mix.

This meant I wasted some mixing time relocating files, and it made presenting via the network messy.

For this project, I have decided to adopt the 'category-based file name' model of media management that Ric Viers suggests in his book 'The Sound Effects Bible'.

An example of how I named files at my most organised point in the last project would be: "paper rustle 1", which was followed by "paper rustle 2" all the way up to "paper rustle 5". These files were in a folder I'd designated to foley recordings, and atmoses I kept in a separate folder.

Having files in different places risks more issues as there's no single place to tell the software to look if the sounds need relocating, as they did when I copied my project across to the network to present my work.

However, with the category-based system, I would save each file into the same folder and structure its name like this:
FOL_PAPER_RUSTLING_01
The "FOL" prefix will replace the dedicated folder for foley recordings, and different types of sound will be differentiated by category. So "corridor atmos" would become:
AMB_CORRIDOR_AIR-CON
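To keep the naming consistent, a small helper could build the names for me. This is my own sketch of the convention (the zero-padded take-number suffix is my addition; the category prefixes are the part taken from Viers):

```python
def sfx_name(category, subject, detail, take):
    """Build a category-based file name in the style suggested in
    The Sound Effects Bible: CATEGORY_SUBJECT_DETAIL_NN."""
    return "_".join([
        category.upper(),   # e.g. FOL for foley, AMB for ambience
        subject.upper(),    # what was recorded
        detail.upper(),     # how / which variant
        f"{take:02d}",      # zero-padded take number, so files sort properly
    ])

print(sfx_name("fol", "paper", "rustling", 1))    # FOL_PAPER_RUSTLING_01
print(sfx_name("amb", "corridor", "air-con", 3))  # AMB_CORRIDOR_AIR-CON_03
```

The zero-padding matters more than it looks: "…_10" sorts after "…_09" in a file browser, whereas unpadded "10" sorts before "2".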

Hopefully this will help me be more organised and save me time in my current project, and will also mean the recordings will fit nicely into any larger sound library I might put together in the future.

I'm also planning to give myself a stricter colour-code to follow when editing and mixing the sound. This will help me group different types of sound as the mixer fills with more tracks, making it easier to find any particular track when I need to.

Reading about actual sound design for No Country For Old Men

For Exercise 1, I chose a film I hadn't seen before because I wanted to approach the sound design with an open mind, rather than either imitating or trying hard not to imitate the original sound design for the film. Having finished the exercise now, I've decided to look into the actual sound design used in the Coen Brothers film - though it's worth noting I still haven't seen the film (it's on my Christmas list).

In an article for the New York Times, Dennis Lim says of the film: "It is not a popcorn movie. Which is to say, it is especially ill-suited to the crunching of snacks or the crinkling of wrappers or any of the usual forms of movie-theater noise pollution". He goes on to say "In some of the most gripping sequences what you hear mostly is a suffocating silence. By compelling audiences to listen more closely, this unnervingly quiet movie has had the effect of calling attention to an underappreciated aspect of filmmaking: the use of sound".

From this extract, I can see that I at least took a similar, minimalistic approach in my own sound design for the clip.

A block quote from the blog of the film's composer, Carter Burwell, discusses the clip I worked on for Exercise 1:

"There is at least one sequence in “No Country for Old Men” that could be termed Hitchcockian in its virtuosic deployment of sound. Holed up in a hotel room, Mr. Brolin’s character awaits the arrival of his pursuer, Chigurh. He hears a distant noise (meant to be the scrape of a chair, Mr. Berkey said). He calls the lobby. The rings are audible through the handset and, faintly, from downstairs. No one answers. Footsteps pad down the hall. The beeps of Chigurh’s tracking device increase in frequency. Then there is a series of soft squeaks — only when the sliver of light under the door vanishes is it clear that a light bulb has been carefully unscrewed."

This shows how sound is used to tell the story in the film (and how it can be employed to tell story in film in general): the "distant noise" prompts a response from "Mr. Brolin's character", and although we don't see the chair being scraped or who is responsible for the footsteps, these sounds explain exactly what is happening. Because we are only experiencing half of it - hearing and not seeing, and so not seeing who is lurking around outside - the tension builds much more effectively.

Often I feel that music is misused in film and television. Some films wouldn't work well without music - Michel Gondry's Eternal Sunshine Of The Spotless Mind wouldn't have the same tone or emotion to it; The Lord Of The Rings wouldn't have the same 'hugeness' to it - and others wouldn't work with it - Michael Haneke's Funny Games only uses music to juxtapose different types of characters, and otherwise plays on silence to build an atmosphere.

Burwell clearly took the latter approach. On his blog he writes:

"The film is the quietest I've worked on... It was unclear for a while what kind of score could possibly accompany this film without intruding on this raw quiet. I spoke with the Coens about either an all-percussion score or a melange of sustained tones which would blend in with the sound effects - seemingly emanating from the landscape. We went with the tones."

The Randy Thom-Forrest Gump video on Youtube

In this video on Youtube, Randy Thom (sound designer and re-recording mixer for Forrest Gump) talks about the process of creating sound for the ping-pong sequence in the film Forrest Gump.

According to Thom, "all of the ping-pong sounds that you hear in the movie were done in post-production" because "none of the sounds that were recorded when the camera was rolling were usable".

As simple as this might sound, each of the ping-pong sounds was recorded individually, and the sound recordists "used various kinds of paddles", hitting the balls "off various kinds of surfaces".

When recording foley for Sound Exercise 1, I took care to make sure each sound was recorded on the same surface as seen on screen. However, Randy Thom and the sound recording team for Forrest Gump took the mindset that "we're going for what the best sound is, not necessarily literally what you're seeing on the screen" - yet all the sounds in the sequence sound appropriate for the surfaces we're seeing on-screen. This is a good reminder of how flexible sound recording can be.

Another technique I've been trying to employ for the current sound exercise has been to record sounds in sync with the picture, rather than recording sounds separately and creating sync in editing software. This is largely to save time in the edit, but also because I have a tendency to record too few sound effects and have to repeat sounds in the edit.

This wasn't the method used on Forrest Gump: because they recorded each hit separately, Thom says, "editors worked through the night to cut each one of those hits in sync", which, he says, "was a huge job".

The Youtube video: