Categories
Collaboration (VR)

Recording Foley for Phonebooth VR Game

In the very early stages of assigning roles to one another, we decided to visit the foley room as a group to take the first draft recordings for the game. We felt that by coming together to complete the first task, we would gain more clarity on the other dimensions of the game, making it easier to divide roles within the group. In this session we recorded footsteps, the mechanical sounds of the phone box and the sounds of doors shutting, using a Sennheiser 416 shotgun microphone. While these sounds may not be used in the final product, we decided to get the ball rolling; this way we can trial sounds with the Games students and re-record them accordingly.

As we expect to eventually put the sounds into Unity when mixing down the final product, we also decided to record one-shots of each required sound, to keep in line with the way audio is handled in Unity and FMOD.

Hywel also put his vintage telephone microphone at our disposal to sample some recordings of the dialogue. We thought this beaten-up microphone would give the audio an interesting character. Unfortunately, however, the audio was too distorted to be used, as we lacked a sufficiently powerful preamp. Nevertheless, the distortion gave us ideas on how we might manipulate the dialogue with audio effects in the final mixdown.

An issue we ran into was the multichannel format of the recordings, as we had configured the Zoom F4 incorrectly while recording. However, this was swiftly fixed by Inaki, who extracted the separate audio channels in Pro Tools.
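The fix Inaki applied in Pro Tools amounts to de-interleaving the multichannel file into one mono file per channel. As a rough illustration of the same idea (a stdlib-only Python sketch, not our actual workflow):

```python
import wave

def split_channels(path, out_prefix):
    """Split an interleaved multichannel WAV into one mono WAV per channel."""
    with wave.open(path, "rb") as src:
        n_ch = src.getnchannels()
        sampwidth = src.getsampwidth()      # bytes per sample
        framerate = src.getframerate()
        frames = src.readframes(src.getnframes())
    for ch in range(n_ch):
        # Samples are interleaved L/R/L/R...: take every n_ch-th sample,
        # offset by this channel's index.
        mono = b"".join(
            frames[i:i + sampwidth]
            for i in range(ch * sampwidth, len(frames), n_ch * sampwidth)
        )
        with wave.open(f"{out_prefix}_ch{ch + 1}.wav", "wb") as dst:
            dst.setnchannels(1)
            dst.setsampwidth(sampwidth)
            dst.setframerate(framerate)
            dst.writeframes(mono)
```

For example, a two-channel field recording saved as `poly.wav` would come out as `out_ch1.wav` and `out_ch2.wav`.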

Categories
Collaboration (VR)

Collaborative Project – Phonebooth (VR)

‘A VR Narrative game, addressing the effects of depression and the process of reaching out to people around us. The player will call-up friends and family and make dialogue choices over the duration of their conversations. The consequences of your choices will visibly impact the environment around the player’.

After meeting with the Games students, my classmates Hywel and Inaki and I had a better idea of the sonic aesthetic they wanted for their game. We discussed how contrasts between feelings of clarity and tension could perhaps be portrayed through music and atmospheres, and how dialogue could be warped to display that contrast of emotion too. Simply put, they required atmospheres, foley and dialogue. The music, it seems, has already been composed by someone they know; however, we may debate this with them in order to add our own compositions, which we feel would benefit the game more than the simple single-piano melody they showed us during the meeting.

Overall, the game seemed to us very sonically barren. The beginning and end of the game take place in an empty, black, cavernous space, while the climax takes place in one spot: the phone box. As a result, we weren’t able to record much in the way of foley, as at this point not much of it seemed needed. However, the space left for a sonic environment means that immersion through atmospheres and music may have to be the main focus of the game.

Categories
Global Sonic Cultures

How Interacting Is Different To Listening: Reflection

In the introduction to Collins’ book ‘Playing With Sound: A Theory of Interacting with Sound and Music in Video Games’, she begins by outlining how passive listening has made up the majority of our perceptions of sound in media. As a result, we have a limited scope of terminologies and methodologies with which to approach the player’s relationship to sound in video games. Drawing on the importance of keeping the ‘player’ in mind when approaching sound in media, she goes on to explore multiple theories of listening, and of interactivity as a whole, to help us better understand what interaction entails, seemingly using this as a foundation for applying theories of sound within interactivity in later chapters.

She draws on Chion’s categorisation of the three basic listening modes:

Causal listening: the act of associating a sound with its producing action, whether consciously or not

Semantic listening: the act of deciphering or interpreting messages in sounds that are bound by semantics, applied for example in a linguistic sense

Reduced listening: the act of listening to the traits of a sound itself, such as its tone or timbre (its acoustic properties)

She explains that these modes of listening are not mutually exclusive, as the player may be listening in several different ways at once. Nevertheless, each mode, as an individual approach, can change the way a player experiences a game; she uses the sound of a signal beep in Fallout 3 as an example. Through these three modes of listening we can determine where a sound is coming from and what produced it, what the signal is perhaps trying to tell us, and where on the frequency spectrum the sound lies.

Even more relevant to interactivity is her expansion of these listening modes, drawing on the musicologist David Huron’s listening modes, which he originally intended to apply to music. The first, signal listening, refers to hearing a sound in anticipation, implying a subsequent action. Using ‘New Super Mario Bros.’, she touches on how players must listen to the music to time their attacks, demonstrating the presence of signal listening and an interaction between player and sound. This mode of listening can also help players determine navigational, status and semiotic information.
Another of Huron’s listening modes that I found particularly important was retentive listening, in which we try to remember what we have heard with the intention of repeating it. In a gaming context, an example would be a player being required to actively remember a sequence of sounds in order to carry out a certain task. Collins’ refashioning of Huron’s listening modes in the context of video games helps us understand the intricacies of sound in interactivity on a deeper level.

Regarding interactivity as a concept in itself, she touches on how cognitive/psychological reactions ‘always occur alongside other interactions in games’, putting them at the centre of all other forms of interaction, be they physical, perceptual, socio-cultural or interpersonal. She states that there is a danger in interpreting the word ‘interaction’ too literally, by equating it to a physical interaction between a user and a media object. Moving on from this, she mentions that experimental games such as ‘Alpha World of Warcraft have demonstrated that players can use their alpha brain waves to change gameplay, thus further confusing any differences between the physical and the psychological’.
Thinking about how this progression of technology might apply to sound becomes fascinating, especially in relation to Collins’ ideas on evoking sounds versus creating sounds. If we are able to create sounds in a game using brain waves, what would the limits be on what could be created, and how would they be implemented? If those limits allowed the player to create unique sounds, would this place the player as co-creator of aspects of the game? In correspondence with my thoughts, Collins too touches on how the axis of creator and audience shifts as interactivity increases through technological feats.

She concludes the introduction by discussing the term ’embodied cognition’, which theorises that ‘our cognitive processes use reactivations of sensory and motor states from our past experience.’ To understand further, a Google search of the term gave the definition: ‘Embodied cognition is an approach to cognition that has roots in motor behavior. This approach emphasises that cognition typically involves acting with a physical body on an environment in which that body is immersed’. From this we can infer that sound can be explored in many ways through the medium of mentally re-enacting our physical, embodied knowledge.

Categories
Global Sonic Cultures

Thoughts On Practical Component

“In ambient music, Eno often imitated or borrowed sounds from existing locations, and organised them in compositions to produce new environments. He often used tape loops of differing lengths played simultaneously so that their interaction randomly produced ‘sound events in periodic clusters’, in much the same way the sounds of frogs, insects and birds in a natural environment occasionally seem to express chords and melodies.” – Oblique Music, p. 90

“Eric Tamm has suggested that ‘texture and timbre may be of the essence in the ambient style, but a few general remarks may clarify the style’s use of rhythm and harmony’. He examined thirty ambient pieces, and found that eleven of these dispensed with pulse altogether, the rhythm consisting of a gentle ebb and flow of instrumental colours.”

Harmonically, Eno’s ambient pieces often use static or ambiguous harmonies, sometimes suggestive of chords but just as often consisting of nothing but a drone with… pitches drawn from a diatonic pitch set appearing and disappearing. — Two textural principles: layering and timbral homogeneity. These principles were combined so that a typical ambient piece by Eno was composed of 3 to 7 distinct timbral layers.

Categories
Global Sonic Cultures

Ambience & Spirituality

The world of music composition, especially in the USA in the 1960s, has deep connections with spirituality; many composers were influenced and inspired by the music and spirituality of the East (primarily India, China and Japan). The use of drones and extended durations is perhaps the most obvious result of this interaction – “The Theatre of Eternal Music delved fully into the acoustical universe of single sustained tones, compounding their deeply droning sound with extended duration, bringing each performer into a unified state” (LaBelle, 2006, p. 71). Also, on Young: “His music, in a sense, strives for the actualisation of the very perceptual tones, loud volumes, extended durations, harmonic frequencies, all encompass and overarching sonic commitment that seeks to make sound an experiential event beyond the human limits of time and space, exploiting the ear as a physiological device and the mind in its moment of perception of sound stimuli.”, and “Duration for Young is not a question of minutes and hours, but days and years. As Philip Glass proposes – ‘This music is not characterised by argument and development. It has disposed of traditional concepts that were closely linked to real time, clock-time…’” (p. 73)

https://www.rastoropov.co.uk/arts/sound-art/

Tom Murphy: When you were studying Eastern mysticism did you find any connections between what you learned that route and the music around you at the time? How would you describe those connections?

Laraaji: I observed that drone music at that time reflected the sensation of eternal present time which is emphasized in eastern philosophy—the continuum of consciousness. Also deep yogic level relaxation and meditation as reflected in the music of Stephen Halpern. The heightened sensation of bliss and ecstasy as reflected in the music of Iasos at the time in the late 1970’s. Terry Reilly.

How did you turn a zither into an electronic instrument? Was anyone doing anything comparable at the time you started doing that? Did you process those sounds early on or was it more for amplification?

My first autoharp/zither was acoustic. And after exploring alternative tunings I investigated ways to amplify it. [I then purchased] an electric pickup made especially for autoharps. I dove into amplified autoharp/zither research and decided to add sound treatment with the MXR 90 Phase shifter. After recording the album Day of Radiance with producer Brian Eno my interest in other [effects] pedals expanded to include chorus, delays, flangers and reverb.

How did you meet Brian Eno and as a producer how involved was in shaping the sound of Day of Radiance?

Brian introduced himself to me while I was playing Washington Square Park [in New York City in] 1978 and extended the invite to join him in his Ambient album productions. His suggestions to depend more on live studio microphones and Eventide effects, mixing as well as overdubbing a second zither helped to shape the Day Of Radiance sound.

https://queencitysoundsandart.wordpress.com/2019/07/12/ambient-music-pioneer-laraaji-on-sound-and-spiritual-practice-vision-songs-and-laughter-meditation/

There is something primevally grounding and simultaneously mystical about the penetrating hum of a drone – whether it be Tibetan deep chant, Japanese gagaku, Scottish pibroch piping, Aboriginal didgeridoo, or Hindustani classical music. [1] A lot of this music has spiritual connotations and uses. The Classical Indian tradition and Eastern spiritual philosophy and music had a steering influence over a group of European and American composers that emerged from the 1900s who were labelled as modernist, avant-garde, atonal, serialist, dissonant, and minimalist.

Deeply concerned with the implications of the advancing technological world and affected by the impact of World War and the Great Depression, they began to ask questions about music; its nature, structure and purpose. These artists particularly set out to shake the foundations of formal musical structure. Their music was mostly dissonant, chaotic, and deconstructed. Its purpose for them was much less about entertainment and more about consciously finding something which was profound and purposeful.

This paper aims to explore the use of drones and dissonance in relation to a small selection of these composers; Dane Rudhyar, John Cage, Karlheinz Stockhausen, Ruth Crawford, Arvo Pärt, and David Hykes. It also aims to look at their interest in Eastern philosophy and to enquire into the nature of drones and dissonance to see whether they might have some kind of ability to induce a profound or spiritual experience. It poses the question of what makes music spiritual and looks at whether dissonant drones have a particular quality about them that can induce a spiritual experience.

https://www.soundtravels.co.uk/a-Dissonance__Drones__A_Spiritual_Experience-316.aspx

https://www.abc.net.au/radionational/programs/earshot/monotony-and-the-sacred/6448906

https://en.wikipedia.org/wiki/Drone_(music)

https://www.ableton.com/en/blog/drone-lab-creating-sustained-sounds-in-live-11/

https://www.screensoundjournal.org/issues/n1/06.%20SSJ%20n1%20Hayward.pdf

https://www.taylorfrancis.com/chapters/edit/10.4324/9781315249131-27/spirituality-mental-health-integrative-dimension

https://search.informit.org/doi/pdf/10.3316/informit.336502644557699

Categories
Specialising & Exhibiting Unit 01

Recording Foley For ‘We Need To Talk About Kevin’

Using the knowledge gained from the Pro Tools LinkedIn Learning short course, my classmate and I booked out the Composition Studio and Foley Room in order to record the foley for the opening scene of Lynne Ramsay’s film ‘We Need To Talk About Kevin’.

After configuring the gain structure on both Pro Tools and the external preamp in the Composition Studio, I set about creating a new session in Pro Tools. I had in fact already prepared a session in the previous lecture, with the clip loaded, identifying where the most important cuts were for atmospheres, FX and the syncing of image to sound. When setting up the initial blank session, I set it to the audio file standard for working with video: 48 kHz, BWF (.wav), 16-bit. For the I/O settings I chose stereo mix, then named and saved the new session. From there I created as many new mono or stereo audio tracks as I felt I needed, taking into account that we would be overlaying and recording atmospheres and foley.
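To make those session specs concrete, here is a minimal stdlib-Python sketch (nothing to do with Pro Tools itself) that writes a test tone in the same video-standard format, 48 kHz / 16-bit stereo WAV:

```python
import math
import struct
import wave

def write_test_tone(path, seconds=1.0, freq=440.0):
    """Write a sine test tone as a 48 kHz / 16-bit stereo WAV file."""
    rate = 48000                       # 48 kHz sample rate (video standard)
    amp = int(0.3 * 32767)             # well below full scale, leaves headroom
    n = int(rate * seconds)
    samples = (int(amp * math.sin(2 * math.pi * freq * i / rate)) for i in range(n))
    frames = b"".join(struct.pack("<hh", s, s) for s in samples)  # same signal L and R
    with wave.open(path, "wb") as w:
        w.setnchannels(2)              # stereo mix
        w.setsampwidth(2)              # 16-bit samples
        w.setframerate(rate)
        w.writeframes(frames)
```

A file written this way can be dropped into a 48 kHz session without sample-rate conversion, which is the whole point of fixing the spec up front.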

Using a Sennheiser 416 shotgun mic, we recorded footsteps, cloth sounds, glass clinking and other sounds, syncing them up to the image. Given the dreamlike quality of the opening scene, instead of trying to recreate all of its sounds, which would have been a hefty task considering the sonic content of the scene, we opted to create various drones, time-stretching and manipulating them using open-source software such as Cecilia and Paulstretch, before overlaying them to create a ghostly effect. Unfortunately, due to gain-staging issues, much of the initial foley was recorded too quietly, leaving a very high noise floor when brought up to an appropriate level in the mix. While this means we may have to re-record much of the foley, it is a valuable lesson in avoiding the same mistake in future.

Categories
Specialising & Exhibiting Unit 01

Pro Tools LinkedIn Learning

Having gone through the Pro Tools LinkedIn Learning course, I am now more familiar with working and mixing for film within Pro Tools. Some useful tips and techniques I have come away with include, but are not limited to:

  • Changing the edit mode at the top of the Pro Tools Edit window to Grid will keep my cursor accurate to frame boundaries – the grid value can then be changed to reference frames instead of seconds, making the background grid frame-accurate as well.
  • Switching to Slip mode will allow me to work at a finer resolution when needed.
  • It is useful and time-saving to separate the different elements of film sound and route them to different outputs for ease of use at a later stage of work – these elements are typically dialogue, music and sound effects.
  • We can achieve this by sending each group of tracks to an auxiliary track, which acts as a bus – essentially, an auxiliary track is a pathway for routing audio from one place to another.
  • This is done by setting the output of all the tracks in a given group to a bus (e.g. Bus 1) and setting the input of the auxiliary track to the same bus.
  • It’s a good idea to colour-code stems to keep track of them.
  • Using these techniques, I can make and save a working template that can act as a starting point for all my projects.
  • Clearly labelled timecode and markers are also useful to set up before a recording session, in order to identify the key scenes/cuts for FX and atmospheres as well as sync points for image and sound.
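The frame-accurate grid and marker points above all rest on the same frame arithmetic; a minimal non-drop-frame sketch in Python (the 24 fps default is an assumption for illustration, not the project's actual rate):

```python
def frames_to_timecode(frame, fps=24):
    """Convert an absolute frame count to HH:MM:SS:FF non-drop timecode."""
    ff = frame % fps                   # leftover frames within the second
    total_seconds = frame // fps
    ss = total_seconds % 60
    mm = (total_seconds // 60) % 60
    hh = total_seconds // 3600
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

def timecode_to_frames(tc, fps=24):
    """Inverse: parse HH:MM:SS:FF back to an absolute frame count."""
    hh, mm, ss, ff = (int(p) for p in tc.split(":"))
    return (hh * 3600 + mm * 60 + ss) * fps + ff
```

For example, at 24 fps, frame 24 lands exactly on `00:00:01:00`, which is why a frame-referenced grid snaps cleanly where a seconds-referenced one does not.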
Categories
Specialising & Exhibiting Unit 01

Storytelling Through Sound

LECTURE FOLLOW UP

Off-Screen vs. On-Screen Sound

After watching … the importance of off-screen sound is reinforced. If done well, it should be so well integrated into the world being depicted that the average listener will most likely take what they’re hearing for granted. Yet it is off-screen sounds and their many storytelling functions that bring much of the context to a film, including mood and location. These sounds ultimately have the power to subtly steer the film in a certain direction, and are a constant reminder that a world exists beyond the frame that limits what we see within its boundaries. Off-screen sound can be as conventional as purely setting the scene, but it can also be used in more abstract ways, like David Lynch’s and Roman Polanski’s emphasis on uncanny off-screen sounds to promote paranoia.

Point of View

The short animated film ‘Dustin’, which we watched during a lecture with Jessica, had many scenes in first-person perspective, specifically the dog’s perspective. It made me wonder how things such as mic choice and placement, as well as mixing, could help recreate varying perspectives in a film, and perhaps in my eventual hand-in.

Diegetic vs. Non Diegetic Sound

Going over these terms again during these lectures has reinforced what I learnt last year when studying the film ‘You Were Never Really Here’. Diegetic sound plays an obvious role in setting a film’s narrative. Non-diegetic sound, however, can include things like narration, external music and added sound effects. Whilst diegetic elements are malleable, I find it is non-diegetic sounds that can completely alter the feel of a scene. Thinking back to the opening scene of ‘You Were Never Really Here’, Jonny Greenwood’s score really sets the rhythm of the movie, its disharmonic percussion melding with the diegetic sounds of the city to create, in a way, a whole new soundtrack.

Rhythm & Emotion

Following on from the last paragraph, the concept of rhythm that we also touched on in class takes me back to the Making Waves documentary, in which sound editor Teresa Eckton talked about creating a pattern when overlaying the sounds of machine guns in the disorientating opening scene of ‘Saving Private Ryan’. This order and pattern within the chaos can help the audience keep their grounding and anchor a scene. The world is full of rhythm, and this notion opens up many possibilities. Using the principles of rhythm, everything from the way one breathes to the sounds of our environment and irregularities in volume can be utilised to build or release tension. Watching Osbert Parker’s ‘Timeline’ trailer really showed me how field recordings could be combined to create an ever-changing tempo, and through this tempo an aural story.

Categories
Specialising & Exhibiting Unit 01

SOUND FOR SCREEN WEEK 3 – CHION DEFINITIONS

After our first lecture with Jessica, I found myself mulling over the many different ways in which sound affects, alters, modifies and adds new meaning to moving image. After touching on the French film theorist and experimental music composer Michel Chion’s book ‘Audio-Vision’, I decided to borrow it from the library in order to build on the terms we’d been introduced to.

One of these was ‘acousmêtre’: a voice that is heard without its source being seen, therefore shrouded in mystery and given an air of omniscience, much like in The Wizard of Oz. What I found interesting was the loss of imagined power when the source of an acousmatic voice is revealed to its audience, and how this could be wielded for creative effect.

Another was ‘synchresis‘, referring to the forging of a bond between something one sees and something one hears, and how this syncing of sound and image allows for their reassociation. A better way to put it would perhaps be that the combination of sound and image becomes one perceived thing, not two separate entities playing in unison. Examples are seen in Jacques Tati’s film ‘Mon Oncle’, where ping-pong balls and glass objects were used for the sound of footsteps. “Certain audiovisual combinations will come together through synchresis and reinforce each other”.

From what I have gauged, Chion tried to communicate the importance of how effects are perceived by the audience as a whole, instead of solely concentrating on the individual components of a film. One of the more intriguing of his terms was ‘Sound en Creux‘. Directly translated as ‘sound in the gap’, it points to the silence we hear between the sounds in a film and how it is the sound designer’s duty to recognise the intimacy and emotional intensity of these ‘gaps’. The silence between music and dialogue sets the scene and glues the film together, and so through these gaps we are given the opportunity to subtly compose the overarching theme of the film.

Categories
Specialising & Exhibiting Unit 01

Making Waves: The Art of Cinematic Sound 

After watching this documentary during our lecture, I decided to watch it again at home. An extensive look at the timeline and development of sound in film, it really put into perspective how much I have personally taken sound for granted when watching films; without it, those moments would have been entirely different experiences.

Starting somewhere around the invention of the phonograph, the documentary shows not only how sound has evolved, but how its role was given increasing importance as time went on. Interestingly, I learnt that the phonograph, with its then-groundbreaking ability to capture sound, was in fact invented before the motion picture camera, which Thomas Edison initially created so that he could put images to the sounds from his phonograph. Sound came first, image came second… a stark contrast to the way films were made over the following decades.

A SHORT HISTORY

Giving context to the origins of sound effects and foley, the documentary touched briefly on the syncing issues of sound and image in the early 20th century. These meant that films were projected and scored with full live orchestras, with people talking and making live sound effects in real time behind the screen. When I think of this, I imagine the experience of such films was much more theatrical in nature.

As these issues were solved by evolving technologies, dialogue was eventually being recorded on set by 1927. Whilst Hollywood had until then developed ways of shooting movies without sound, giving it the freedom not to worry about noise on set, productions now had to be entombed in sound stages so that all sound from the outside world was blocked out. However, this trade-off paid off, as audiences of the time were in awe of the newfound marriage of image and sound, which brought another level of emotional dimension to the films in question.

From here, the addition of voice led to increasing importance being placed on the practice of making sound effects. It was quickly discovered that it was not feasible to get all the sounds needed for a scene just by hanging a mic over the set, which brought about the birth of the sound editor, sound designer and foley artist.

SOME KEY FIGURES IN THE DEVELOPMENT OF FILM SOUND

Not long after, in 1933, Murray Spivack became one of the first to revolutionise the early ideas of sound design, and many of the techniques we use to manipulate sound today were pioneered by him on the original rendition of King Kong. By slowing down the roars of lions and combining them with reversed tiger growls, Spivack formed the basis of both King Kong’s and the dinosaurs’ sound signatures.

Walter Murch, who went on to be a pivotal figure in modern sound design, found his love for sound through a tape recorder on which he would splice, rearrange, reverse and otherwise manipulate recordings he’d made. Unlike others, he was initially put off by the idea of making sound for moving image, finding that the sound in many of the films he’d seen growing up was underwhelming, overused stock restricted by the factory mindset of Hollywood, and second place to over-emphasised scoring. It was the musique concrète works of sound innovators Pierre Henry and Pierre Schaeffer that validated his love for sound manipulation and showed him that what he was doing had a much broader application. And so at university he decided to pursue film sound, where he met George Lucas and Francis Ford Coppola, with whom he went on to work on films such as The Godfather and Apocalypse Now.

However much praise it was given, a film like The Godfather was still made and broadcast in mono, utilising a single speaker behind the screen. Taking cues from the music industry, people such as Barbra Streisand on ‘A Star is Born’ saw the value of a stereo sound system, and eventually Dolby started to provide stereo on a wide scale for the film industry.

Francis Ford Coppola took this a step further on ‘Apocalypse Now’, ultimately changing the way cinema was presented from then on. Inspired by a four-channel rendition of Gustav Holst’s ‘The Planets’ by composer and electronic musician Isao Tomita, he asked the sound department to make the final listening experience akin to having a speaker in each corner of a room, with the listener in the middle (essentially a quadraphonic format). This opened up a world of possibilities for spatialisation, such as panning helicopter blades around the room, increasing the immersion and degree of reality in an already vivid film. The film ran in a six-track surround format, and as things have evolved since, that format is now the basis of how we mix films today.

SOUND DESIGNERS AND THE IMPORTANCE OF EXPERIMENTATION

Another figure, by the name of Ben Burtt, was hired to do the sound design for the American epic space opera Star Wars, directed by George Lucas. Recording the sound of a bear seems dangerous, yet it was these sounds that Burtt manipulated to create the famous voice of the beloved Wookiee. This process led to him recording sounds for every other sound effect in the movie. R2-D2’s signature robot language underwent much trial and error until Burtt found that using a vocoder on a synth allowed him to give the robot’s speech a verbal expressiveness that ultimately let the audience connect with its character on a deeper level.
What I think sets Star Wars apart from its sci-fi counterparts of the time, such as ‘War of the Worlds’ and ‘Forbidden Planet’, is that it moved away from the typical sound conventions of synthesis and electronic music technology found in such films. Most of its sounds were created from real recordings, which perhaps made them more relatable as well as unique.

We can see this relatability again in the well-known Pixar mascot, Luxo Jr., which now graces our screens before every Pixar movie. The sound design for this seemingly sentient lamp was created by Gary Rydstrom. I learnt that he would take countless recordings of things, often with no idea what they would end up being used for. Using his Synclavier, an early digital synthesizer, polyphonic sampling system and music workstation, he would manipulate these sounds, and found that some of them had an almost human-like, emotive vocal quality. We can see in his later work on Toy Story that he continued to make sure the sounds he used supported the emotional intention of the narrative, such as in the difference between Woody’s and Buzz Lightyear’s sound effects.

We can find emotion in the most unassuming of sounds, and this humanisation, in a way, makes me feel closer to the art form of sound design. Bringing soul into what might initially be perceived as mundane is a very revitalising notion.

Nowadays, with the intimacy we get from a boom mic, the capabilities of our software and the vast multichannel sound systems available, it is easy to forget the journey that sound as a medium in film has taken to get to where it is now. This documentary has not only shown me this but has also introduced me to several sound designers who have demonstrated that it is okay to be brave with sound design and recording, and not to follow convention. A very inspiring documentary that has given me plenty of ideas for techniques and experimentation that will surely keep me busy for a while.