Categories
Collaboration (VR)

Phonebooth – Designation Of Roles, Individual Contribution & First Crits

After creating an Excel file with my group members and organising all the different components needed for our game, we discussed which roles should be designated to whom. Having been given charge of foley and atmospheres, I decided to take my handheld recorder to a phone box on a busy road near mine to recreate the events of the game as accurately as possible.

Before this, however, I made sure to meet up with the Games students one more time to play the game again, refreshing the mechanics and the general feel of it in my memory.

After getting them to send me a video file of the game, I overlaid it with the atmospheres and foley we had recorded previously. This proved to be quite rewarding in how the sonics came together. While there was some miscommunication on the musical side of things, I decided to make a simple composition using a Tibetan bowl, manipulated in Ableton to create an eeriness for the parts of the game that required it. This was intended to be temporary, eventually to be replaced by a more collaborative effort for the final hand-in.

When evaluating our progress after showcasing our work to the rest of the class during the crits, it became apparent that we needed to spend more time on the immersive nature of the game, particularly given that it is virtual reality.

Categories
Collaboration (VR)

Recording Foley for Phonebooth VR Game

In the very early stages of designating roles to one another, we decided to visit the foley room as a group to make the first draft recordings for the game. We felt that by coming together to complete the first task, we would gain more clarity on the other dimensions of the game, making it easier to designate roles within the group. In this session we recorded footsteps, the mechanical sounds of the phone box and the sounds of doors shutting, using a Sennheiser 416 shotgun microphone. Whilst these sounds may not be used in the final product, we decided to get the ball rolling: this way we can trial sounds with the Games students and re-record accordingly.

As we expect to eventually put the sounds into Unity when mixing down the final product, we also decided to record one-shots of each required sound, in keeping with the way audio is handled in Unity or FMOD.
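As a side note on one-shots: where a long take contains several usable hits, it can be chopped into individual one-shot assets before being imported into the engine. Below is a minimal Python sketch of this using librosa's silence-based splitting; the filename and the top_db threshold are hypothetical values for illustration, not part of our actual workflow.

```python
import librosa
import soundfile as sf

# Load a long foley take (the filename is a placeholder).
y, sr = librosa.load("phonebox_mechanics_take.wav", sr=None, mono=True)

# Find the non-silent regions; each one becomes a one-shot asset
# that the engine can trigger individually. top_db sets how far
# below the peak counts as silence (a guess, to be tuned by ear).
intervals = librosa.effects.split(y, top_db=35)

for i, (start, end) in enumerate(intervals):
    sf.write(f"phonebox_oneshot_{i:02d}.wav", y[start:end], sr)
```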

We also made use of Hywel's vintage telephone mic to sample some recordings of the dialogue. We thought this beat-up microphone would give the audio an interesting character. Unfortunately, however, the audio was too distorted to be used, due to the lack of a sufficiently powerful preamp. Nevertheless, the distortion gave us ideas on how we might manipulate the dialogue with audio effects in the final mixdown.

An issue we ran into was the multichannel nature of the recordings, as we had incorrectly configured the Zoom F4 we were using while recording. However, this was swiftly fixed by Inaki, who extracted the separate audio channels in Pro Tools.
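Inaki did the split in Pro Tools, but for future reference the same fix can be scripted. Here is a minimal Python sketch using the soundfile library; the filename is a placeholder.

```python
import soundfile as sf

# Read the polyphonic WAV from the Zoom F4; for multichannel files
# 'data' comes back with shape (frames, channels).
data, sr = sf.read("zoomF4_take.wav")

# Write each channel out as its own mono file.
for ch in range(data.shape[1]):
    sf.write(f"zoomF4_take_ch{ch + 1}.wav", data[:, ch], sr)
```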

Categories
Collaboration (VR)

Collaborative Project – Phonebooth (VR)

‘A VR Narrative game, addressing the effects of depression and the process of reaching out to people around us. The player will call-up friends and family and make dialogue choices over the duration of their conversations. The consequences of your choices will visibly impact the environment around the player’.

After meeting with the Games students, my classmates Hywel and Inaki and I had a better idea of what they wanted the sonic aesthetic of their game to be. We discussed how the contrast between feelings of clarity and tension could perhaps be portrayed through music and atmospheres, and how dialogue could be warped to display that contrast of emotion too. Simply put, they required atmospheres, foley and dialogue. The music, it seems, has already been composed by someone they know; however, we may debate this with them in order to add our own compositions, which we feel would benefit the game more than the single piano melody they showed us during the meeting.

Overall, the game seemed to us very sonically barren. The beginning and end of the game take place in an empty, black, cavernous space, while the climax takes place in one spot: the phone box. As a result we weren't able to record much in the way of foley, as at this point not much of it seemed to be needed. However, the space left for a sonic environment means that immersion through atmospheres and music may have to be the main focus of the game.

Categories
Global Sonic Cultures

How Interacting Is Different To Listening: Reflection

In the introduction to Collins' book 'Playing With Sound: A Theory of Interacting with Sound and Music in Video Games', she begins by outlining how passive listening has made up the majority of our perceptions of sound in media. As a result we have a limited scope of terminologies and methodologies with which to approach the player's relationship to sound in video games. Drawing on the importance of keeping the 'player' in mind when approaching sound in media, she goes on to explore multiple theories of listening, but also of interactivity as a whole, to help us better understand what interaction entails; she seemingly uses this as a foundation for applying theories of sound within interactivity in later chapters.

Using Chion’s categorisation of the three basic listening modes:

Causal listening: refers to the act of associating a sound with its producing cause or action, whether consciously or not

Semantic listening: refers to the act of deciphering or interpreting messages in sounds that are bound by semantics, applied for example in a linguistic sense

Reduced listening: refers to the act of listening to the traits of a sound itself, such as its tone or timbre (its acoustic properties)

She explains that none of these modes of listening are mutually exclusive, as the player may be listening in several different ways at once. Nevertheless, she shows how these modes, as individual approaches, can change the way a player experiences a game, using the sound of a signal beep in the game Fallout 3 as an example. Using the three modes we can determine where the sound is coming from and what has produced it, what the signal is perhaps trying to tell us, and where on the frequency spectrum the sound lies.

What seemed even more relevant to interactivity was her expansion of these listening modes using those of the musicologist David Huron, which he originally intended for music. The first, signal listening, refers to hearing a sound in anticipation, implying a subsequent action. Using 'New Super Mario Bros', she touches on how players must listen to the music to time their attacks, demonstrating the presence of signal listening and an interaction between player and sound. This mode of listening can also help players pick up navigational, status and semiotic information.
One of Huron's other listening modes that I found particularly important was retentive listening, in which we try to remember what we have heard with the intention of repeating it. In a gaming context, an example would be a player actively having to remember a sequence of sounds in order to carry out a certain task. Collins' refashioning of Huron's listening modes in the context of video games helps us understand the intricacies of sound in interactivity on a deeper level.

In regards to interactivity as a concept in itself, she touches on how cognitive/psychological reactions 'always occur alongside other interactions in games', putting them at the centre of all other forms of interaction, be they physical, perceptual, socio-cultural or interpersonal. She states that there is a danger in interpreting the word 'interaction' too literally, by equating it to a physical interaction between a user and a media object. Moving on from this, she mentions that experimental games such as Alpha World of Warcraft 'have demonstrated that players can use their alpha brain waves to change gameplay, thus further confusing any differences between the physical and the psychological'.
Thinking about how this progression of technology might apply to sound becomes fascinating to consider, especially in relation to Collins' ideas on evoking sounds versus creating sounds. If we are able to create sounds in a game using brain waves, what would the limits be on what could be created, and how would they be implemented? If the limits allowed the player to create unique sounds, would this place the player as co-creator of aspects of the game? In correspondence with my thoughts, Collins too touches on the axis of creator and audience shifting as interactivity increases with technological feats.

She concludes the introduction to her book by discussing the term 'embodied cognition', which theorises that 'our cognitive processes use reactivations of sensory and motor states from our past experience.' To understand further, a Google search of the term gave the definition: 'Embodied cognition is an approach to cognition that has roots in motor behavior. This approach emphasises that cognition typically involves acting with a physical body on an environment in which that body is immersed'. From this we can infer that sound can be explored in many ways through the medium of mentally re-enacting our physical, embodied knowledge.

Categories
Global Sonic Cultures

Thoughts On Practical Component

“In ambient music, Eno often imitated or borrowed sounds from existing locations, and organised them in compositions to produce new environments. He often used tape loops of differing lengths played simultaneously so that their interaction randomly produced ‘sound events in periodic clusters’, in much the same way the sounds of frogs, insects and birds in a natural environment occasionally seem to express chords and melodies.” – Oblique Music, p. 90

“Eric Tamm has suggested that ‘texture and timbre may be of the essence in the ambient style, but a few general remarks may clarify the style’s use of rhythm and harmony’. He examined thirty ambient pieces, and found that eleven of these dispensed with pulse altogether, the rhythm consisting of a gentle ebb and flow of instrumental colours.”

“Harmonically, Eno’s ambient pieces often use static or ambiguous harmonies, sometimes suggestive of chords but just as often consisting of nothing but a drone with… pitches drawn from a diatonic pitch set appearing and disappearing.” Two textural principles stand out – layering and timbral homogeneity. These principles were combined so that a typical ambient piece by Eno was composed of 3 to 7 distinct timbral layers.
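To get a feel for how loops of differing lengths drift in and out of alignment, here is a small Python sketch (numpy only) in the spirit of the passage above. The pitches and loop periods are arbitrary choices of mine for illustration, not values taken from any actual Eno piece.

```python
import numpy as np

SR = 44100  # sample rate in Hz

def tone(freq, dur):
    """A soft sine 'event' with a smooth fade in and out."""
    t = np.linspace(0.0, dur, int(SR * dur), endpoint=False)
    env = np.sin(np.pi * t / dur) ** 2  # gentle attack and decay
    return 0.2 * env * np.sin(2 * np.pi * freq * t)

def loop(event, period, total):
    """Place one event at the start of a loop of a given period,
    then tile the loop to fill the total duration."""
    cycle = np.zeros(int(SR * period))
    cycle[: len(event)] = event
    reps = int(np.ceil(total / period))
    return np.tile(cycle, reps)[: int(SR * total)]

total = 60.0  # one minute of output
# Periods that don't divide into one another, like tape loops of
# differing lengths: the events drift against each other and only
# occasionally align into chord-like clusters.
mix = (
    loop(tone(220.0, 2.0), 7.1, total)
    + loop(tone(277.2, 2.0), 11.5, total)
    + loop(tone(329.6, 2.0), 13.3, total)
) / 3.0
```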

Categories
Global Sonic Cultures

Ambience & Spirituality

The world of music composition, especially in the USA in the 1960s, has deep connections with spirituality; many composers were influenced and inspired by the music and spirituality of the East (primarily India, China, Japan). The use of drones and extended durations is perhaps the most obvious result of this interaction – “The Theatre of Eternal Music delved fully into the acoustical universe of single sustained tones, compounding their deeply droning sound with extended duration, bringing each performer into a unified state” (LaBelle, 2006, p. 71). Also, on Young: “His music, in a sense, strives for the actualisation of the very perceptual tones, loud volumes, extended durations, harmonic frequencies, all encompass and overarching sonic commitment that seeks to make sound an experiential event beyond the human limits of time and space, exploiting the ear as a physiological device and the mind in its moment of perception of sound stimuli.” And: “Duration for Young is not a question of minutes and hours, but days and years.” As Philip Glass proposes, “This music is not characterised by argument and development. It has disposed of traditional concepts that were closely linked to real time, clock-time…” (p. 73)

https://www.rastoropov.co.uk/arts/sound-art/

Tom Murphy: When you were studying Eastern mysticism did you find any connections between what you learned that route and the music around you at the time? How would you describe those connections?

Laraaji: I observed that drone music at that time reflected the sensation of eternal present time which is emphasized in eastern philosophy—the continuum of consciousness. Also deep yogic level relaxation and meditation as reflected in the music of Stephen Halpern. The heightened sensation of bliss and ecstasy as reflected in the music of Iasos at the time in the late 1970s. Terry Riley.

How did you turn a zither into an electronic instrument? Was anyone doing anything comparable at the time you started doing that? Did you process those sounds early on or was it more for amplification?

My first autoharp/zither was acoustic. And after exploring alternative tunings I investigated ways to amplify it. [I then purchased] an electric pickup made especially for autoharps. I dove into amplified autoharp/zither research and decided to add sound treatment with the MXR 90 Phase shifter. After recording the album Day of Radiance with producer Brian Eno my interest in other [effects] pedals expanded to include chorus, delays, flangers and reverb.

How did you meet Brian Eno, and as a producer how involved was he in shaping the sound of Day of Radiance?

Brian introduced himself to me while I was playing Washington Square Park [in New York City in] 1978 and extended the invite to join him in his Ambient album productions. His suggestions to depend more on live studio microphones and Eventide effects, mixing as well as overdubbing a second zither helped to shape the Day Of Radiance sound.

https://queencitysoundsandart.wordpress.com/2019/07/12/ambient-music-pioneer-laraaji-on-sound-and-spiritual-practice-vision-songs-and-laughter-meditation/

There is something primevally grounding and simultaneously mystical about the penetrating hum of a drone – whether it be Tibetan deep chant, Japanese gagaku, Scottish pibroch piping, Aboriginal didgeridoo, or Hindustani classical music. [1] A lot of this music has spiritual connotations and uses. The Classical Indian tradition and Eastern spiritual philosophy and music had a steering influence over a group of European and American composers that emerged from the 1900s who were labelled as modernist, avant-garde, atonal, serialist, dissonant, and minimalist.

Deeply concerned with the implications of the advancing technological world and affected by the impact of World War and the Great Depression, they began to ask questions about music: its nature, structure and purpose. These artists particularly set out to shake the foundations of formal musical structure. Their music was mostly dissonant, chaotic, and deconstructed. Its purpose for them was much less about entertainment and more about consciously finding something which was profound and purposeful.

This paper aims to explore the use of drones and dissonance in relation to a small selection of these composers: Dane Rudhyar, John Cage, Karlheinz Stockhausen, Ruth Crawford, Arvo Pärt, and David Hykes. It also aims to look at their interest in Eastern philosophy and to enquire into the nature of drones and dissonance, to see whether they might have some kind of ability to induce a profound or spiritual experience. It raises the question of what makes music spiritual and looks at whether dissonant drones have a particular quality about them that can induce a spiritual experience.

https://www.soundtravels.co.uk/a-Dissonance__Drones__A_Spiritual_Experience-316.aspx

https://www.abc.net.au/radionational/programs/earshot/monotony-and-the-sacred/6448906

https://en.wikipedia.org/wiki/Drone_(music)

https://www.ableton.com/en/blog/drone-lab-creating-sustained-sounds-in-live-11/

https://www.screensoundjournal.org/issues/n1/06.%20SSJ%20n1%20Hayward.pdf

https://www.taylorfrancis.com/chapters/edit/10.4324/9781315249131-27/spirituality-mental-health-integrative-dimension

https://search.informit.org/doi/pdf/10.3316/informit.336502644557699

Categories
Specialising & Exhibiting Unit 01

Recording Foley For ‘We Need To Talk About Kevin’

Using the knowledge gained from the Pro Tools LinkedIn Learning short course, my classmate and I booked out the Composition Studio and Foley Room in order to record the foley for the opening scene of Lynne Ramsay’s film ‘We Need To Talk About Kevin’.

After configuring the gain structure on both Pro Tools and the external preamp found in the Composition Studio, I set about creating a new session. I had in fact already prepared a session in the previous lecture, with the clip loaded, identifying where all the most important cuts were for atmospheres, FX and the syncing of image to sound. When setting up the initial blank session, I set it to the audio file standard for working with video: 48 kHz, BWF (.wav), 16-bit. For the I/O settings I chose stereo mix, then named and saved the new session. From there I created as many new mono or stereo audio tracks as I felt I needed, taking into account that we were overlaying and recording atmospheres and foley.

Using a Sennheiser 416 shotgun mic, we recorded footsteps, cloth sounds, glass clinking and other sounds, syncing them up to the image. Given the dreamlike quality of the opening scene, instead of trying to recreate all of its sounds, which would have been a hefty task considering the sonic content, we opted to create various drones, time-stretching and manipulating them using open-source software such as Cecilia and Paulstretch, before overlaying them to create a ghostly effect. Unfortunately, due to gain-staging issues, much of the initial foley was recorded too quietly, leaving a very high noise floor once the levels were raised in the mix. While this means we may have to re-record much of the foley, it is a valuable lesson in avoiding the same mistake in future.
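For anyone wanting to reproduce the drone treatment in code rather than in Cecilia or Paulstretch, here is a rough Python equivalent. It uses librosa’s phase-vocoder stretch, which is not Paulstretch’s algorithm but gives a similarly smeared quality at extreme settings; the filename and stretch rate are placeholders.

```python
import librosa
import soundfile as sf

# Load a short foley or atmosphere recording (the filename is a placeholder).
y, sr = librosa.load("glass_clink.wav", sr=None, mono=True)

# rate < 1 slows the audio down; 0.125 stretches it to eight times its
# length, smearing the transients into a sustained, drone-like wash.
drone = librosa.effects.time_stretch(y, rate=0.125)

# Write the stretched copy at reduced level, ready for layering.
sf.write("glass_drone.wav", drone * 0.5, sr)
```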

Categories
Specialising & Exhibiting Unit 01

Pro Tools LinkedIn Learning

Having gone through the Pro Tools LinkedIn Learning course, I am now more familiar with working and mixing for film within Pro Tools. Some useful tips and techniques I have come away with include, but are not limited to:

  • Changing the edit mode in the top right of the Pro Tools window to Grid will keep my cursor accurate to frame boundaries. The grid value can then be changed to reference frames instead of seconds, making the background grid frame-accurate too.
  • Switching to Slip mode will allow me to work at a finer resolution when needed.
  • It is useful and time-saving to separate the different elements of a film’s sound – dialogue, music and sound effects – and route them to different outputs for ease of use at a later stage of work.
  • We can achieve this by sending each group of tracks to an auxiliary track, which acts as a bus – essentially, an auxiliary track is a pathway for routing audio from one place to another.
  • This is done by redirecting the outputs of all the tracks in a given element/group to a bus (e.g. Bus 1) and setting the input of the auxiliary track to that same bus.
  • It’s a good idea to colour-code stems to keep track of them.
  • Using these techniques we can build and save a working template to act as a starting point for future projects.
  • Clearly labelled timecode markers are also useful to set up before a recording session, in order to identify the key scenes/cuts for FX and atmospheres as well as sync points for image and sound.

Categories
Specialising & Exhibiting Unit 01

Storytelling Through Sound

LECTURE FOLLOW UP

Off-Screen vs. On-Screen Sound

After watching …, the importance of off-screen sound is reinforced. If done well, it should be so well integrated into the world being depicted that the average listener will most likely take what they’re hearing for granted. Yet it is the off-screen sounds and their many storytelling functions that bring much of the context to a film, including mood and location. These sounds ultimately have the power to subtly steer the film in a certain direction, and they are a constant reminder that a world exists beyond the frame that limits what we see within its boundaries. Off-screen sound can be as conventional as purely setting the scene, but it can also be used in more abstract ways, as in David Lynch’s and Roman Polanski’s emphasis on uncanny off-screen sounds to promote paranoia.

Point of View

The short animated film ‘Dustin’ that we watched during a lecture with Jessica had many scenes in first-person perspective, specifically the dog’s perspective. It made me wonder how things such as mic choice and placement, as well as mixing, could help recreate varying perspectives in a film, and perhaps in my eventual hand-in.

Diegetic vs. Non-Diegetic Sound

Going over these terms again during these lectures has reinforced what I learnt last year when studying the film ‘You Were Never Really Here’. Diegetic sound plays an obvious role in setting the narrative of a film. Non-diegetic sound, however, can include things like narration, external music and added sound effects. Whilst diegetic elements are malleable, I find it is non-diegetic sounds that can completely alter the feel of a scene. Thinking back to the opening scene of ‘You Were Never Really Here’, Jonny Greenwood’s score really sets the rhythm of the movie, its disharmonic percussion melding with the diegetic sounds of the city to create a whole new soundtrack in a way.

Rhythm & Emotion

Following on from the last paragraph, the concept of rhythm that we also touched on in class takes me back to the Making Waves documentary, in which sound editor Teresa Eckton talked about creating a pattern when overlaying the sounds of the machine guns in the disorientating opening scene of ‘Saving Private Ryan’. This order and pattern within the chaos can help the audience keep their grounding and anchor a scene. The world is full of rhythm, and this notion opens up many possibilities: using the principles of rhythm, everything from the way one breathes to the sounds in our environment, to irregularities in volume and much more, can be utilised to bring or take away tension. Watching Osbert Parker’s ‘Timeline’ trailer really showed me how field recordings could be combined to create an ever-changing tempo, and through this tempo an aural story.

Categories
Specialising & Exhibiting Unit 01

SOUND FOR SCREEN WEEK 3 – CHION DEFINITIONS

After our first lecture with Jessica, I found myself mulling over the many different ways in which sound affects, alters, modifies and adds new meaning to the moving image. After touching on the French film theorist and experimental music composer Michel Chion’s book Audio-Vision, I decided to take it out from the library in order to build on the terms we’d been introduced to.

One of these was ‘Acousmêtre’: a voice that is heard but whose source is not seen, therefore shrouded in mystery and given an air of omniscience, much like in The Wizard of Oz. What I found interesting was the loss of imagined power when the source of an acousmatic voice is revealed to its audience, and how this could be wielded for creative effect.

Another was ‘Synchresis’, referring to the forging of a relationship between something one sees and something one hears, and how this syncing of sound and image allows for reassociation. A better way to put it would perhaps be that the combination of sound and image becomes one perceived thing, not two separate entities playing in unison. Examples are seen in Tati’s film ‘Mon Oncle’, where ping-pong balls and glass objects were used for the sound of footsteps. ‘Certain audiovisual combinations will come together through synchresis and reinforce each other’.

From what I have gauged, Chion tries to communicate the importance of how effects are perceived by the audience as a whole, instead of solely concentrating on the individual components of a film. One of the more intriguing of his terms was ‘Sound en creux’. Directly translated as ‘sound in the gap’, it points to the silence we hear between the sounds in a film, and to the sound designer’s duty to recognise the intimacy and emotional intensity of these ‘gaps’. The silence in between music and dialogue sets the scene and glues the film together, and thus through these gaps we are given the opportunity to subtly compose the overarching theme of the film.