Categories: Personal/Relevant

Ambisonics

In a stereo mix, simple movement of sounds can be enough to keep our brains engaged. Ambisonics, first developed in the 1970s, has had a recent resurgence due to the possibilities it offers for immersive audio in Virtual Reality. Ambisonics allows us to create a 360-degree sound field that can be decoded into a number of different formats.

Most of the time when recording, we use two mics in what’s known as an X-Y configuration. On playback, one mic feeds the left channel and the other the right, creating what’s known as stereo playback.

Yet we can also record in stereo using other methods, such as mid-side recording. In this technique a cardioid microphone (top) records the middle of the sound field while the other mic (bidirectional) records both sides in a figure-of-eight pattern. Since one mic is recording the centre and the other the sides, it would not make aural sense to simply assign them to the left and right speakers as we would with an X-Y recording. Instead, the signals need to be combined and decoded. This is also the case for ambisonic recordings.
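The mid-side decode itself is just a sum and a difference. A minimal sketch (the 0.5 scaling is one common convention for keeping levels comparable; others exist):

```python
import numpy as np

def ms_decode(mid, side):
    """Decode mid-side signals into left/right stereo.

    Left is mid plus side, right is mid minus side; the 0.5
    factor keeps the decoded level comparable to the inputs.
    """
    left = 0.5 * (mid + side)
    right = 0.5 * (mid - side)
    return left, right

# A sound dead centre has no side component, so it decodes
# identically into both speakers.
mid = np.array([1.0, 0.5, -0.25])
side = np.zeros(3)
left, right = ms_decode(mid, side)
```

Encoding back to mid-side is the same sum and difference in reverse, which is why M/S and L/R representations are interchangeable.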

With the mid-side technique, two microphones capture one dimension of audio – left and right. For ambisonic recording, all we need in addition is another bidirectional microphone for front and back, and another for up and down, giving us a 3D recording.

Most ambisonic recorders, such as the H3-VR, don’t use bidirectional microphones. Instead they have four cardioid microphones set up in a tetrahedral mic array. This creates four audio tracks that we combine to create a 3D sonic image.

The raw recording from one of these tetrahedral microphones comes in the form of a file known as A-format: a four-channel audio file containing the input of each mic in the tetrahedral array.

With the ambisonic tetrahedral array, the four capsule signals are combined into channels covering mid, side, up/down and front/back. More specifically, they are normally labelled W for the centre omnidirectional channel, X for the bidirectional front/back, Y for the left/right and Z for the up/down.
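The combination itself is a simple sum-and-difference matrix. A minimal sketch, assuming capsules ordered front-left-up, front-right-down, back-left-down, back-right-up (real converters also apply microphone-specific gain and filter corrections):

```python
import numpy as np

def a_to_b(flu, frd, bld, bru):
    """Convert tetrahedral A-format capsule signals to B-format.

    W sums all four capsules (omni), while X, Y and Z take
    differences along the front/back, left/right and up/down
    axes respectively.
    """
    w = flu + frd + bld + bru
    x = flu + frd - bld - bru   # front/back
    y = flu - frd + bld - bru   # left/right
    z = flu - frd - bld + bru   # up/down
    return w, x, y, z

# An identical signal in all four capsules cancels in every
# directional channel and sums in W.
sig = np.array([1.0, -1.0, 0.5])
w, x, y, z = a_to_b(sig, sig, sig, sig)
```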

Another format, known as B-format, is essentially the A-format after one stage of conversion. There are two types of B-format file, namely AmbiX and FuMa. They order the channels in differing ways and have different relative amplitudes, so it is important to check which format the software or programme you intend to use expects. One can convert between AmbiX, FuMa and various other formats using software such as the Zoom Ambisonics Player or the downloadable Ambeo Orbit conversion tool.
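At first order, the difference between the two comes down to channel order and the gain on W. A minimal sketch of a FuMa-to-AmbiX conversion (first order only; higher orders need a fuller conversion matrix):

```python
import numpy as np

def fuma_to_ambix(fuma):
    """Convert first-order FuMa (W, X, Y, Z) to AmbiX (W, Y, Z, X).

    FuMa attenuates W by 3 dB, so W is boosted by sqrt(2);
    at first order the directional-channel gains coincide,
    so X, Y and Z only need reordering.
    """
    w, x, y, z = fuma
    return np.array([w * np.sqrt(2.0), y, z, x])

fuma = np.array([1.0, 2.0, 3.0, 4.0])   # W, X, Y, Z
ambix = fuma_to_ambix(fuma)
```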

Recording with the four-capsule tetrahedral mic array is known as first-order Ambisonics. When more than four mics and audio channels are used, the resulting recording is known as higher-order Ambisonics. The additional tracks help create a fuller 360-degree sphere of sound, filling in the diagonal dead spots between the microphones.

Furthermore, ambisonic files must be decoded for playback. The great thing about ambisonic files is that they’re not channel-dependent: an ambisonic file can be decoded into any number of speaker configurations (stereo or quad, for example).
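A simple way to decode is to point “virtual microphones” at each speaker position. A sketch for stereo, assuming AmbiX-style channels where W is a true omni (the ±45-degree angle is an arbitrary choice, not a standard):

```python
import numpy as np

def decode_stereo(w, x, y, angle_deg=45.0):
    """Decode first-order B-format to stereo with two virtual
    cardioid microphones aimed left and right of centre.

    Z is ignored, since a flat stereo pair carries no height.
    """
    a = np.radians(angle_deg)
    left = 0.5 * (w + x * np.cos(a) + y * np.sin(a))
    right = 0.5 * (w + x * np.cos(a) - y * np.sin(a))
    return left, right

# A source dead ahead (no Y component) decodes equally into
# both speakers; a source to the left favours the left channel.
sig = np.array([1.0, 0.5, -0.25])
centre_l, centre_r = decode_stereo(sig, sig, np.zeros(3))
left_l, left_r = decode_stereo(np.ones(3), np.zeros(3), np.ones(3))
```

Decoding to quad or any other layout is the same idea with more virtual microphones, which is what makes the format channel-independent.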

Ambisonics offers us an intuitive way of recording and hearing the sounds around us in a very realistic fashion. Its use in immersive and virtual environments may hold answers to some contemporary and future audio challenges.

After going on a sound-walk with the H3-VR, I was able to familiarise myself with many elements of ambisonic recording, such as the mic’s test tones and the various formats I could record in. I also experimented with recording certain sounds using different mic positions – specifically front-facing, upside-down and endfire – in order to get a feel for how the recordings’ respective stereo images would differ on playback.

Categories: DEVICES, Personal/Relevant

Convolution Reverb

Convolution Reverb Pro is a sample-based Max for Live device that allows one to apply the reverberation of a particular real-world space, captured as an impulse response (IR), to a sound, creating the illusion that the input was recorded in that space. It is essentially the process of filtering a source sound through a digitally stored room sample. This can lend a sense of roominess, but also tonal character and width.

The method used to capture the reverberation of a given space involves playing a loud, broadband sound, such as a gunshot, into the area, recording the result and then removing (deconvolving) the original sound, leaving us with a sonic footprint of the space. This makes convolution reverb an invaluable tool compared with more conventional digital reverbs, which typically use algorithms to simulate acoustic reverb.
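Under the hood, applying the IR is just convolution, usually carried out with FFTs for speed. A minimal sketch (mono, with no wet/dry control or level normalisation):

```python
import numpy as np

def convolve_ir(dry, ir):
    """Convolve a dry signal with an impulse response.

    Multiplying the two spectra is equivalent to time-domain
    convolution, but far faster for long IRs.
    """
    n = len(dry) + len(ir) - 1            # length of the full convolution
    size = 1 << (n - 1).bit_length()      # next power of two for the FFT
    wet = np.fft.irfft(np.fft.rfft(dry, size) * np.fft.rfft(ir, size), size)
    return wet[:n]

# Convolving with a unit impulse returns the signal unchanged:
# a single-sample "room" adds no reverberation at all.
dry = np.array([1.0, 2.0, 3.0])
wet = convolve_ir(dry, np.array([1.0]))
```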

I experimented with this by convolving a sample using a kick drum as the impulse response. I found I was able to accentuate the low frequencies of the kick within the sound of the sample, while also dampening the frequencies of the sample that aren’t in the kick IR file. This proved a useful tool, allowing me to achieve a clearer bass sound without having to use a low-pass filter, which would have cut many of the sample’s harmonics.

Used creatively, this device allows one to explore various spaces of sound, including unconventional ones such as the kick I used, to introduce a whole new texture to a sound or body of work. The same device also facilitates the combination of two IRs, one for the early reflections and one for the late reflections. This hybridisation of multiple spaces further adds to the versatility of Convolution Reverb Pro.

Another interesting thing I found was that Ableton’s Hybrid Reverb actually combines both convolution and algorithmic reverbs.

Categories: DEVICES, Personal/Relevant

Shimmer Reverb

Artificial reverb effects often use some combination of echoes with short delay times to recreate the acoustics of a particular space. By introducing enhanced harmonics of the input signal during the reverberation process, we can create a shimmering effect. One way to acquire said harmonics is to use real-time pitch shifting with feedback delay.
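The pitch-shift-in-a-feedback-loop idea can be sketched offline in a few lines. This uses a deliberately naive octave shifter (double-speed playback, which halves duration as well as raising pitch); a real shimmer would use a granular or phase-vocoder shifter with a diffusing reverb stage inside the loop:

```python
import numpy as np

def octave_up(x):
    """Naive octave shift: keep every other sample (double-speed
    playback), then zero-pad back to the original length."""
    shifted = x[::2].copy()
    return np.pad(shifted, (0, len(x) - len(shifted)))

def shimmer(x, feedback=0.5, passes=4):
    """Sum successive octave-shifted copies of the input, each
    attenuated by the feedback gain – the feedback structure
    behind a shimmer effect, with the reverb stage omitted
    for brevity."""
    out = x.astype(float).copy()
    layer = x.astype(float)
    for _ in range(passes):
        layer = feedback * octave_up(layer)
        out += layer
    return out

# One pass over a constant signal adds a half-strength,
# half-length octave layer on top of the original.
one_pass = shimmer(np.ones(8), feedback=0.5, passes=1)
```

Each pass rises another octave, which is why shimmer tails seem to climb endlessly upwards.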

Instead of using the shimmer reverb VSTs downloaded on my computer, I set about trying to recreate the effect myself using Ableton’s stock plugins. Drawing up an Audio Effect Rack, I created an effects chain consisting of a Reverb, Grain Delay, Ping Pong Delay and another Reverb, in that order. The essential ingredients of a shimmer effect are a sound diffusing (Reverb) while the resulting diffusion shifts in pitch (Grain Delay); I then spread this diffusion out further with the following reverb and some slight delay.

Firstly, bypassing the first instance of Reverb, I set the frequency of the Grain Delay to about 5 Hz, took the pitch all the way up to 12 (an octave), and left its random pitch and feedback settings at 0. I set the dry/wet mix to 75 per cent and unchecked the sync box, allowing me to push the delay time all the way up to 128 ms.

Turning the first Reverb back on, I increased its size and the decay time to about 10 seconds. After applying some input processing by cutting a little of the lows, I brought the modulation speed up to just above 1 Hz in the early reflections, then slightly increased the volume of the diffuse section – all of which made the Grain Delay’s pitch shifting a little less prominent in the mix. The Ping Pong Delay just after the Grain Delay also helped spread things out in the stereo field, so that nothing about the resulting shimmer was too centred, giving it the right ambience to flourish in the background. Using another instance of Reverb at the end of the chain, with similar processing to the one that preceded it, I continued to tweak settings such as the feedback in order to explore and apply more character to the shimmer, aiming for an ethereal, cavernous feel.

Categories: Global Sonic Cultures, Personal/Relevant

Account of a Gig Based on Geertzian ‘Thick Description’

‘The term thick descriptions was first used by Ryle (1949) and later by Geertz (1973) who applied it in ethnography. Thick description refers to the detailed account of field experiences in which the researcher makes explicit the patterns of cultural and social relationships and puts them in context.’

In a shabby, dimly lit basement room, underneath an Italian restaurant in Dalston, somewhere around 50 sweaty people huddle around a low-rise stage, occupied by various machines adorned with wires wrapping around one another like vines. Blueish purple hues of light intermittently scatter over their heads as a silhouette emerges onto the stage. As the overhead light reveals the figure in a flurry of deep red, wolf whistles and applause fill the room, drowning out the remains of quiet chatter. Seemingly transfixed by the hardware in front of him, he proceeds to engage with its knobs and buttons. Before anyone has a chance to prepare, a cascade of notes dance into everyone’s ears, shortly followed by a wall of soothing bass tones. The crowd sways in response, moving as one, to the rich chordal harmonies emanating from a sound-system hidden behind a sea of people, as if in some form of premeditated choreography. As the music comes to an end the clink of glasses and the shuffle of footsteps slowly become audible again. A second figure steps onto the stage, warmly welcomed by cheers and whistles. After briefly addressing the crowd he makes a hand gesture to the first figure and the sound of a drum loop cuts through the room. Dusty in texture and solid in pocket, all members of the crowd succumb to its groove, invited to move their bodies in unison with it. Flesh rubs on flesh as people compete for space to express themselves through movement. The heat contained within this relatively small room starts to become more and more noticeable. Beads of sweat glimmer in the hazy lighting as the crowd, one by one, start stripping off their outer layers. The second figure starts to rap into a mic, decorating the drum loop with poetic efficiency. The heat, whilst borderline unbearable, is forgotten about for a brief moment as the crowd find themselves hypnotised by the performance.
United in a common appreciation for this particular vibe, the crowd is comforted in the unspoken camaraderie that a shared music taste can bring.

Categories: Creative Sound Projects, Personal/Relevant

Radio Art – Wide-band WebSDR

‘Wide-band WebSDR is a web controlled receiver located at the amateur radio club ETGD at the University of Twente’, and it can be used as a tool to explore frequency bands. Using the waterfall display, which graphically illustrates the signals across a frequency range, I quickly discovered through trial and error that the varying shades of purple showed where I could tune into radio stations. The colour-coding of the waterfall display appears to attach lighter shades of purple to stronger signals.
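A waterfall is essentially a stack of short-time spectra: each row is the magnitude spectrum of one slice of signal, and brightness maps to magnitude. A minimal sketch (the frame and hop sizes are arbitrary choices):

```python
import numpy as np

def waterfall(signal, frame=256, hop=128):
    """Build a waterfall: the magnitude spectra of successive
    windowed frames, stacked over time. Each row is one moment
    in time, each column one frequency bin; larger values are
    what a display such as WebSDR's draws in brighter colours."""
    window = np.hanning(frame)
    rows = []
    for start in range(0, len(signal) - frame + 1, hop):
        chunk = signal[start:start + frame] * window
        rows.append(np.abs(np.fft.rfft(chunk)))
    return np.array(rows)

# A steady 1 kHz tone at an 8 kHz sample rate appears as one
# bright vertical line: the same bin peaks in every row.
sr = 8000
t = np.arange(sr) / sr
wf = waterfall(np.sin(2 * np.pi * 1000.0 * t))
```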

The waterfall’s visual presentation of a frequency range and all the signals across it is, in my opinion, intuitive and much easier to navigate than scanning across bands as you would on a traditional radio. It gives us more control in locating radio stations, jumping from frequency to frequency at will. I also found that switching the view from waterfall to spectrum showed stronger signals as taller peaks rather than brighter colours.

Exploring this system, comprising a Mini-Whip antenna and a home-built SDR board, has given me real insight into the shape of the frequency spectrum in short-wave radio.

References

Amateur Radio Stack Exchange (2013). What is a waterfall display? [online] Available at: https://ham.stackexchange.com/questions/889/what-is-a-waterfall-display [Accessed 15 Feb. 2021].

Utwente.nl. (2012). Wide-band WebSDR in Enschede, the Netherlands. [online] Available at: http://websdr.ewi.utwente.nl:8901/ [Accessed 15 Feb. 2021].

Categories: Creative Sound Projects, Personal/Relevant

Radio Art – Locus Sonus

Exploring the Locus Sonus app has been an eye-opening experience, giving me much insight into the process of live streaming. I was given the opportunity to create a collage of sorts by melding various streams from different locations. Using Locus Sonus’ sound map, I decided to overlay a Wave Farm Pond Station in New York, sounds picked up by a mic set up in a Brazilian rainforest, and the auditory environment of a Dutch farm. The ambience that ensued was fascinating, as I was able to sculpt a completely different soundscape, giving the illusion that the resulting sound piece described a completely different location. The prevailing weather patterns, flora and fauna and general environment of all three rural locations gave rise to a very different sonic context: specifically, I felt, a soundscape of a tropical beach, wind turning to waves. A subjective observation utilising the objectivity of the respective environments.

I also found the live stream broadcasting aspect much more engaging than listening to pre-recordings as the anticipation of discovering something unexpected in real-time felt exciting and inclusive.

Using my iPhone to stream sounds from my garden via the Locus Cast app, I pondered the nature of streaming my own immediate environment, and found it interesting to hear sounds on my stream that have a place in my memory through day-to-day conscious and subconscious listening – sounds that I may be used to, but can now hear from a different perspective. This idea of hearing your immediate environment in the third person made it seem somewhat ghostly and voyeuristic, as if being watched (by myself), simultaneously being the listener and the recorder. The geo-locator (sound map) added to this oppressive feeling of invasion of privacy and raised issues of self-perception, despite my having willingly chosen to stream my surroundings. A slight delay from my iPhone to the actual stream (between 4 and 8 seconds) was also intriguing, as it felt as if I was hearing a version of myself that no longer existed – travelling through time via a sonic mirror.

Categories: Creative Sound Projects, Personal/Relevant

Radio Art – Lance Dann, The Flickerman (INC) 2009

I noticed many clever auditory techniques in Lance Dann’s The Flickerman. Immediately, the overlapping of hushed vocals stood out to me, creating a seamless conversation that nevertheless felt like a collage of words. A sense of hurriedness is apparent through this, and it opens the piece with an air of nervousness and secretiveness. All of this is cleverly coloured with well-placed drone sounds and deep, gong-like drums. I also picked up on the muffled quality of the backing soundscape and the stretched, pitched-down background vocals once the narrator says ‘and then everything started to happen really slowly’, as the first climax begins. It seems that throughout the story the actual sounds involved in the tale – whether birds taking off or the sound of vocals – as opposed to external, unrelated sound effects or instruments, are manipulated to accompany the script’s current mood, integrating the story itself with our auditory experience.

At times the surrounding voices seem to distort as they get louder. This brings into question the grain of the human voice and its importance in radio art. Having been given the role of interviewer in my group’s sound piece, this observation may be useful to implement, taking special care in how I present and project my voice in order to complement the piece’s theme. Though it may be more important in the actual editing of the vocals, exploring ways in which the tone of the voices involved can be altered.

Silence too plays an important role in emphasis, but also as a marker for the direction or culmination of events. This can be heard at the end of The Flickerman’s sound effect sequence, just before what seems quite obviously to be an explosion of some sort. The silence makes it all the more potent but also gives the listener an even more accurate idea of what is being experienced, as if the world stands still for us and the characters just before the inevitable explosion.

After this week’s lecture I discovered that the ‘whooshing’ that follows the silence was achieved by reversing a sample. Further to this, at another point in the piece, the sound of birds flapping their wings seems to be drowned in reverb. These examples of manipulating the story’s inherent sounds to aid the visual imagery will make the process of curating samples for the group collaboration more specific, as I’ll be searching for sounds and effects that actually identify with the piece.