
Playing around with ambisonics


Justin Bennett has been in Bergen for the last week doing supervision with students. When I heard that he would be coming I managed to reserve him for a day, so yesterday was spent in ambisonics nerd paradise. We’ve both been playing around with ambisonics for a while in order to see if we can find meaningful ways of using it in projects. So far I’ve only been working on mono and stereo sound sources that are mapped to an ambisonic field. Justin has gotten hold of an old SoundField microphone, and he was kind enough to bring it to Bergen. We set up a horizontal hexagonal loudspeaker setup and started testing various ideas and techniques.
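For reference, the basic math behind mapping a mono source to a horizontal first-order ambisonic field and playing it back over a regular hexagon is fairly compact. The sketch below is my own minimal illustration (not the code we actually used); it uses a plain sampling decoder and leaves out the Furse-Malham convention of scaling W by 1/√2:

```python
import numpy as np

def encode_bformat(signal, azimuth):
    """Encode a mono signal into horizontal first-order B-format.

    azimuth is in radians, counter-clockwise from the front.
    (The Furse-Malham convention scales W by 1/sqrt(2); omitted
    here to keep the projection decoder below simple.)
    """
    w = signal
    x = signal * np.cos(azimuth)
    y = signal * np.sin(azimuth)
    return w, x, y

def decode_regular(w, x, y, n_speakers=6):
    """Simple sampling decoder for n equally spaced speakers,
    speaker 0 at the front."""
    feeds = []
    for i in range(n_speakers):
        phi = 2 * np.pi * i / n_speakers
        feeds.append((w + 2 * (x * np.cos(phi) + y * np.sin(phi))) / n_speakers)
    return feeds
```

A source encoded straight ahead ends up loudest in the front speaker and progressively weaker in the speakers further around the hexagon.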

Justin played sound from several takes and projects where he has been working with ambisonics. He has been doing a series of sound time-lapse projects, mounting the mic somewhere, usually on a roof or similar, and then recording for a minute or so every half hour for 24 hours. The resulting recordings he has compressed into a sound piece lasting e.g. 8 minutes, scanning the sound of the city.
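As a rough sketch of the arithmetic involved: 48 half-hourly takes compressed into an 8-minute piece leaves about 10 seconds per take. The function below is purely illustrative, simply taking an equal slice from each clip in order; the actual pieces were presumably assembled and shaped by ear:

```python
import numpy as np

def timelapse(clips, target_seconds, sr=44100):
    """Compress a day of interval recordings into one short piece.

    clips: list of 1-D arrays, e.g. 48 one-minute takes recorded
    every half hour for 24 hours. Each clip contributes an equal
    slice so the result scans the whole day in chronological order.
    """
    slice_len = int(target_seconds * sr / len(clips))
    return np.concatenate([clip[:slice_len] for clip in clips])
```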

Justin also showed me a program (Max patch) he has been working on for quite a while and that has been used in several of his projects. Various sound sources are positioned at various spots in a two-dimensional horizontal plane. By moving around in the plane the various sound sources will come near or move away from you. There is also the possibility of moving the sound sources, rotating the sound sources (those that have stereo information) and rotating the position of the virtual listener. I was very impressed by what it sounded like. I’m not sure that I would hear or understand that I’m moving between various sources, but I still got a strong impression that sources were near or far away, and the change from listening to one source to another was extremely organic and plastic.
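The core idea of a movable, rotatable virtual listener can be sketched in a few lines. This is my own reconstruction of the principle, not Justin's patch: for each source you compute the azimuth and distance relative to the listener, rotate the azimuth by the listener's heading, and attenuate with distance before ambisonic encoding:

```python
import math

def source_azimuth_and_gain(source_xy, listener_xy, listener_heading):
    """Relative azimuth (radians) and distance gain of one source,
    as heard by a virtual listener that can move and rotate.

    The 1/distance roll-off and the clamp at one unit of distance
    are illustrative choices, not taken from the actual patch.
    """
    dx = source_xy[0] - listener_xy[0]
    dy = source_xy[1] - listener_xy[1]
    distance = math.hypot(dx, dy)
    # Rotate the world by the listener's heading.
    azimuth = math.atan2(dy, dx) - listener_heading
    # Clamp to avoid the gain blowing up when very close.
    gain = 1.0 / max(distance, 1.0)
    return azimuth, gain
```

Feeding each source's signal, scaled by its gain, into an ambisonic encoder at its relative azimuth then yields the kind of smooth near/far transitions described above.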

When I first started working on multi-speaker setups it was motivated by a desire to create a situation where the audience would be situated inside the sound. This technique definitively gave that impression.

Listening to surround recordings gave a very different impression, and I was surprised and maybe disappointed. There was a great richness in where the sound originated from, but I always had a very strong sensation that the sound was coming from out there, some sort of sphere surrounding me on various sides but never becoming one with me. I felt that the border between me and the sound was defined to a much higher degree. This is not to say that the technique was not effective, it definitely was. But I was caught by surprise at how strongly I reacted to the resulting impression. Aesthetically it was very clear that this was not what I want, rather the complete opposite. I felt a Cartesian distinction or separation of mind and body that I was very uncomfortable with.

Later on I showed some of what I’ve been working on, but first I had to show him the Hipnoscope user interface of the Hipno plug-ins, as there were so many resemblances between the idea behind that interface and how Justin has been positioning sound in space. Apart from that I showed him how I’ve been encoding stereo sources to B-format, as well as the ambisonics objects by jasch and the ones developed at Centre de recherche Informatique et Création Musicale. For some reason the objects by jasch did not manage to position sound convincingly in the space. I don’t know why, I’ve had them working well before, so I’ll have to investigate further.
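One straightforward way of encoding a stereo source to B-format is to treat the two channels as virtual mono sources at mirrored azimuths around the front. The sketch below assumes this approach with a ±45° spread; both the approach and the spread are my own assumptions for illustration, not necessarily what was used in the sessions:

```python
import numpy as np

def encode_stereo_bformat(left, right, width=np.pi / 4):
    """Encode a stereo pair into horizontal first-order B-format by
    placing the channels as virtual sources at +/- width radians.

    The +/-45 degree default spread is an illustrative assumption.
    """
    w = x = y = 0.0
    for sig, az in ((left, width), (right, -width)):
        w = w + sig                   # omnidirectional component
        x = x + sig * np.cos(az)      # front-back figure-of-eight
        y = y + sig * np.sin(az)      # left-right figure-of-eight
    return w, x, y
```

Rotating the stereo image then amounts to changing the two azimuths before encoding, which is what makes source rotation cheap in the ambisonic domain.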

We ended the day by doing some B-format recordings that I can keep for further testing. I’ve always loved the sound of the cars passing by on the street below my office. Today was the day to capture it.

Update: Looking into the externals developed at Centre de recherche Informatique et Création Musicale I discovered that they are now released under a GNU LGPL license. This implies that it will be possible to modify the code for inclusion in Jamoma.




Licensed under a Creative Commons Attribution 3.0 Norway License. Web site hosted by BEK.