Archive for September 2005
…and hardly any time for blogging. Kurt Ralske arrived in Bergen Sunday. We got all of the equipment to the gallery space Sunday evening. Today (or sort of yesterday) we’ve been working on how to use the room. Some surprising solutions that once more prove that I can’t finish the sound before I move into the gallery space. The solution we ended up with turns everything upside down soundwise.
Late in the evening/into the night working on the stuff for Generator.x.
The Max ToolBox by Nathanaël Lécaudé is one of the coolest things to come around in MaxMSP in a long while for improving the workflow. Along with the improved features for templates, encapsulation and use of the clipboard that came as part of the Max 4.5.5 incremental upgrade, some tedious tasks when patching in Max now take much less time than they used to.
This autumn there will be a series of seminars on music and philosophy at the Nordic Institute, University of Bergen. Program below.
Seminar on music and philosophy, autumn 2005
Wednesdays 18.15-20.00, seminar room 400, Nordisk institutt, HF building
Wednesday 14 September
– Music dramaturgy and aesthetics –
Tore Vagn Lid, Institut für angewandte Theaterwissenschaft,
Justus-Liebig Universität Giessen
Wednesday 28 September
– Phenomenology and music cognition –
Ole Ludvig Krüger, Filosofisk institutt, UiB
Wednesday 12 October
– Music, language and body – Adorno on gestures and music –
Erik Steinskog, Griegakademiet – Institutt for musikk, UiB
Wednesday 26 October
– Rhetoric and music. Rhetorical theory related to musical practice –
Njål Vindenes, Avdeling for lærerutdanning, HiB
Wednesday 9 November
– Music and metaphorics –
Kristin Sampson and Vibeke Tellmann, Filosofisk institutt, UiB
Wednesday 23 November
– A phenomenological analysis of the 2nd movement
– Andante Cantabile – of W. A. Mozart’s Sonata in C major, KV 330 –
Annabel Guaita, Griegakademiet – Institutt for musikk, UiB
NB! Venue: Griegakademiet room 206
For more information about the individual seminars, contact:
firstname.lastname@example.org email@example.com firstname.lastname@example.org
Thomas Chen has suggested a B+ format combining 1st order ambisonic B-format with two additional channels carrying an L/R dry recording. He elaborates on the idea here.
One of the really interesting aspects of this idea is the separation of dry signal and ambience, so that the balance of dry and ambient signal can be adjusted dynamically depending on the space used for playback as well as the preferences of the listener.
From the perspective of how I usually work on sound for (gallery) spaces, one of the disadvantages of the format is that it assumes a frontal listening situation. B+ might work well for recreating a concert setting, movie etc., but it’s not able to project the dry signal into three-dimensional space. The format also lacks the front center channel for dry signal that is often used in 5.1 for better localisation of speech.
Maybe a better (but more expensive) approach would be to extend the format with two additional channels and make it a pair of B-format signals, one for the dry signal and another for the ambience.
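A minimal sketch of this dual-B-format idea, assuming a simple first-order horizontal encode; the 1/sqrt(2) gain on W and the gain arguments are common conventions used for illustration, not part of any published format:

```python
import math

def encode_bformat(signal, azimuth_deg):
    """First-order horizontal ambisonic encode of a mono sample at a given azimuth."""
    az = math.radians(azimuth_deg)
    w = signal * math.sqrt(0.5)   # common 1/sqrt(2) gain convention for W
    x = signal * math.cos(az)
    y = signal * math.sin(az)
    return (w, x, y)

def mix_dual_bformat(dry_wxy, amb_wxy, dry_gain, amb_gain):
    """Blend two B-format streams (dry and ambience) with independent gains,
    so the dry/ambient balance can be adjusted at playback time."""
    return tuple(dry_gain * d + amb_gain * a for d, a in zip(dry_wxy, amb_wxy))
```

For example, a dry source encoded at 0° can be mixed with a separately encoded ambience stream, and the two gains adjusted for the playback space.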
Chen has implemented modules for working with B+ in Creamware Sonar, but I’m not sure if they have (yet) been released commercially.
I’ve asked on the Sursound mailing list how to convert a stereo signal to 1st order B-format and received several suggestions. The one implemented in the module posted here yesterday uses the super stereo equations given here. Once I’ve finished the Generator.x and White-out installations I’ll make a more generic patch that can be used to test all of them, and wrap it as a Pluggo VST so that others can use it to test the various solutions as well. Hopefully I’ll be able to get that working for Mac and Windows.
Below I’ve summed up the information and suggestions received.
All equations now notated as
M = L + R
S = L – R
j = 90° phase shift
a = Width control varying between 0 and 180°
j represents a wide band 90 degree phase difference between channels – it’s the “between channels” bit that made it workable in the analog days. You’d pass both the sum and the difference signals through similar wideband phase shift circuits, one of which was designed to have 90 degrees more phase shift than the other. However, as Super Stereo is really a way of using stereo in the context of a UHJ-decoder-driven speaker system, if you are working with B-format exclusively you may or may not find the extra phasiness inherent in this useful. Your original idea of panning the two channels left and right into the image works for me, and is essentially what Thomas Chen’s formula does. However, this system works best with coincident pair recordings, and if yours are spaced pairs the results may be more variable, as both the sum and the difference signals are highly frequency dependent (and before someone pounces on that: yes, so are coincident pairs, but at a much higher frequency…)
Michael’s solution circa late 80’s and used in most Ambisonic
decoders of that period:
W’ = 0.367 M + j 0.3586S
X’ = 0.529 M – j 0.404 S
Y’ = S
W’ = 0.717 M – 0.291 j S
X’ = 0.717 M + 0.291 j S
Y’ = 0.583 S
Note that you can vary the image width in this scheme by varying
the gain on the difference signal S:
W’ = 0.65M – 0.27jS(a/180)
X’ = 0.98M + 0.40jS(a/180)
Y’ = 0.75S(a/180)
Richard Lee suggests two additional solutions:
W’ = 1.230 M – j 0.499 S
X’ = 1.230 M + j 0.499 S
Y’ = S
W’ = 0.8667M – j 0.360 S
X’ = 1.307M – j 0.533 S
Y’ = S
For addition of stereo to B-format in my B+, I add L+R to W, L–R to Y and L+R to X. By adjustment of X and Y you are able to get a good stereo signal with the B-format.
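Of the suggestions above, Chen’s sum/difference approach is the only one needing no 90° phase shift, so it can be sketched in a few lines; the gain values are illustrative placeholders, and the variants using j would additionally need a wideband Hilbert transform:

```python
def stereo_to_bformat(left, right, x_gain=0.5, y_gain=1.0):
    """Pan a stereo pair into first-order horizontal B-format using
    sum/difference signals, in the spirit of Thomas Chen's suggestion.
    x_gain and y_gain are illustrative, not canonical coefficients."""
    m = left + right   # sum (mid) signal, M = L + R
    s = left - right   # difference (side) signal, S = L - R
    w = m              # omnidirectional component
    x = x_gain * m     # push the image toward the front
    y = y_gain * s     # spread the image left/right
    return (w, x, y)
```

Adjusting `x_gain` against `y_gain` trades frontal localisation against width, which matches Chen’s remark about adjusting X and Y.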
Two ambisonic modules made using Jamoma. The first module takes three stereo signals and encodes them as a 1st order B-format signal. The second module decodes a 1st order B-format signal as binaural. All DSP processing in the upper module is done using one [matrix~ 6 3] object, so computationally this is fairly cheap.
Coefficients for the matrix are calculated using the Ircam FTM library. Decoding is done using an ambisonic decoding object by jasch, and the binaural transform uses the Ircam Forum Spat library.
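The [matrix~ 6 3] processing amounts to a plain 6-in/3-out matrix multiply. A sketch of the idea, treating each of the six input channels as a mono source at its own azimuth; the azimuth-based coefficients follow a standard first-order encode and are an assumption here (the actual module computes its coefficients with FTM):

```python
import math

def encoder_matrix(azimuths_deg):
    """Build a 6-in / 3-out coefficient matrix for encoding six input channels
    (three stereo pairs) as first-order B-format (W, X, Y), analogous to the
    [matrix~ 6 3] object. One azimuth in degrees per input channel."""
    rows = []
    for az_deg in azimuths_deg:
        az = math.radians(az_deg)
        rows.append((math.sqrt(0.5), math.cos(az), math.sin(az)))  # W, X, Y gains
    return rows

def encode(samples, matrix):
    """Multiply one frame of six input samples by the matrix -> (W, X, Y)."""
    w = x = y = 0.0
    for s, (gw, gx, gy) in zip(samples, matrix):
        w += s * gw
        x += s * gx
        y += s * gy
    return (w, x, y)
```

Since the coefficients only change when a source moves, the per-sample cost is just the six multiply-adds per output channel, which is why this approach is cheap.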
A while ago (24 Aug 2005) Dave Hunt wrote about his experiments using reverbs in ambisonic settings on the Sursound mailing list. His mail suggested a number of different approaches that might be worth testing:
I seem to have been fiddling around with this for ages. There are many possible parameters: basic geometry and dimensions of the space, reflection/absorption/dispersion characteristics of the surfaces and their distribution, their directional characteristics, the directional characteristics and position in the ‘space’ of the source, etc.
For existing spaces there is the possibility of B-format (or higher) impulse responses and convolution. This is what I gather is used in the Waves convolution reverb. However, I doubt that this is attempting the next to last of the above parameters (the one before the etc.).
This gives rise to a problem of ‘reverb architecture’. Good synthesised reverb takes a lot of DSP. Additionally it is often used as a ‘binding’ effect: sources are present in a common space. This, and basic economics, leads to a send/return architecture. Individual sources are sent to a ‘global’ reverb, the basic parameter being ‘how much do you want’. In many ‘stereo’ reverbs the two inputs are summed to mono before being sent to the reverb engine, the stereo output being synthesised.
I’ve seen a couple of plug-in surround reverbs that attempt to get round this: a panner in Digital Performer, and Spat from Ircam (there may well be others). They amount to a ‘reverb’ per channel and become a bit unusable as the number of sources increases. The DSP count mounts, and having to adjust more than one to simulate a common space becomes very tedious.
It would seem that there should be a workable compromise solution: some sort of early reflection simulation on a per-source basis feeding a global reverb. The basic dimensions of the space would be common to each source and the global reverb, allowing the space to be manipulated easily.
I started trying to calculate the early reflections of each source from the six surfaces of a cuboid room – delay, intensity, direction and simple hf roll-off for each wall – and then Ambisonically encode the result. This does work, but if you want it smooth enough to cope with moving sources, and to do several of them, it soon gets pretty demanding. Extending this to other shapes and incorporating secondary and tertiary reflections leads into rather heavy maths and ray-tracing algorithms, which seem to be more developed in graphics and are mostly non-realtime. I decided to leave this to better souls than myself.
I also encountered the problem that you could move the source outside the space (or the other way round) and the maths would break down. I never worked out what to do about this, apart from to issue a caveat. It still…
In the interest of making something that would run on an affordable computer I tried further heavy compromises. Take Ambisonically encoded signals from Ambisonic panners, scale them by something (either 1 if the amount of reverb is to be constant with distance, or say 1/sqrt(distance), the amplitude of the direct signal varying as 1/distance), and feed them into a more generalised global reverb.
Then, assuming a cuboid room, use a delay line for each of the W, X, Y & Z signals, lengths based on the dimensions of the room. Cross-channel feedback would help simulate multiple reflections. A low pass filter on the inputs and feedback to the delays simulates hf absorption, which can be different in each dimension. Possibly one should ditch the W delay, as the W signal should probably be a combination of the others. The whole thing sounded denser and nicer with it.
I experimented with following this early reflection stage with a reverb simulation for each of W, X, Y & Z, effectively four mono reverbs. Even using one written by someone else, which was a lot more efficient than my efforts, it was still hungry for DSP. Eventually I realised that this reverb was probably best regarded as truly global and directionless, just W. So savings could be made by using just one mono reverb fed by all the signals. This could be made ‘directional’ with a further all-pass filter for each dimension, parameters again based on the room size.
As the output is B-format it could be further processed through rotate/tumble/roll to simulate something other than facing one wall in a box-shaped room. Dominance manipulations can also be done.
None of which really helps simulating or understanding what goes on inside and outside a forest.
Having worked in a few tents in my time, I have also been surprised how reverberant they can be, especially ones made of modern synthetic materials. Obviously there is still a lot of low frequency roll-off in the reverb response, but they can still be surprisingly ‘muddy’.
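Hunt’s per-source early-reflection stage for a cuboid room can be sketched with the image-source method: mirror the source across each of the six walls, and derive delay, gain and direction from each image. This is a rough, hypothetical sketch of only the first-order reflections, leaving out the wall absorption and hf roll-off he mentions:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, approximate at room temperature

def first_order_reflections(src, listener, room):
    """Image-source sketch of the six first-order reflections in a cuboid room.
    src, listener: (x, y, z) positions in metres with one corner at the origin;
    room: (width, depth, height). Returns (delay_seconds, gain, unit_direction)
    per reflection. Assumes both points are inside the room (the maths breaks
    down otherwise, as noted above)."""
    images = []
    for axis, size in enumerate(room):
        for wall in (0.0, size):                 # the two walls on this axis
            img = list(src)
            img[axis] = 2.0 * wall - src[axis]   # mirror the source across the wall
            images.append(tuple(img))
    out = []
    for img in images:
        d = math.dist(img, listener)
        delay = d / SPEED_OF_SOUND
        gain = 1.0 / max(d, 1.0)                 # simple 1/distance attenuation
        direction = tuple((i - l) / d for i, l in zip(img, listener))
        out.append((delay, gain, direction))
    return out
```

Each (gain, direction) pair could then feed a first-order ambisonic encode, and the delays a set of tapped delay lines; making this smooth for moving sources is where the cost mounts, as the mail points out.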
Two sound etudes. Both made by looping mutilated piano samples and applying some weird frozen reverb to them. I like the sound of the highest pitched tones, and I also find the noise introduced by the frozen reverb settings interesting. It’s a kind of blurring of a sound signal that to start with is quite pure and harmonic in content.
The volume is meant to be fairly low, but it’s recorded into the sound files with a bit of volume to avoid too much loss of fidelity.
Synful is a virtual synth with what appears to be a rather different approach. In one way it’s using additive synthesis, but what makes it stand out is the way it uses analysed phrase data from prerecorded performances to create a more expressive and gestural result, also paying attention to the transitions from one note to another, in a fashion named Reconstructive Phrase Modeling. The demo examples sound really impressive. (But then again demo tunes always do. My rule of thumb is that if they are using jazz-rock to demo synths, you’d better stay away.)
O’Reilly is featuring an interview with the developer behind Synful, Eric Lindemann.
Musikkteknologidagene is taking place in Oslo September 30 and October 1. My installation at USF will open in the evening of 30/9, so I won’t be able to make it, but the program seems very interesting.
Mid-October there’s yet another seminar of the fellowship program. For this seminar we’ve been asked to present and raise an issue that is of importance to us in relation to our projects at the moment. We’ll be doing a 35-45 min. introduction/presentation, and then another 45 min. will be set aside for a general discussion with a panel of ±15 attendees gathered from among the other fellows, supervisors and members of the board. I imagine this can be a great opportunity for receiving valuable input from several different perspectives.
We’ve had to write up a short text describing the issue we want to discuss. Below is a rough translation of what I’d like to discuss:
Sound in space – space in sound
Work on setups of multiple loudspeakers is an integral part of the installations carried out as part of my fellowship project. In recent years I’ve tried several strategies for distributing sound between loudspeakers. Different techniques and solutions generally depend on different sets of preconditions, assumptions, aims and practical considerations. One way of categorizing the various approaches could be:
- Solutions aiming at a neutral and authentic reproduction of the sound in the room being used for listening.
- Techniques aiming at creating the illusion of being in another room (e.g. a concert hall or church), and at doing so as convincingly as possible.
- Synthetic techniques not aiming at reproducing any existing room, but rather using the techniques at hand to create the illusion of a virtual space that might not possibly exist but still (or for that very reason) is of artistic interest.
When I first started using multiple loudspeakers I wanted to create settings where the audience would be physically positioned inside the sound. Another interest is to use sound and sound positioning to describe and create a consciousness about the room you are in – a more sculptural approach. David Toop discusses the relationship of sound, space and memory in his book Haunted Weather. Brian Eno has talked about creating music “where I suddenly was in some way connecting with another place or another time”, and about using delays, filters, reverb etc. in unorthodox ways to achieve this. To me it’s getting more and more obvious that work on sound and space can become a strong expressive means, and it is an aspect of my work that I want to investigate further.
Riksutstillinger (I’m not sure what their proper title is now that they have been integrated as part of Nasjonalmuséet for kunst, design og arkitektur) are currently building custom rooms that will be used for the various installations at the Generator.x exhibition. Yesterday I got some snapshots of what the rooms look like at the moment. The room that will be used for my installation will use black or semi-black molton instead of off-white canvas.
Yesterday I ordered loudspeakers and amplifiers to use for the exhibition. I’ll be using 4 Amina flat panel speakers, one for each of the walls.
Ideally I’d like the audience to be able to sit down in the room. I find that generally the audience don’t tend to stay for more than a few minutes in a sound installation unless they’re able to sit down and rest their legs. But the room is not that big, and I’m afraid that if there’s something to sit on in the middle of it, they’ll have to walk very close to the walls when getting into and out of the room, and thus the speakers might be at risk. Also, if there’s to be a bench or similar, it has to fit in with the look of the room. One option could be to use two saccos. I’ll probably wait to decide until I’m in Oslo and get to see the room.
The meteorological data I’ll be using for Generator.x are daily means, except for daily max and min temperature. The variations from one day to the next are fairly strong, so with linear interpolation the curves tend to look triangular. In order to get smoother transitions I’ll use cubic interpolation instead. I’ve just made an abstraction for cubic interpolation of data represented as FTM fmat matrices.
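The abstraction works on FTM fmat matrices; a stand-alone sketch of the same idea, using a Catmull-Rom cubic (one common choice of cubic interpolation, assumed here) on plain Python lists:

```python
def catmull_rom(y0, y1, y2, y3, t):
    """Cubic (Catmull-Rom) interpolation between y1 and y2 for t in [0, 1].
    Passes exactly through the data points while giving smooth transitions,
    unlike the triangular curves produced by linear interpolation."""
    a = -0.5 * y0 + 1.5 * y1 - 1.5 * y2 + 0.5 * y3
    b = y0 - 2.5 * y1 + 2.0 * y2 - 0.5 * y3
    c = -0.5 * y0 + 0.5 * y2
    d = y1
    return ((a * t + b) * t + c) * t + d

def upsample(daily, steps):
    """Interpolate a daily series to `steps` values per day, clamping the
    endpoints by repeating the first and last values."""
    padded = [daily[0]] + list(daily) + [daily[-1]]
    out = []
    for i in range(1, len(padded) - 2):
        for k in range(steps):
            out.append(catmull_rom(padded[i - 1], padded[i],
                                   padded[i + 1], padded[i + 2], k / steps))
    out.append(daily[-1])
    return out
```

The curve still passes through every daily value, so the data is preserved; only the transitions between days are smoothed.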
|| Art from code – Generator.x
|| Conference, exhibition and concert on generative art and design
|| 23-24 September, Oslo, Norway
|| Atelier Nord / Nasjonalmuseet / Hausmania
Generator.x explores aesthetic strategies for a digital world. The project presents a new generation of artists and designers who have embraced program code as their most important tool. Their creative forms of expression are made possible by a new understanding of software as an aesthetic material with a potential for subjective expression, and not merely as a technical obstacle.
The participants in Generator.x express themselves as intuitively through program code and algorithms as through traditional “intuitive” processes such as drawing. Inspired by complexity theory and emergent principles, they construct intelligent design objects, visualisations of incomprehensible amounts of data, and beautiful abstract visuals. They create algorithmic drawing systems that evolve over time, sound compositions that never repeat themselves, and new audiovisual instruments for live performance.
Topics to be touched upon include generative aesthetics, process-based design, performative software and artist-made software.
|| Generator.x conference: 23-24 Sept, Atelier Nord, Lakkegata 55 C.
|| Price: 700 for professionals, 350 for artists and students. Includes concert ticket.
Lecturers: Erich Berger (NO / AT), Pablo Carranza (SE / SP), Gisle Frøysland (NO), Hans
Christian Gilje (NO), Susanne Jaschko (DE), Golan Levin (US), Sebastian Oschatz (DE), Casey
Reas (US) and others. Moderators: Amanda Steggell and Marius Watz.
|| Generator.x exhibition: 23 Sept – 16 Oct
|| Kunsthallen, Nasjonalmuseet at Tullinløkka.
New works by Lia (AT), Trond Lossius (NO), Sebastian Oschatz (DE) and Casey Reas (US). Works on paper by Pablo Carranza (SE / SP), Lionel Theodore Dean (UK), Ben Fry (US), Jürg Lehni (CH), Golan Levin (US) and Martin Wattenberg (US). Curator: Marius Watz.
|| Generator.x concert + club event: 24 Sept
|| Hausmania, Hausmannsgt. 42.
An evening of audiovisual concerts where custom-built software is used to create sound and image in synaesthetic interplay.
Performance: Blind – HC Gilje / Kelly Davis (NO / US), Erich Berger (AT / NO), TinyLittleElements
– Lia / Sebastian Meissner (AT). DJ to be announced. Generative live visuals: GX crew / Spoon (CZ).
|| The complete program in PDF format can be downloaded at http://www.generatorx.no/.
Generator.x is a collaboration between Atelier Nord and Riksutstillinger, initiated by Marius Watz. The project is supported by Norsk Kulturråd, PNEK and the Goethe-Institut Oslo.
Today I received the final data that I’ll be using for Generator.x: time series with daily values for wind, temperature, precipitation and more for the period 2046-2065. I might possibly live to experience at least some of those years.
The model used has a fairly coarse resolution, with each grid point spanning approximately 12000 km^2. That’s a region of almost 110 × 110 km. The grid point that Bergen is part of is to a large degree situated in the Norwegian Sea. For this reason temperatures are generally lower than I would expect.
A week ago I received a bunch of loudspeakers for testing from Soundscape Studios in Trondheim, a small company specialising in advanced loudspeaker and sound systems and installations. They were mainly a variety of Amina and Wharfedale NXT-type panel speakers.
I’ve been testing them at my studio and I really like what I hear. They don’t work all the way down to the low frequencies and have to be complemented with a subwoofer. With two panel speakers mounted on opposite walls of the studio, in combination with a sub, the result sounded really convincing. I have been testing with a variety of sound sources, switching back and forth between these loudspeakers and my regular B&W 602 speakers to test for differences. The frequency response seems to be quite good, a lot better than I expected. The speakers are not able to transmit phase information, and I was unsure what the effect of that would be on the listening experience. The overall impression is that the sound is not coming from a distinct sound source functioning as a point in the room, but rather gives the impression of being emitted from a continuous field surrounding the speakers. The sound seems to be “in” the room and “take” the room in a different way from regular speakers. It was no problem distinguishing stereo information, but at the same time the sound is a lot more spacious. Turning back to the ordinary speakers, the sound seemed to come from “out there”, not “in here” as with the panel speakers. Also, with the panel speakers I tended to hear the sound of both speakers regardless of my position in the room. In comparison, with ordinary loudspeakers one speaker tends to become much more dominant as I get close to it, canceling out the signal from the other. To wind it up: a very interesting experience.
In addition the loudspeakers look different and invite new and interesting ways of visually integrating the speakers in an audiovisual installation setting.
I’m planning on using this type of speakers both for the Generator.x installation and the White-out installation at USF with Kurt Ralske.
The speakers require a 24 dB per octave high pass filter at 80 Hz or higher. After a bit of investigation, a 4th order Linkwitz-Riley crossover filter seems to be the high end standard for this kind of crossover filter. I found equations for it here (as well as equations for a lot of other kinds of digital filters) and sat down to code a filter external for MaxMSP. tl.crossover4~ doubles as a high or low 4th order Linkwitz-Riley crossover filter. It’s included in a new update to tl.objects, version 2.1.2, uploaded the other day. It can be downloaded at
It’s more or less the first time I’ve touched C code and programmed MaxMSP externals in four years, so I felt a bit rusty, but I got it working eventually. So far it’s only compiled for MaxMSP 4.5 on OSX, as I don’t have a Windows compiler. It’s released as GNU LGPL with source code included.
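One standard way of realising a 4th order Linkwitz-Riley high pass is to cascade two identical 2nd order Butterworth high-pass sections (Q = 1/sqrt(2)), giving 24 dB per octave and -6 dB at the crossover frequency. A sketch of the coefficient calculation, following the widely used RBJ cookbook biquad formulas; this is not necessarily the exact formulation used inside tl.crossover4~:

```python
import math

def butterworth_hp_biquad(fc, fs):
    """Normalized coefficients (b, a) for one 2nd-order Butterworth high-pass
    biquad (Q = 1/sqrt(2)) at cutoff fc (Hz) and sample rate fs (Hz), via the
    bilinear transform with frequency pre-warping (RBJ cookbook form).
    Cascading two of these gives a 4th-order Linkwitz-Riley high pass."""
    w0 = 2.0 * math.pi * fc / fs
    cos_w0 = math.cos(w0)
    alpha = math.sin(w0) / (2.0 * (1.0 / math.sqrt(2.0)))  # sin(w0) / (2Q)
    b0 = (1.0 + cos_w0) / 2.0
    b1 = -(1.0 + cos_w0)
    b2 = (1.0 + cos_w0) / 2.0
    a0 = 1.0 + alpha
    a1 = -2.0 * cos_w0
    a2 = 1.0 - alpha
    return [b0 / a0, b1 / a0, b2 / a0], [1.0, a1 / a0, a2 / a0]
```

Running the signal through the same biquad twice yields the Linkwitz-Riley response; the matching low-pass sections would use the complementary cookbook low-pass formulas.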
One hundred years ago the Music Academy in Bergen was established. 25 years ago it received national status as the Bergen Music Conservatory. 10 years ago it became The Grieg Academy – Institute of Music, and a part of the University of Bergen.
There’s an article in Bergens Tidende today (in Norwegian) by Frode Thorsen on the history of the Grieg Academy as well as challenges for the future.
I’ve started to maintain my links using del.icio.us and find it quite useful. It doesn’t take long to register sites that I’m passing by, and it is easier to find my way back to stuff that has caught my attention earlier.
My links can be found at http://del.icio.us/lossius. There’s also a RSS feed available at http://del.icio.us/rss/lossius.
Here is a visual representation of my interests lately, based on what tags I’ve been using.
The common way of transforming an ambisonic B-format audio signal to a binaural signal seems to be to first decode the signal for a number of loudspeakers, and then use convolution with head-related transfer functions to transform those signals into binaural signals.
The Bformat2Binaural VST plug-in (PC only) uses this approach:
In this plug-in four speaker feeds are derived from an Ambisonic sound field, and those feeds are convolved with the corresponding HRTF for those speaker positions. The speaker positions are 0, 90, 180 and 270 degrees. Since the left and right ear HRTFs are the same at 0 degrees, there is one less convolution required. The same applies for the rear 180 degree position, saving a total of two convolutions as opposed to the eight required if the speakers were positioned at 45, 135, 225 and 315 degrees.
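The decoding half of this approach can be sketched as follows (the HRTF convolution itself is omitted); the 0.5 and sqrt(2) scaling factors are one common first-order decode convention and are assumptions here, since scaling conventions vary between implementations:

```python
import math

def decode_wxy(w, x, y, speaker_azimuths_deg=(0, 90, 180, 270)):
    """Decode first-order horizontal B-format (W, X, Y) to virtual speaker
    feeds at the given azimuths. Each feed would then be convolved with the
    HRTF for its speaker position to produce the binaural signal."""
    feeds = []
    for az_deg in speaker_azimuths_deg:
        az = math.radians(az_deg)
        feeds.append(0.5 * (math.sqrt(2.0) * w + math.cos(az) * x + math.sin(az) * y))
    return feeds
```

For a source encoded straight ahead, the front feed receives the full signal and the rear feed nothing, which is what lets the plug-in share HRTFs at 0 and 180 degrees.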
The Ircam Spat library has externals for transaural to binaural conversion: trans~ and trans4~.