
Blog archive for December 2005

Verdensteatret at PS122 NY

December 2, 2005

VERDENSTEATRET (NO) presents
CONCERT FOR GREENLAND

7 and 8 December at 20.00.

PS 122 (Performance Space 122) in New York
150 First Avenue

For tickets: http://www.ps122.org
or call Theatermania at 212 352 3101

CONCERT FOR GREENLAND

“THAT CORPSE YOU PLANTED LAST YEAR IN YOUR GARDEN, HAS IT BEGUN TO SPROUT?
WILL IT BLOOM THIS YEAR?”

Verdensteatret is recognized as one of the leading Norwegian contemporary theatre/performance ensembles, particularly well known for their original and unexpected use of new technology in live art. Verdensteatret is a collective of artists from different art fields, and they have made the performance into a complex polyphonic instrument, a “live animation machine” consisting of mirror projections, robots, motorized objects and more. The performance was hailed by critics as the best performance in Norway in 2004. Verdensteatret has been touring Europe for many years, but this is the first time they are performing in the US.

“Concert for Greenland” is thematically based on the company’s travel and research around the Northwest Atlantic islands in 2003: Greenland, Iceland and the Faeroe Islands, as well as Norway. It is also an expedition through soundscapes, language and visual transformations: an audio-visual composition where rusty mechanics meet new technology on the backside of a “video-shadow-theatre” on Greenland, an “any media necessary” performance where visual art, sound, video installation, text and theatre blend into a complex composition.

From the reviews:

“Beautiful, perilous, peculiar. It is likely you have never seen anything like it.” (Bergens Tidende)

"At times we face an experience that does not deal with the parameters with which we usually categorise contemporary art. “Concert for Greenland” produces such moments when art ceases to be a catalyst for further critical reflection and instead turns into an aesthetic experience in its own right. Such an experience goes beyond conceptualization. (….) I will rather consider it a machine of some kind. Through an insanely complicated yet naively simple system of rails electro-acoustics sound samples video projections and shadow pictures they put the machine in motion. Every picture and every movement is duplicated and cast around in the machine. The machine works for around an hour. I don’t think I can tell how it works I believe this is a work of art that has to be experienced…" (J. R. Moe: www.kunstkritikk.no)

Artists involved in creating and performing “Concert for Greenland”:

Asle Nilsen, Lisbeth J. Bodd, Håkon Lindbäck, Piotr Pajchel, Petter Steen, Ali Djabbary, Per Flink Basse, Corinne Campos, Øyvind B. Lyse, Lars Øyno, Morten Pettersen, Bergmund Skaaslien, Christian Blom, Trond Lossius, Kenneth Kårstad, b.avant.

“Concert for Greenland” will be part of a program consisting of 5 Norwegian performing companies presented by Network for Performing Arts – Norway. The event is supported by Arts Council Norway and The Norwegian Ministry of Foreign Affairs in cooperation with the Norwegian Consulate General in New York.

More info about Verdensteatret and “Concert for Greenland” can be found at

http://verdensteatret.com

Eigenwelt der Apparatewelt catalogue

December 5, 2005

In 1992 Woody and Steina Vasulka curated a retrospective called Eigenwelt der Apparatewelt for Ars Electronica. The catalogue is an inspiring document with plenty of amazing project descriptions, interviews and original sources. It’s available as a pdf at the home page of Joshua Goldberg.

Combining multimedial material

December 5, 2005

- multimedial orchestration

- multimedial monophony

- multimedial homophony (Sons & Lumières)

- multimedial polyphony

Playing around with ambisonics

December 7, 2005

Justin Bennett has been in Bergen for the last week doing supervision with students etc. When I heard that he would be coming I managed to reserve him for a day, so yesterday was spent in ambisonics nerd paradise. We’ve both been playing around with ambisonics for a while in order to see if we can find meaningful ways of using it in projects. So far I’ve only been working on mono and stereo sound sources that are mapped to an ambisonic field. Justin has gotten hold of an old SoundField microphone, and he was kind enough to bring it to Bergen. We set up a horizontal hexagonal loudspeaker setup and started testing various ideas and techniques.

Justin played sound from several takes and projects where he has been working on ambisonics. He has been doing a series of sound time-lapse projects, mounting the mic somewhere, usually on a roof or similar, and then recording for a minute or so every half hour for 24 hours. The resulting recording he has compressed into a sound piece lasting e.g. 8 minutes, scanning the sound of the city.

Justin also showed me a program (Max patch) he has been working on for quite a while and that has been used in several of his projects. Various sound sources are positioned at various spots in a two-dimensional horizontal plane. By moving around in the plane the various sound sources will come near or move away from you. There is also the possibility of moving the sound sources, rotating the sound sources (those that have stereo information) and rotating the position of the virtual listener. I was very impressed by what it sounded like. I’m not sure that I would hear or understand that I’m moving between various sources, but I still got a strong impression that sources were near or far away, and the change from listening to one source to another was extremely organic and plastic.
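As a note to self, here is a minimal sketch of how I understand the core of such a patch: inverse-distance gains for sources scattered in a plane around a movable listener. The rolloff law and the clamping distance are my own assumptions, not details of Justin’s patch.

    import math

    def source_gain(listener, source, rolloff=1.0, min_dist=0.1):
        """Inverse-distance gain and direction for one source."""
        dx = source[0] - listener[0]
        dy = source[1] - listener[1]
        dist = max(math.hypot(dx, dy), min_dist)  # clamp to avoid blow-up at 0
        gain = 1.0 / dist ** rolloff              # simple 1/r amplitude law
        azimuth = math.atan2(dy, dx)              # direction, usable for panning
        return gain, azimuth

    # Moving the listener towards a source raises its gain smoothly,
    # which may be part of why the transitions feel so organic and plastic.
    sources = {"street": (0.0, 5.0), "harbour": (-3.0, -2.0)}
    for pos in [(0.0, 0.0), (0.0, 2.5), (0.0, 4.5)]:
        print(pos, {name: round(source_gain(pos, xy)[0], 2)
                    for name, xy in sources.items()})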

When I first started working on multi-speaker setups it was motivated by a desire to create a situation where the audience would be situated inside the sound. This technique definitely gave that impression.

Listening to surround recordings was a very different impression, and I was surprised and maybe disappointed. There was a great richness in where the sound originated from, but I always had a very strong sensation that the sound was coming from out there, some sort of sphere surrounding me at various sides but never becoming one with me. I felt that the border between me and the sound was defined to a much higher degree. This is not to say that the technique was not effective, it definitely was. But I was caught by surprise at how strongly I reacted to the resulting impression: aesthetically it was very clear that this was not what I want, rather the complete opposite. I felt a Cartesian distinction or separation of mind and body that I was very uncomfortable with.

Later on I showed some of what I’ve been working on, but first I had to show him the hipnoscope user interface of the Hipno plugins, as there were so many resemblances between the idea behind that interface and how Justin has been positioning sound in space. Apart from that I showed him how I’ve been encoding stereo sources to B-format, and the ambisonics objects by jasch and the ones developed at Centre de recherche Informatique et Création Musicale. For some reason the objects by jasch did not manage to position sound convincingly in the space. I don’t know why; I’ve had them working well before, so I’ll have to investigate further.

We ended the day by doing some b-format recordings that I can keep for further testing. I’ve always loved the sound of the cars passing by on the street down from my office. Today was the day to capture it.

Update: Looking into the externals developed at Centre de recherche Informatique et Création Musicale I discovered that they are now released under a GNU LGPL license. This implies that it will be possible to modify the code for inclusion in Jamoma.

Documenting sound installations

December 7, 2005

For some years now I’ve been doing a series of sound installations in collaboration with other artists, mainly visual artists. For a large part the installations have been presented in various galleries. I’ve always found it difficult to document the installations. Visually it generally works well to use video, as I’ve often been collaborating with artists using video as a medium. But I find the audio part to be really difficult. I’ve generally been using multi-speaker setups of various kinds to position the audience inside the sound. Reducing the sound image to stereo is already a major reduction of the impression you’ll have of the sound as compared to being at the site. In addition, audio recorded in the gallery space while the installation is running tends to be completely useless, containing a lot of disturbances and noise. My sound installations generally work at low volumes, as I often like working with sound that is barely perceptible. If I instead do a stereo mixdown of the sound, the sound on the documentation of course is better, but I’m also losing any sensation of the sound being related to or in the room. Thus when watching the documentation it is a much more detached experience than if I’m using a recording from the space.

So I’m wondering: Is anyone else working on sound installations who has found better ways to approach this?

Thanks
Trond


(mail to the microsound list)

Djalma Primordial Science

December 8, 2005

Djalma Primordial Science is the seven-year collaboration of sound artist Jeff Gburek and dancer Ephia. Intensely committed to improvisation (as a way of life), their unpredictable approach to performance seeks to deepen and defy itself through a highly focused anti-expressivity and liminality. Djalma Primordial Science is one of the few companies in the world that have consistently united experimental movement theater and live electro-acoustic music, with the underlying belief that a unity exists in them both rather than one being an adornment to the other.

Harold Pinter’s speech

December 9, 2005

Harold Pinter’s acceptance speech for the Nobel Prize for literature is available here.

La semaine du Son (The Week of Sound)

December 9, 2005

Taking place in Paris, January 10-14, 2006.

Sounds are everywhere in our daily life: in supermarkets to promote and advertise products, in shops to create ambiance, in car parks to set people’s minds at rest. New and developing technologies allow a high sound quality even in our cars. Walkmans, MP3 players, home cinema: all are available to a large public and provide more and more powerful decibels impacting the customers. Paradoxically, the share of public and private investment allocated to sound remains very poor most of the time. The consequence is an excess of sound causing damage that is only discovered when it is too late.

The general public is not always aware of the role and limits of the auditory system. Partial deafness is one of the consequences, causing significant costs for the medical care system, and it limits access to certain jobs where perfect hearing is compulsory. Noise in our day-to-day life is also a source of nuisance. La Semaine du Son intends to raise issues on soundproofing and study new building materials reducing noise. It also intends to educate children in elementary schools and promote the use of sound and musical therapy as a tool to open up, fight stress and fatigue, and lead to better concentration.

The aim of the non-profit association La semaine du Son (The Week of Sound) is to awaken public awareness about sound and the need for quality in our sound environment. It is a center of knowledge promoting and encouraging research on technical, cultural and industrial as well as health aspects of sound. As part of its ambition to broadcast information on sound, La semaine du Son intends to encourage the study of sound and to create an international, European and national network of proficiency focusing on sound and sound-related studies, in favor of creating new diplomas in sound techniques.

Current status of electronic arts

December 9, 2005

I recently received a mail from Wilfried Agricola de Cologne sent to everyone that has been involved in [R][R][F]2005--->XP in one way or another. Among other issues he is taking some time to reflect on the current state of electronic arts as 2005 draws to an end.


---> 2005 – A review

Dear friends in art,


in 2005 [R][R][F]2005--->XP continued its successful development,
and nearly 50 events focusing on [R][R][F]2005--->XP
or its components have been organised since its start in March 2004:
http://netex.nmartproject.net/index.php?blog=8&cat=55
At the end of the second year it is time again to thank all of you,
whether you are participating as artists, curators or corporate instances like organisations or institutions,
or whether you belong “just” to the audience (a very large audience, fortunately),
for all the confidence in what I was initiating by realizing the RRF project.


The year 2005 was very turbulent and allowed me to extend my activities in physical space:
my project-related travels to Israel and Palestine in February/March 2005, to Italy in June 2005,
and most of all the most recent travel to South America (Argentina, Uruguay, Chile and Brazil)
in October/November 2005; see also the event series project IMPACT http://impact.newmediafest.org
They were the results of the successful ideas of networking,
thanks to all who followed them.


By attending also a lot of prominent and less prominent festivals
I got an idea of the current state of media art on a global level,
and not all that I could see made me really enthusiastic, unfortunately.


I recognised a certain stagnation in nearly all fields of New Media art,
especially in what is called “electronic art”.
In “net based art” the stagnation became visible even earlier,
and at the end of 2004 I stopped my activities for developing JavaMuseum
www.javamuseum.org further, as you know.
Although I see an unexplored potential especially in “Internet based art”,
most artists working in this field reduce the thematic aspects
to merely reflecting the Internet, and thus themselves; after many years of this
it is just boring, and no audience in the world will give such a direction any chance.
Of course this ideological restriction is also supported by certain curators
and other people who follow personal interests in this field of art.


Something similar can also be observed in other areas,
but perhaps it is not the art and the artists as such that are in a certain crisis,
but the system itself.


I would wish that artists would use their creative potential much more and experiment,
and would determine the course of an art development by themselves this way.


The most promising and developing fields in the framework of the RRF project
in 2005 became SoundLab Channel (three editions) and
VideoChannel (meanwhile more than 16 curators and 150 artists).
VideoChannel especially included a lot of new exciting artists and artworks,
and was beyond that very successful in presenting numerous selections in media exhibitions and media festivals;
see also the list of RRF venues.


By posting a call for [R][R][F]2006--->XP
http://netex.nmartproject.net/index.php?blog=8&cat=54
I would like to invite artists working with New Media, especially in the field of “Internet based art”,
to overcome ideological limitations in this unique project environment,
and I would wish they would use the networking potential of this environment more consequently
than in 2004 or 2005. The deadline of 1 March 2006 even gives time and space
to consider the creation of completely new works.


Many thanks again to all of you.
I wish you a “Merry Christmas” and a “Happy and Successful New Year”,
yours sincerely,


Wilfried Agricola de Cologne
[NewMediaArtProjectNetwork]:||cologne


Alessandro Fogar: Soundframes_Trieste

December 9, 2005

Alessandro Fogar recently presented Soundframes, a work using Jamoma, at the Istituto Nordio in Trieste. The project suggests listening stations located at specific locations of the city, selected for their acoustic characteristics. The project attempts to stimulate an awareness of the sonic landscape. The author in this case does not impose his thoughts on the work, but simply selects a place for listening and suggests the perceptive and behavioural modalities.

Alessandro Fogar also did a performance consisting of the recordings of the sounds of all the ‘Soundframes’ near the Nordio institute at Via di Calvola 2, Trieste, on 1 December 2005. For this he used Jamoma and some custom modules I made for ambisonic positioning of stereo sources.

A day of sound art

December 9, 2005

This coming Sunday, Landmark will be the venue of a number of short sound installations and happenings. For a while a bunch of people in Bergen interested in sound art have been collaborating to try to establish a venue of some sort for sound art. Sunday will be a test flight. Steinar Sekkingstad and Erlend Hammer have curated a series of events named “vertical moments and linear progressions”. More info below (translated from the Norwegian announcement).

L Y D G A L L E R I E T   P R E S E N T S:

Vertical moments and linear progressions – a one-day exhibition on sound in space. Landmark, 11 December 2005, 11.00–18.00.

With: Bjørn Askefoss, Bjørnar Habbestad, Leif Inge, Jørgen Larsson, Nicholas Møllerhaug, Maia Urstad and Jana Winderen.
Curated by: Erlend Hammer and Steinar Sekkingstad

- On Sunday 11 December the café experience at Landmark is extended with the very best of Norwegian sound art. For the time being lacking a room of its own, Lydgalleriet holds its pilot exhibition where most of Bergen’s interesting sound already takes place. And while the Fusa king abdicates and Kunstmisjonen gives up its calling, a new player within the dissemination of art in western Norway steps onto the stage and bows modestly to an attentive audience.

The day begins at 11.00 with the curators in conversation about the project and about problems related to how sound art is best presented. After that each artist gets an hour to romp about and present works in the borderland between concert and installation, exploring the space between the time the work takes and the time in which the audience meets the work.

The exhibition aims to show the breadth and quality of what is happening within Bergen’s well-established sound art scene, while we have also invited some pleasant guests from across the mountains. Some of the artists are musicians who will now make installations; others are installation artists experimenting with a more concert-like format. Common to all is the wish that the sound and the room should play on the same team, and that the will to win is greater than the fear of losing.

The home team: Jørgen Larsson will either make food-specific sound based on Landmark’s menu, or he will condition us to sit still and keep quiet. Maia Urstad conducts a cassette symphony on the tables, and Nicholas Møllerhaug gives us bird-twittering interval music, while Bjørnar Habbestad’s Tel-Art offers a glimpse of how people answer when they pick up the phone.

The away team: Bjørn Askefoss presents his café au lait and croissant installation Alces, Jana Winderen presents the group project freq_out in a linear stereo mix, and Leif Inge gives the world premiere of his new stretch project An Hour Hit. Last time it was Beethoven’s 9th; this time younger, and many would say lighter, compositions are being stretched. The title this time is – Nah… – Unfortunately modern copyright legislation means we cannot say anything more about that particular matter. There will be sound all the same.

Welcome to a (slightly) different day at Landmark!

Contact: Erlend Hammer (91746702), Jørgen Larsson (48237888), Steinar Sekkingstad (93839623).

Sweet Promotion Girls

December 10, 2005

Most spam and e-mail advertisements I read and/or ignore, but today I had to check the calendar to see if this is still 2005. I received a mail from Creative, who will be using 60 “sweet promotion girls” to demo stuff in shops around Scandinavia. I’ve forwarded it as a complaint to The Gender Equality Ombud.

Searching the Max archives

December 11, 2005

travis just added one more reason for using Firefox: A handy plug-in for searching the Max archive from the search box on the toolbar.

Realtime

December 12, 2005

Real Time + On-screen seems to be a very interesting online/print magazine on film, performance, dance, visual arts, music, sound etc. Yesterday Jeremy sent me an interesting article on the sound artist Ernie Althoff.

tl.objects for Windows

December 12, 2005

Yesterday f.e. mailed me to say that he has been compiling all of the tl.objects for Windows. I really appreciated this! Hopefully we will have them available for download in the near future.

Ambisonics explained

December 13, 2005

Starting off by discussing the difference between a velocity mic and a pressure mic, Eric Benjamin the other day offered a really good explanation of the basic concepts of ambisonics on the sursound mailing list. I’m quoting it all below in order to be able to store it for later reference:

A sound wave has both pressure and velocity (or ‘particle velocity’) components. This is all just jargon until you understand what it means. If you put up a pressure microphone (which has an omnidirectional polar pattern) it only senses the pressure of the sound waves. As the wavefront passes the microphone the pressure will go up and down, and the signal from the microphone will go up and down in a corresponding way. But the microphone doesn’t ‘know’ what direction that wave came from.

Think of the old expanding ripples in a pond analogy. If a rock is dropped into the pond the ripples expand outwards in every direction until they hit the edges of the pond – but we will imagine that the pond is infinite. If we observe a leaf floating in the pond as the wave goes by it just goes up and down. From observing the leaf we can’t tell anything about the direction of the wave. But as observers from above we can see that the wave does have direction to it. From the position of the leaf we can point back to where the wave started – where the rock was dropped in.

The motion of the leaf is like the output of the omnidirectional pressure microphone. It has no direction, only up and down.

Now imagine a hair suspended vertically in the pond. As the wave goes by, the hair will be deflected. And it will be deflected In The Direction of The Wave! It gives an indication of the direction of the wave propagation. Note that that direction depends on the position of the observer. It’s an “it came from over there” sort of thing, and depending on where you happen to be sitting the direction could be different.

I used the analogy of the hair deflected by the wave because that’s how insects’ ears work. Ours are pressure sensors. We can sense direction primarily because we have two ears (even though persons who hear with only one ear can still manage to sense direction via other means). We, or rather our hearing, look at the differences in the pressure signals at our two ears and from that determine the direction of arrival.

The method of the vibrating hair is like the ribbon microphone. The reason that the hair moves and the reason that the ribbon moves are the same. When the wave reaches the ribbon and flows around it, it creates a small zone of increased pressure on the near side and a sort of shadow of decreased pressure on the far side. It is the difference in pressure between the two sides of the ribbon that makes it move, and only when the ribbon moves can the fact that there was sound be sensed.

But wait! I thought that the ribbon was supposed to be a velocity sensor. And now we’ve decided that it’s pressure that makes it move. That’s true. It’s the difference in pressure between front and back, mathematically described as a pressure gradient, that makes the ribbon move.

To codify our terminology:

a pressure microphone senses pressure, and as a result it has a fundamentally omnidirectional polar pattern. It can’t tell what direction the sound came from.

a velocity microphone senses the pressure gradient, which is proportional to the acoustic particle velocity. The microphone has an output which is a cosine function (figure-of-eight) of the direction of the sound: positive for sounds from in front, zero for sounds from the sides, and negative for sounds from the rear.

Two of these placed at right angles to each other give a full description of the acoustic particle velocity in the plane of the microphones. Three of them at right angles to each other give a full description of the particle velocity in three-dimensional space. Add an omnidirectional pressure microphone to the mix and you have a complete physical description of the sound field at a point in space.

“What is this all about with regard to the velocity/pressure ratio? How are these things measured?”

Now we know about acoustic pressure and particle velocity. And we know that our ears aren’t directly sensitive to particle velocity but we can still sense direction by measuring the pressure gradient between the two ears. And this is a big part of how we sense the direction of a sound.

So to get the reproduced sound right, ideally we would want to get both the pressure and the particle velocity right. We would like for the pressure and the particle velocity at the listener’s position to be the same as the pressure and the particle velocity were in the recording venue. This is what Ambisonics can do and what 2-channel stereo can’t do. But obviously 2-channel stereo works pretty well. Otherwise we wouldn’t spend so much time listening to it.

In acoustics the pressure and particle velocity are wrapped up in a combination called acoustic intensity. There are devices made by Brüel & Kjær and by one or two other companies that are called intensity probes, and that is what they measure. In effect an intensity probe and a Soundfield microphone are really the same thing. It’s just that the first is a laboratory device intended for measuring things and the second is a recording microphone.

“I assume that the figure-eight components of the B format remain pure velocity mics. Are there real world pure velocity mics that could be used to record B format directly (along with an omni pressure mic for W)?”

I’m not sure about the word “pure” in your question. The cardioid or subcardioid capsules in the Soundfield microphone have a mixture of pressure and pressure gradient sensitivity. When the capsule outputs are matrixed together, what is recovered are the pressure and three pressure gradient components.

In that sense the output of the Soundfield microphone is exactly like the output of an omnidirectional (pressure) and three figure-of-eight (pressure gradient) microphones. But better, because the Soundfield microphone has output signals that behave as though all four of those microphones were exactly coincident in space.

“Are there real world pure velocity mics that could be used to record B format directly (along with an omni pressure mic for W)?”

Why sure! You can use your stereo ribbon microphone to recover the horizontal velocity components of B-format and add an omni to get the W component.

Thomas Chen and I wrote an AES paper (AES preprint 6621, The Native B-Format Microphone Part I) on this subject, and boiled down, the thrust of that paper is that you can record B-format in the way that you suggest. We were trying to encourage people to do just that. In some ways this technique is inferior to a Soundfield microphone for two reasons. One of them is that you can’t make the discrete microphones be truly coincident. The other one is that the polar patterns of real-world omnidirectional microphones aren’t all that omnidirectional! In some ways the “Native B-format microphone” is better. You get to use the microphones that you prefer.

Perhaps most important is that it may make the difference between having a B-format recording or not having a B-format recording. That argument supersedes a lot of the theoretical BS surrounding what is best.

For those who are interested in the subject of recording B-format with discrete microphones Thomas and I plan a second paper detailing the results of our listening tests. That paper is planned to be presented at the 120th AES in Paris this spring. Wish us luck. We have a lot of work to do.

Eric Benjamin
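The directional behaviour Eric describes is easy to verify numerically. Here is a small sketch of my own (not from his mail) of how coincident omni and figure-of-eight capsules respond to a horizontal plane wave – in effect the W/X/Y part of a “native” B-format recording:

    import math

    def capsule_response(azimuth_deg):
        """Pickup of a horizontal plane wave by three coincident capsules.

        omni : pressure mic, same output for every direction (the W component)
        fig8x: pressure-gradient mic aimed front-back, cosine pattern (X)
        fig8y: pressure-gradient mic aimed left-right (Y)
        """
        a = math.radians(azimuth_deg)
        return {"omni/W": 1.0, "fig8/X": math.cos(a), "fig8/Y": math.sin(a)}

    for az in (0, 90, 180):
        print(az, capsule_response(az))
    # 0 deg   -> X = +1 (sound from the front)
    # 180 deg -> X = -1 (sound from the rear)
    # The omni output never changes: pressure alone carries no direction.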

Eye and Blink Detection

December 13, 2005

Ami Chatwani has posted his MaxMSP/Jitter-based work on eye and blink detection to his blog. The blog also contains other posts of Max relevance.

Ambisonics wiki page

December 14, 2005

While catching up with loads of past mails on the sursound list I added a page on ambisonics to the Maxnerd wiki.

Self presentation by Bjørn Askefoss

December 15, 2005

The composer and sound artist Bjørn Askefoss is doing a self presentation today at 13.00 at BEK. His installation Alces has been presented at Landmark for a week.

Images and sound from Traced Spaces

December 18, 2005

“Staged Spaces” by Karen Kipphoff and myself is currently exhibited at Bergen kunsthall as part of the “Tracing Spaces” exhibition. December 20 is the last chance to see it.

[Photo: Traced_spaces_1]

[Photo: Traced_spaces_2]

Last Monday we took the opportunity, while the gallery was closed, to do some documentation. We’ll continue tomorrow. Above are two photographs Karen took. I had rented mics for the occasion, but one of them was broken, so I had to use some I found at Landmark instead, not of quite the same quality. I turned the sound way up before recording to increase the signal-to-noise ratio in the room during recording. You can still occasionally hear a door slamming in a room next door to the space. I tested different mic setups: coincident pairs, near-coincident pairs and spaced pairs. Tomorrow I’m also going to do binaural recording.

The sound documentation from some of the projects I’ve been doing lately is really horrible, most noticeably from the “White-out” installation at USF with Kurt Ralske in September/October and “Quiet and relaxed but alert” with Jana Winderen at Galleri F15 last summer. This has led me to realise that I have to do a systematic investigation of how to better document the sound of the installations I’m doing. When the installations are done outside Bergen, limited time of course becomes a problem. I need access to the gallery space after the opening, while it’s closed to the general public, in order to avoid more noise than necessary leaking into the recordings. I also need to get myself two good quality directional condenser mics of the same kind for the purpose.


New and updated audio software

December 27, 2005

Praat 4.4 was released recently.

PWGL is an OpenGL based visual programming language specialized in computer aided composition and sound synthesis. PWGL is a successor of PatchWork (PW) and aims to develop and improve many of the useful concepts behind PW. Available for Mac OS X and Windows XP.

DVD Audio Solo is a fairly cheap (US$35) application for authoring DVD-Audio. It supports formats up to 6 tracks at 192 kHz/24 bit.

On to MaxMSP stuff:

Emanuel Jourdan has announced ejies 1.54, a MaxMSP library with loads of JavaScript goodies.

The boids flocking algorithm by Craig Reynolds has been treated to some new implementations in MaxMSP: The original object by Eric Singer has been licensed as GNU LGPL. jasch and andré sier have expanded the code to two and three dimensions. Wesley Smith has implemented the algorithm in Jitter. The new packages can be found here:

http://www.mat.ucsb.edu/~whsmith/
http://www.s373.net/code/boids_max_jitter_osx.zip
http://www.s373.net/code/boids_max_jitter_win.zip

André Sier has also made a library named a-objects.

As part of his PhD work, Alexander Refsum Jensenius released Musical Gestures Toolbox v.0.4.5 a while ago.

del.icio.us feed added to the blog sidebar

December 27, 2005

Today I finally got around to adding a del.icio.us feed to the sidebar of the blog. I started using del.icio.us a while ago to maintain links. I believe this somewhat changed the blog content. I have a feeling that I seldom post interesting links to the blog itself. Instead I have been bookmarking them at del.icio.us in order to be able to find them again later. To compensate I’ve wanted to add a sidebar displaying the most recent del.icio.us bookmarks. The reason I have not done so earlier is that I thought it would be technically quite difficult.

Fortunately it turned out to be a lot simpler than I thought. At del.icio.us there’s a description of how to include a javascript in a web page to display the feed. Inserting this into the CoreBlog skin turned out to be pretty straightforward, and all that was left was some tweaking and additions to the CSS for layout reasons.


Ambisonics equations

December 28, 2005

It is not difficult to find equations for how to encode a mono source onto 1st order (B-format) ambisonics. Unfortunately it’s much more difficult to find equations for decoding. Furse has made available decoding parameters for some standard symmetrical loudspeaker setups, but from there to a more general solution it is hard to figure out what to do. Also, his equations are nothing but a gain matrix; there’s no taking into account of phases or shelf filters. (BTW there are some details on Hilbert and shelf filters in this mail.) Malham discusses the issue in an article in Organised Sound 3(2) 1998. According to that article, one of the most thorough discussions of decoding was in an article by Gerzon in 1992 (Gerzon, M. A. 1992. Psychoacoustic decoders for multispeaker stereo and surround sound. Preprint No. 3406, 92nd Convention of the Audio Engineering Society).
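The encoding part at least is straightforward. A minimal sketch for later reference, assuming the common convention where W is attenuated by 1/sqrt(2) relative to the velocity components (conventions vary between sources):

    import math

    def encode_bformat(signal, azimuth, elevation):
        """Encode a mono sample to 1st order B-format (W, X, Y, Z).

        azimuth and elevation are in radians; W carries the conventional
        1/sqrt(2) attenuation relative to the X/Y/Z components.
        """
        w = signal / math.sqrt(2)
        x = signal * math.cos(azimuth) * math.cos(elevation)
        y = signal * math.sin(azimuth) * math.cos(elevation)
        z = signal * math.sin(elevation)
        return w, x, y, z

    # A source straight ahead lands entirely in W and X:
    print(encode_bformat(1.0, 0.0, 0.0))  # (0.7071..., 1.0, 0.0, 0.0)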

There’s also a lengthy discussion on decoding and copyright issues in the Sursound mailing list archive from July and August 2002, but unfortunately with little or no details on how to do the decoding. It turns out that technology for decoding to an array of irregularly positioned loudspeakers, generally known as the Vienna technology, is patented by Trifield Productions Limited. The actual patent can be found here: United States Patent 5757927: Surround sound apparatus, issued in 1998. The patent holder Geoffrey Barton gives the following abstract of the patent:

The Vienna decoder could be summarised as an Ambisonic decoder where the values
of thetaV and thetaE are arranged to be substantially the same, but where (a) high
frequency rv varies with direction and (b) the pressure directional gain pattern
varies with frequency, thus giving the desirable aspects of an Ambisonic decoder
but (in the case of a 5.1 decoder) preference to the front stage.

Items (a) and (b) distinguish it from conventional Ambisonic decoder solutions.

Searching through the list archives of the sursound mailing list I also found some equations describing how to do forward dominance, a kind of “zoom control” that moves the listener forwards or backwards in the sound field. One of the possibilities that Justin and I discussed while he was in Bergen was the possibility of moving around in a field of surround sources. This is the closest I’ve come to seeing any equations expressing how to do it.
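As far as I can tell from the list, lossless forward dominance mixes W and X using a parameter lambda, leaving Y and Z untouched: lambda > 1 pulls the field forward, lambda < 1 backwards, and lambda = 1 is the identity. A sketch of my reading of it, worth double-checking against the archive:

    import math

    def forward_dominance(w, x, y, z, lam):
        """Forward dominance: a 'zoom' along the front-back axis of B-format."""
        a = 0.5 * (lam + 1.0 / lam)          # common diagonal term
        b = 0.5 * (lam - 1.0 / lam)          # cross term, zero when lam == 1
        w2 = a * w + (b / math.sqrt(2)) * x  # W picks up a bit of X
        x2 = b * math.sqrt(2) * w + a * x    # X picks up a bit of W
        return w2, x2, y, z                  # Y and Z are unaffected

    # lam = 1.0 leaves the field untouched:
    print(forward_dominance(0.7071, 1.0, 0.0, 0.0, 1.0))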

Quintet.net 2006

December 29, 2005

Georg Hajdu has announced the availability of a Quintet.net update both for Mac OS X (10.3 and greater) and Windows XP:

Quintet.net is a unique networked performance environment with an open architecture, and it can be used for microtonal algorithmic composition and/or full-fledged multimedia performance.

New microtonal tunings, as well as modules for algorithmic processing of control data, can easily be added to expand the features of the environment, which consists of five tightly integrated components.

The installers can be downloaded for free from:
http://www.quintet-net.org/dl-frames.html

Sensorwiki

December 29, 2005

www.sensorwiki.org is a Wiki reviewing various sensor technologies. Seems like a valuable resource.

via Alexander’s blog

How to decode ambisonics

December 30, 2005

The quest for decoding ambisonics continues…

I mailed Natasha Barrett to ask what she is doing; she is the leading expert on ambisonics in Norway. At some stage she will be doing a three-day workshop on ambisonics at NoTAM. I’ve signed up for it.

Natasha recommends the encoding/decoding VSTs by Dave Malham and Ambrose Field, as well as the Panorama and Emigrator plug-ins. Due to channel routing issues they should be used in MaxMSP, Bidule or similar. I know that Alessandro Fogar used Emigrator for Soundframes. Natasha herself tends to use Vspace and Ambidec under Linux. I have Agnula running on my PC, but haven’t had the time to really start figuring out how to use Linux so far. I suppose this is another good reason to get going at it…

So far I’ve always tried to use B-format in fairly non-standard speaker setups. I suppose I should mount some standard speaker rigs in order to test ambisonics in more well-established setups as well, where it hopefully is easier to get it to excel. But the problems of the VST plugins mentioned above still remain: the number of loudspeakers is limited (8 and 12 respectively), and they only work for certain standardized setups. I also want to be able to decode for less symmetric setups with up to at least 18 speakers.

I’ve also posted some questions to the Sursound list and got some useful replies back:

In general the coefficients of the decoding matrix are calculated thus:

P(a, e) = zerobase * W
        + firstbase * ( cos(a) * cos(e) * X + sin(a) * cos(e) * Y + sin(e) * Z )

In the case of “matching” 1st order coefficients, as given by Furse, the following base is used:

  • zerobase: sqrt(2)/n
  • firstbase: 2/n

where n is the number of loudspeakers. For 1st order “controlled opposites” the base is:

  • zerobase: sqrt(2)/n
  • firstbase: 1/n

In both of the above the base is inversely proportional to the number of speakers. This will keep the overall volume consistent, but in the digital domain the volume of each channel will get lower as more channels are added. The disadvantage of this is that the unused headroom will increase as more speakers are used, reducing the signal-to-noise ratio. It would be better to keep the signals hot and instead turn down the volume at the amplifier. That way the best possible fidelity is maintained.
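To make this concrete, here is a minimal sketch (my own illustration, not code from the list) of the decode matrix for a horizontal ring of speakers, dropping the elevation terms and using the “matching” base quoted above:

    import math

    def decode_gains(speaker_azimuths_deg, zerobase, firstbase):
        """One row of (W, X, Y) gains per speaker for a horizontal decode.

        Each speaker feed is P = zerobase*W + firstbase*(cos(a)*X + sin(a)*Y),
        i.e. the planar (elevation 0) case of the equation above.
        """
        rows = []
        for az in speaker_azimuths_deg:
            a = math.radians(az)
            rows.append((zerobase,
                         firstbase * math.cos(a),
                         firstbase * math.sin(a)))
        return rows

    # The hexagonal rig from earlier, with the "matching" base for n = 6:
    hexagon = [0, 60, 120, 180, 240, 300]
    n = len(hexagon)
    for az, row in zip(hexagon, decode_gains(hexagon, math.sqrt(2) / n, 2.0 / n)):
        print(az, [round(g, 3) for g in row])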

According to jasch, Dave Malham did empirical experimentation in Zurich a while ago with an 8-speaker setup and ended up with a fixed base:

  • zerobase = 0.823242
  • firstbase = 1.0

The ratio of this base is somewhere between “matching” and “controlled opposites”. I asked if there are any strict criteria concerning what values to use for zerobase/firstbase, or if I have to experiment with various values from one setup to the next in order to find the best one for the setup at hand. Dave Malham replied:

There are ways of optimising for particular layouts, some of which are very
powerful, like Bruce’s (Bruce Wiggins), but I would only ever use these as a starting point
in a practical room, since local acoustics will almost certainly cause
deviations from the theoretical.

Surfing the net I’ve had the impression that more advanced decoders would also implement shelf filters, so that the base would differ for low and high frequencies. Again according to Dave Malham, you would “only really use these for small rigs where you only need to have a small sweet spot”. How small is “small”? Should I consider a gallery space of 8×8 meters “small”?

I’ve also had the impression that more advanced decoders would take into account 90-degree phase offsets of some of the B-format components, probably implemented using a Hilbert filter. Once more according to Dave, there are no 90-degree phase offsets in B-format, only in UHJ, so I don’t need to worry about this. Good! Some CPU cycles saved…

 

…consider this an open thread.