November 11, 2011
The studio used for the Trondheim workshop.
I’ve been doing some teaching over the past few weeks.
Two weeks ago I visited the music technology department at NTNU in Trondheim, giving an artist talk sharing my experiences with work on sound installations. The following day I gave two four-hour workshops providing an introduction to spatialisation. They have a studio set up with 16 loudspeakers, but most of the students seem to work predominantly in stereo, with occasional excursions into 5.1. I used Jamoma for the demo. While Jamoma itself was not the topic of the workshop, it provides an easy way of demonstrating and discussing different spatialisation techniques. Within the time frame of the workshop we managed to touch on VBAP, first and higher order ambisonics, DBAP and ViMIC.
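To give a flavour of one of those techniques, here is a minimal sketch of DBAP (distance-based amplitude panning) in Python. It assumes a 2D speaker layout and is only an illustration of the idea, not the implementation used in Jamoma; the function name and parameters are my own.

```python
import math

def dbap_gains(source, speakers, rolloff_db=6.0, blur=0.2):
    """Distance-based amplitude panning (DBAP), 2D sketch: each
    loudspeaker's gain falls off with its distance from the virtual
    source, and gains are normalised to constant total power."""
    x, y = source
    # Rolloff exponent from the desired dB attenuation per doubling of distance.
    a = rolloff_db / (20.0 * math.log10(2.0))
    # Distances to each speaker; 'blur' keeps the source from locking
    # exactly onto one speaker and avoids division by zero.
    dists = [math.sqrt((sx - x) ** 2 + (sy - y) ** 2 + blur ** 2)
             for sx, sy in speakers]
    # Normalisation constant so that the gains sum to unit power.
    k = 1.0 / math.sqrt(sum(1.0 / d ** (2 * a) for d in dists))
    return [k / d ** a for d in dists]

# A source close to the first of two speakers weights that speaker heavily.
gains = dbap_gains((0.2, 0.0), [(0.0, 0.0), (2.0, 0.0)])
```

Unlike VBAP, DBAP makes no assumption about where the listener sits, which is part of why it suits installation work with irregular speaker layouts.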
Last week I was in charge of a one-day workshop/concert on sound in space organized by BEK. For my presentation I gave a condensed version of the same material as in the Trondheim workshop, with a brief introduction to the psychoacoustics of spatial hearing and a demo of the Harpex ambisonic decoding plugin thrown in for good measure. Thorolf Thuestad talked about how he is using Reaper for spatial audio. From his presentation it became clear that Reaper will be my choice of DAW for the foreseeable future. After a break, works by Ruben Sverre Gjertsen and Hans-Gunther Lock were presented.
Last Monday I gave a workshop on how to write application grants for VISP, a service organization providing assistance on production for visual artists in Bergen.
And finally, yesterday I gave a two hour lecture on electronic music at the Grieg Academy.
Now I’m looking forward to being able to spend some time at my studio again.
On a side note: This turned out to be blog post no. 1111, posted on 11-11-11.
November 15, 2011
Karen Kipphoff and I are about to start work on a new project that might become a hybrid stage/installation work, although it is all still quite open.
In the process Karen has started looking into Isadora as a possible alternative to running live video from Max and Jamoma. I’ve also started taking a look at some of the tutorials. Isadora is a patch-based program for live video processing, and as such resembles Max, although the modules (or “actors” as they are called) are higher-level functional processing modules rather than low-level building blocks.
The developer, Mark Coniglio, is clearly experienced at live video for stage productions, and it is interesting to take a closer look at the strategies employed for managing patches over time in performance.
One very interesting idea is how patches are conceptualized as scenes. As the play or performance progresses, one moves from one such scene to the next, with cross-fading transitions between scenes in much the same way as in e.g. PowerPoint or Keynote.
In Max, I have always ended up with one humongous patch containing all the processing that will ever be needed throughout the performance, and lots of logistics to handle routing and muting along the way to save CPU and GPU.
One important change in Max 6 is that each top-level patch does audio processing in its own separate thread. Hence audio can be turned on and off independently for each patch. This might open similar possibilities for patch substitution during performance: all patches could be loaded up-front and positioned off-screen. As each one is to be used, it could be moved on-screen, have its audio turned on, and then be faded in. When moving on to the next patch, the previous one could be faded out, and then moved off-screen again.
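The lifecycle described above could be sketched as a simple scene manager. This is only a conceptual outline in Python, with hypothetical names; it does not correspond to Max’s actual API, where the fading and DSP toggling would happen inside each patch.

```python
class Scene:
    """One self-contained patch: audio can be toggled and its level faded."""
    def __init__(self, name):
        self.name = name
        self.audio_on = False  # per-patch DSP state (new in Max 6)
        self.level = 0.0       # output fade level, 0.0..1.0

    def activate(self):
        self.audio_on = True   # turn on DSP for this patch only

    def deactivate(self):
        self.audio_on = False  # free up CPU once faded out

def crossfade(outgoing, incoming, steps=10):
    """Fade the incoming scene in while the outgoing one fades out,
    then shut down the outgoing scene's audio entirely."""
    incoming.activate()
    for i in range(1, steps + 1):
        t = i / steps
        incoming.level = t
        if outgoing is not None:
            outgoing.level = 1.0 - t
    if outgoing is not None:
        outgoing.deactivate()
    return incoming

# Load all scenes up-front, then move through them one transition at a time.
intro, storm = Scene("intro"), Scene("storm")
current = crossfade(None, intro)
current = crossfade(current, storm)
```

The point of the sketch is the ordering: the incoming patch gets audio before its fade-in starts, and the outgoing patch keeps audio until its fade-out completes, so no allocation or DSP startup happens mid-transition.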
One could of course also dynamically open and dispose of patches, but based on prior experience I’m hesitant to do massive memory allocation and freeing in the heat of performance unless I have to. Anyway, all of this is well worth investigating further, and perhaps bringing into the discussion of possible future directions for Jamoma development.