March 6, 2011
Tomorrow and the day after I will be a guest lecturer at the music technology programme in Trondheim. Tomorrow I’ll be giving a presentation on my own work on sound installations. Tuesday I’ll be giving a 4-hour workshop on sound spatialisation, introducing students to some spatialisation techniques, including VBAP, DBAP, ambisonics and ViMiC. The lecture and workshop are part of a course on sound in public spaces.
In preparation for the workshop I have made a new Jamoma module for monitoring the waveform of multichannel signals.
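The basic idea behind such a monitor can be sketched outside of Max: for display, each channel is reduced to per-block minimum and maximum sample values, which is what gets drawn as the waveform. The sketch below only illustrates that reduction, using numpy; it is not the Jamoma module itself, and the block size is an arbitrary illustration value.

```python
import numpy as np

# Minimal sketch of the reduction behind a multichannel waveform display:
# each channel is split into blocks, and each block is reduced to its min and
# max sample value for drawing. Illustration only, not the Jamoma module.

def waveform_envelope(signal, block_size=512):
    """signal: (channels, samples) array -> (channels, blocks, 2) min/max envelope."""
    channels, samples = signal.shape
    blocks = samples // block_size
    trimmed = signal[:, : blocks * block_size].reshape(channels, blocks, block_size)
    return np.stack([trimmed.min(axis=2), trimmed.max(axis=2)], axis=2)

if __name__ == "__main__":
    # Four channels of two seconds of noise at 44.1 kHz as stand-in material.
    sig = np.random.uniform(-1.0, 1.0, size=(4, 2 * 44100))
    env = waveform_envelope(sig)
    print(env.shape)  # (4, 172, 2)
```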
March 8, 2011
I’ve just finished two days of teaching in Trondheim. I’ve done the same lecture/workshop combo a few times before, but it always takes a different direction, depending on the interests of the students and where their questions and enquiries take us.
This time I got a lot of questions about how I compose for my sound installations, in other words, how I organize the flow of sound in time using algorithmic approaches to real-time processing. This is something I have spent a lot of time developing strategies for over the last 14 years, but for some reason I have not talked or written much about it so far.
This ventured into a discussion of the creation of texture by combining asynchronous layers in 2/1 from Brian Eno’s Music for Airports, the emphasis on simplicity in Fluxus, Ken Friedman’s essay on 40 years of Fluxus, the tides of the harbor in Bergen, and a remark on my music by the late Gérard Grisey during a master class: that I seemed to be striving for simplicity and complexity at the same time, and that maybe it would be an idea to make up my mind (a remark I still find relevant, in a productive way).
For the workshop, 16 loudspeakers have been set up in a lab/studio. This will be a permanent installation, and will make it a lot easier for the students to experiment with surround and 3D sound.
I believe we found a speaker setup that makes sense and makes the studio modular and usable with a range of software and for different purposes. The studio has a ring of 8 speakers in the horizontal plane, and two additional rings of four speakers above and below. This is not sufficient to create true 3D illusions, but one can think of it as adding a degree of vertical width to the horizon, with sources able to emerge from above or below it.
Speakers have been set up as:
- 1 – 2: Front Left – Front Right
- 3 – 4: Front Center – Rear Center
- 5 – 6: Rear Left – Rear Right
- 7 – 8: Left – Right
- 9 – 10: Front Left Upper – Front Right Upper
- 11 – 12: Rear Left Upper – Rear Right Upper
- 13 – 14: Front Left Lower – Front Right Lower
- 15 – 16: Rear Left Lower – Rear Right Lower
It would be nice to add a sub to the space as well. If so, and if one could switch between the Rear Center speaker and the sub, the first 6 channels could be used with Logic Pro or SoundTrack for 5.1 production, as long as the Logic Pro loudspeaker configuration is set to the ITU standard rather than the Logic default configuration. Likewise, the first 8 speakers could be used with Logic Pro and SoundTrack for 7.1 production.
This makes the room more modular, able to work with DAWs and linear sound editing as well as real-time environments such as Csound and Max.
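To make the re-use concrete, here is a minimal sketch of how the first studio outputs would line up with ITU-style 5.1 and 7.1 channel labels, assuming the Rear Center feed is re-patched to a sub; the channel orderings are my assumption and should be checked against the DAW's actual surround configuration.

```python
# Sketch of mapping surround channel labels onto the first studio outputs,
# assuming output 4 (Rear Center) is re-patched to a subwoofer for the LFE.
# Channel orderings are assumptions; verify against the DAW's configuration.

ITU_5_1 = {"L": 1, "R": 2, "C": 3, "LFE": 4, "Ls": 5, "Rs": 6}
SURROUND_7_1 = {**ITU_5_1, "Lss": 7, "Rss": 8}  # adds the side Left/Right pair

if __name__ == "__main__":
    print("5.1:", ITU_5_1)
    print("7.1:", SURROUND_7_1)
```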
We have been trying out VBAP, DBAP, ambisonics and ViMiC in the space. Used in the right way, they all work well, though they still sound different from each other. The patches have been left behind for the students to keep playing with.
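To give a flavour of how one of these techniques works, here is a minimal sketch of DBAP (distance-based amplitude panning) gains for the horizontal ring of the studio: gains fall off with distance from the virtual source according to a rolloff exponent and are then power-normalised. This is a simplified illustration, not the Jamoma implementation, and the speaker radius, rolloff and blur values are assumptions.

```python
import math

# Minimal DBAP sketch: amplitude per speaker falls off with distance from the
# virtual source, and the gains are power-normalised so the sum of squares is 1.
# Speaker positions (metres) are assumptions for an 8-speaker horizontal ring;
# this is a simplified illustration, not the Jamoma DBAP implementation.

SPEAKERS = [  # (x, y) positions of the horizontal ring, radius 2 m
    (math.cos(math.radians(a)) * 2.0, math.sin(math.radians(a)) * 2.0)
    for a in range(0, 360, 45)
]

def dbap_gains(source, speakers, rolloff_db=6.0, blur=0.1):
    """Return one amplitude gain per speaker for a virtual source position."""
    a = rolloff_db / (20.0 * math.log10(2.0))    # rolloff exponent
    dists = [
        math.sqrt((sx - source[0]) ** 2 + (sy - source[1]) ** 2 + blur ** 2)
        for sx, sy in speakers
    ]
    raw = [1.0 / d ** (a / 2.0) for d in dists]  # unnormalised gains
    k = math.sqrt(sum(g * g for g in raw))       # power normalisation
    return [g / k for g in raw]

if __name__ == "__main__":
    for gain in dbap_gains((0.5, 1.0), SPEAKERS):
        print(f"{gain:.3f}")
```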
March 9, 2011
For the upcoming Borealis festival I’ll be doing a crazy stunt, taking part in a 12-hour endurance performance with Le Jury. They will be improvising, and I’ll be doing video based on measurements of their brain activity.
I’m currently testing out the iCube X Wi-microSystem and BioWave. It seems like I have finally managed to get it working properly and deliver some sensible data.
This is what bright ideas look like….
March 17, 2011
The Jamoma team is pleased to announce the release of the Jamoma 0.5.2 implementation for Max/MSP. This second revision of Jamoma 0.5 has been in the works for the past 6 months and brings a lot of bug fixes and new features from the exchanges and efforts of all the Jamoma community members. Please visit the download section to get a copy. Enjoy this new version!
Features include:
- A large and peer-reviewed library of modules for audio and video processing, sensor integration, cue management, mapping, and exchange of data with other environments
- Extensive set of abstractions that facilitates development and documentation of Max/MSP projects
- Specialized sets of modules for work on spatial sound rendering, including support for advanced spatialization techniques such as Ambisonics, DBAP, VBAP, and ViMiC.
- Modules for work on music-related movement analysis
- Powerful underlying control structures that handle communication across modules
- Strong emphasis on interoperability
- Native OSC support, making it easy to access and manipulate processes via external devices and interfaces (see the sketch after this list)
- Comprehensive documentation through maxhelp files, reference pages and a growing number of online tutorials
- Easily extendable and customizable
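As a small illustration of what the OSC support mentioned above enables, here is a minimal sketch of sending parameter changes to a patch from Python over UDP. It assumes python-osc on the sending side and that the receiving Max patch forwards incoming messages into the Jamoma namespace (for example via udpreceive); the port number and the module addresses below are hypothetical illustration values, not part of any actual patch.

```python
from pythonosc.udp_client import SimpleUDPClient

# Minimal sketch: driving module parameters over OSC from outside Max.
# Assumes the Max patch listens on UDP port 9001 (e.g. with udpreceive) and
# routes incoming messages into the Jamoma namespace. The addresses and port
# below are hypothetical illustration values.

client = SimpleUDPClient("127.0.0.1", 9001)

client.send_message("/mymodule/audio/gain", 0.5)  # set a gain parameter
client.send_message("/mymodule/audio/mute", 1)    # mute the module
```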
Jamoma is an open-source project for the development of audio/video applications, plugins and Max/MSP-like environments. It offers several C++ frameworks for structured programming and is based on modular principles that allow functionality to be reused while all parameters remain customizable to specific needs.
Jamoma has been in development for more than five years and is used for teaching and research within science and the arts. It has provided a performance framework for composition, audio/visual performances, theater and gallery installation settings. It has also been used for scientific research in the fields of psychoacoustics, music perception and cognition, machine learning, human-computer interaction and medical research (more info here).
Jamoma is distributed under a BSD license and the sources can be freely downloaded at http://github.com/jamoma. Development is currently supported by BEK – Bergen Center for Electronic Arts, 74 Objects, Electrotap, GMEA – Centre National de Creation Musicale d’Albi-Tarn and University of Oslo. Further details can be found here.
March 20, 2011
This coming Saturday I’ll take part in a 12-hour performance as part of the Borealis festival:
What happens to a musician’s brain when under duress? For this event, Le Jury will improvise non-stop for 12 hours, during which the activity of their brains will be monitored by researchers from Bergen University. This data will be displayed to the public, as well as being used by electronic media artist Trond Lossius to create a live visual display.
I’ll be using BioWave sensors and iCube Wi-microDig bluetooth sensor systems for monitoring brain activity.
The sensor system seems surprisingly stable. I’m notoriously pessimistic about how well (or whether) sensor systems work, but in all of the testing done so far the sensor system has been totally reliable. Maybe it’s saving all the problems for the performance…
On the other hand, I am struggling with the BioWave sensor itself. Occasionally I have had it working and providing interesting data, but much of the time the sensor signals are totally confused.
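When the raw signal is this erratic, some conditioning before it reaches the rest of the patch helps. Below is a minimal sketch of the kind of conditioning I have in mind, a median filter followed by an exponential moving average over a stream of readings; it is an illustration of the general approach, not the actual Max patch, and the window size and smoothing factor are arbitrary.

```python
from collections import deque

# Minimal sketch of conditioning a noisy sensor stream: a median filter to
# knock out spikes, followed by an exponential moving average to smooth the
# result. Window size and smoothing factor are arbitrary illustration values,
# not settings from the actual patch.

class SensorSmoother:
    def __init__(self, window=5, alpha=0.2):
        self.buffer = deque(maxlen=window)  # recent raw readings
        self.alpha = alpha                  # EMA smoothing factor
        self.ema = None

    def process(self, value):
        self.buffer.append(value)
        median = sorted(self.buffer)[len(self.buffer) // 2]
        self.ema = median if self.ema is None else (
            self.alpha * median + (1.0 - self.alpha) * self.ema
        )
        return self.ema

if __name__ == "__main__":
    smoother = SensorSmoother()
    noisy = [0.1, 0.12, 0.95, 0.11, 0.13, 0.14, 0.02, 0.15]  # spike-ridden readings
    print([round(smoother.process(v), 3) for v in noisy])
```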
I’m currently in the process of jamomizing patches. The screenshot above shows three instances of the module interfacing between sensors and Max.
March 21, 2011
ArraySync is a networked QuickTime synchronizer for multi-screen presentations, offering playback of QuickTime content across multiple displays attached to one computer or over a local area network as if it were coming from a single video source.
The software is available from the App Store. The server is reasonably priced, while the client software is free. All in all it is a compelling alternative to Dave Jones’ DVD synchronizers, and worth checking out next time there’s a need for synced multi-screen video.
Update
HC Gilje also pointed me towards the free Multiscreener. From the screenshots, it seems clear that Multiscreener is made in Max. I guess that while testing the application, one should check if playback is smooth or jittery (pardon the pun).
March 26, 2011
What happens to a musician’s brain when under duress? For the Borealis Avgarde Extreme event Le Jury (Ricardo Odriozola, Einar Røttingen, Jostein Stalheim) improvised non-stop for 12 hours, during which the activity of their brains was monitored by researchers from Bergen University. The data was displayed to the public, as well as being used by electronic media artist Trond Lossius to create a live visual display.