A new beta version of the ICST ambisonics tools is available for download on the ICST website.
Some highlights are:
Furse-Malham or N3D encoding/decoding
Up to eighth order in N3D (the corresponding B-format has 81 channels!)
Unified messages and attributes for ambiencode~, ambidecode~ and ambipanning~ externals
Four different coordinate systems in spherical or cartesian definitions are supported (see also SpatDIF.org)
New distance attenuation algorithms, including none
Correct B-format orientation
Ported fully to the Max 5 SDK (5.0.8+ required)
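The 81-channel figure is easy to verify: a full-sphere B-format of order n carries one channel per spherical harmonic up to that order, i.e. (n + 1)² channels in total. A quick arithmetic check in Python (my own sketch, not part of the ICST tools):

```python
# The "81 channels" claim follows from counting spherical harmonics:
# a full-sphere B-format of order n has (n + 1)**2 channels.

def bformat_channels(order: int) -> int:
    """Channel count of a full 3-D B-format of the given order."""
    return (order + 1) ** 2

for n in (1, 3, 8):
    print(f"order {n}: {bformat_channels(n)} channels")
# order 8 yields 81 channels, matching the release notes.
```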
The support for various coordinate systems seems to be a follow-up to discussions we had at a spatialisation workshop at GMEA in Albi in January, as well as the paper presented at SMC2009, available here. I am currently checking the tools out so that we can get the new beta release incorporated into the master branch of Jamoma.
The documentation for the objects is excellent. One minor detail has caught my interest: The orange color used in some of the help patches. I am sure that I have seen that color somewhere before…
Update: Jasch reports back that there’s no relationship to M4L.
We would like to announce the 2nd International Symposium on Ambisonics and Spherical Acoustics, which will be organized by IRCAM together with LIMSI-CNRS and France Telecom/Orange Labs. The Symposium will provide an intensive exchange between industrial and academic researchers working in various research areas related to spherical acoustics.
The field of spatial sound reproduction is interdisciplinary by nature and closely related to a number of computer science and engineering areas such as acoustics, mathematics, signal processing, and perception. The symposium focuses on discussing the various problems and solutions concerning the capture, analysis, and re-synthesis of sound fields applying spherical acoustics, for example higher-order Ambisonics (HOA) and spherical microphone arrays.
The symposium is going to be divided into oral presentations (keynotes and submissions), poster sessions, and two demonstration sessions (artistic and technical). This symposium will benefit from a hemi-spherical loudspeaker array that will be installed for the duration of the symposium, which will be held in the Espace de projection, the variable acoustics performance hall of Ircam.
Original contributions are encouraged in, but not limited to, the following topics:
General considerations on spherical acoustics theory
Ambisonics for sound scene reproduction and virtual acoustic environments
Spherical microphone array systems and signal processing
Capture and analysis of radiation patterns
Spherical acoustic holography
Synthesis of directional and focused sound sources
Spherical loudspeaker array systems and signal processing
Theoretical considerations on comparative subjective and objective studies
Standardization, exchange, implementation and hardware issues
Submission
Submissions will be judged based on extended abstracts (1000 words). Procedures to submit papers, posters, and demo sessions are detailed at the symposium website http://ambisonics10.ircam.fr. Final papers must be camera-ready conforming to the format specified on the submission website.
Several excellent papers will be selected for collective submission to Acta Acustica united with Acustica. These papers will be expanded versions of the presented works, and will go through the standard peer review process.
The official language of the symposium is English.
Important dates
Extended abstract (1000 words) submission due: January 8, 2010
Notification of acceptance: February 19, 2010
Camera-ready copy (full paper): March 12, 2010
Registration opens: March 22, 2010
Late Registration: April 12, 2010
Submission of Audio Material for Testing: April 19, 2010
Symposium Dates: May 6-7, 2010
Symposium Chairs
General Co-Chairs
Markus Noisternig (IRCAM – UMR CNRS)
Brian FG Katz (LIMSI – CNRS)
Rozenn Nicol (France Telecom – Orange Labs)
Technical Program Co-Chairs
Nicolas Misdariis (IRCAM – UMR CNRS)
Olivier Warusfel (IRCAM)
Jamoma is an open-source project for structured programming in Max/MSP/Jitter, based on modular principles that allow functionalities to be reused while all parameters remain customizable to specific needs.
Jamoma has been developed for more than five years and has been used for teaching and research within science and the arts. It has provided a performance framework for composition, audio/visual performances, theatre and installation gallery settings. It has been also used for scientific research in the fields of psychoacoustics, music perception and cognition, machine learning, human computer interaction and medical research.
Features include:
A large and peer-reviewed library of modules for audio and video processing, sensor integration, cue management, mapping, and exchange of data with other environments
Extensive set of abstractions that facilitates everyday work with Max/MSP
Specialized sets of modules for work on spatial sound rendering, including support for advanced spatialization techniques such as Ambisonics, DBAP, ViMiC and VBAP
Modules for work on music-related movement analysis
Powerful underlying control structures that handle communication across modules
Strong emphasis on interoperability
Native OSC support, thus making it easy to access and manipulate processes via external devices and interfaces
Comprehensive documentation through maxhelp files, reference pages and a growing number of online tutorials
Jamoma is easily extendable and customizable
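As an aside, the core idea behind DBAP (distance-based amplitude panning) is compact enough to sketch in a few lines: every speaker contributes, with gains falling off as its distance to the virtual source grows, normalized for constant total power. The following Python sketch is my own illustration based on the published DBAP description, not Jamoma's actual implementation; the rolloff and spatial-blur parameters are assumptions:

```python
import math

def dbap_gains(source, speakers, rolloff_db=6.0, spatial_blur=0.01):
    """Distance-based amplitude panning: every speaker contributes,
    with gains falling off as the speaker's distance to the virtual
    source grows; gains are normalized for constant total power."""
    # Rolloff exponent a: 6 dB per doubling of distance gives a ~= 1.
    a = rolloff_db / (20.0 * math.log10(2.0))
    dists = [math.dist(source, s) for s in speakers]
    # Spatial blur keeps gains finite when the source meets a speaker.
    raw = [(d * d + spatial_blur * spatial_blur) ** (-a / 4.0) for d in dists]
    k = 1.0 / math.sqrt(sum(g * g for g in raw))
    return [k * g for g in raw]

# Source at the centre of a unit square of four speakers: equal gains.
print(dbap_gains((0.5, 0.5), [(0, 0), (1, 0), (0, 1), (1, 1)]))
```

Unlike VBAP, no speaker triangulation or listener "sweet spot" is assumed, which is what makes the technique attractive for irregular speaker layouts in installation settings.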
Jamoma 0.5 was a major effort. Originally it was envisioned as a port from Max 4 to Max 5. However, we did a lot more than that, and significantly overhauled major portions of Jamoma to dramatically improve performance, stability (particularly on Windows), and ease of use. We have also improved the documentation, distribution, and organization of Jamoma.
Here are some resources to get started with Jamoma 0.5:
Requirements: Jamoma 0.5 requires Max 5.0.8 or later, and works on OSX 10.4 or later (Intel) and Windows XP or later
Jamoma is licensed under the GNU LGPL. Jamoma is an open source development initiative with more than 20 contributors. Development is supported by BEK – Bergen Center for Electronic Arts, CIRMMT – the Centre for Interdisciplinary Research in Music Media and Technology, McGill University, Electrotap, GMEA – Centre National de Création Musicale d’Albi-Tarn and University of Oslo, with additional financial support from a wide range of institutions and funding bodies. Further details can be found here.
This week two days were spent participating in a staff seminar at the Bergen National Academy of the Arts on the subject of research in the arts. As part of the seminar we visited Vestlandets kunstindustrimuseum to see “Vær så god sitt” (“Please be seated”), an exhibition of current Norwegian furniture design.
The exhibition is well worth seeing, but the part that really caught my attention was the design of the exhibition space itself. Along the walls slowly drifting video patterns became a moving wallpaper, a visual analogy to the ambient music of Brian Eno, a visual musique d’ameublement. The video never forced itself on me; its slow drift quietly invited a slowing down, creating a space for thinking. The video was accompanied by occasional ambient sound, blending in with the background noise from the streets outside the building.
Chairs and sofas in a museum could easily end up being objects in a contextual void, detached from their usual environments. Sound and video invited me as a visitor to imagine the spaces where the furniture might belong, while also hinting at how sound and video might be embedded in interior design and architecture.
I’ve gotten myself a patchbay for the studio. This will mainly make it easier to switch between the different sets of loudspeakers I have: a pair of studio monitors, 7 speakers for spatialisation, and a hemisphere speaker. Initial patchbay configuration below.
Another cool addition is that I have finally figured out how to set up internet tethering on the iPhone, so that I can be online while at the studio.
The Multiplayer by Marlon Schumacher is a Max/MSP-based application for general-purpose multichannel file playback and ambisonic decoding, developed as part of the OMprisma Library for IRCAM’s OpenMusic 6.1.
It is implemented as a set of modules complying with the Jamoma framework.
Multiplayer 0.1b7 is now available as part of Ircam Forum.
Version 4.1.5 of the Ircam Spat Max/MSP library for live spatialisation was released a week ago as part of the Ircam Forum Studio software, available on a subscription basis. After some initial tests, I am really impressed: this is Spat done right.
Almost all functionalities have been turned into Max externals, and the interface has been totally redesigned with easy-to-use access to complex sets of spatialisation techniques. And of course, it’s a bit thrilling to see DBAP included among the available spatialisation techniques. Here’s a list of changes in 4.x according to the Spat documentation itself:
Version 4.x of Spat~ is a major turning point in the software’s development (the code was completely rewritten) to provide flexibility in the program’s architecture and offer effective user interfaces. The main functions and characteristics of version 4.x are:
a large number of objects can be modulated and scaled (up to 64 inputs/outputs)
intuitive control interfaces facilitate the configuration of the loudspeaker setup and the manipulation of the sound sources
IrcamVerb: a new multichannel reverb (stereo, 5.1, etc.) with the usual control interface
a large family of panning techniques (quad, 5.0, octophony, simulation of XY, MS or AB recordings, VBAP 2D or 3D, DBAP, Ambisonic B-format and higher orders, 3D binaural with compensation of near-field effects, surround LRS/LCRS, etc.)
improved reverb quality (the reverb tail is less “metallic”)
complete documentation of the externals and patches
takes full advantage of the new features of Max 5 (attributes, preset management with pattr)
coding/decoding modules that manage different formats (Ambisonics, transaural, surround, UHJ, etc.)
better coherence in the architecture and the management of messages
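For readers unfamiliar with B-format: a mono signal is encoded into the first-order channels W, X, Y and Z by weighting it with direction-dependent gains. A minimal sketch using the classic Furse-Malham convention (these are the textbook equations, not Spat’s actual code):

```python
import math

def encode_bformat(sample, azimuth_deg, elevation_deg=0.0):
    """Encode a mono sample into first-order B-format (W, X, Y, Z).
    W carries the omnidirectional component, attenuated by 1/sqrt(2)
    in the Furse-Malham convention; X, Y, Z are figure-of-eight
    components along the three axes."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    w = sample / math.sqrt(2.0)
    x = sample * math.cos(az) * math.cos(el)
    y = sample * math.sin(az) * math.cos(el)
    z = sample * math.sin(el)
    return w, x, y, z

# A source straight ahead (azimuth 0, elevation 0) lands entirely in W and X.
print(encode_bformat(1.0, 0.0))
```

Higher-order encoding extends the same idea with more spherical-harmonic channels, which is where the different ordering and normalization schemes (Furse-Malham, N3D, etc.) start to matter.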
All papers from past Sound and Music Computing Conferences are now available in one place, with a search facility covering authors, titles, and the text of the abstracts.
Max4Live was released recently. I haven’t spent much time checking it out yet, but just from opening Live and looking at the Max4Live tutorials, the following quote seems acutely relevant:
The great benefit [of tools like Cubase] is that they remove the issue of skill, and replace it with the issue of judgement. With Cubase or [an imaging program like] Photoshop, anybody can actually do anything, and you can make stuff that sounds very much like stuff you’d hear on the radio, or looks very much like anything you see in magazines.
So the question becomes not whether you can do it or not, because any drudge can do it if they’re prepared to sit in front of the computer for a few days; the question then is: of all the things you can now do, which do you choose to do?
This is a whole issue for which there are not manuals! And nobody has actually discussed that side of it – they’re all so thrilled by the thought that I can do anything! The real interesting question of ‘So what do I do?’ isn’t being addressed.
This is why I say I’d rather talk about abstract things because that’s the kind of question that I want to talk about. We’re now in a sea of unmoored judgements. Of people thinking that the judgement part is the bonus, the bit you put on at the end, once you’ve got a good beat… I think that’s got to be right at the beginning of the process; it’s got to steer you all the way through.