September 8, 2010
I am currently attending DAFx-10, the 13th International Conference on Digital Audio Effects, hosted by IEM in Graz. Yesterday I presented the paper The Jamoma Audio Graph Layer, written by Tim Place, myself and Nils Peters. The paper is now available for download, and here is the abstract:
Jamoma Audio Graph is a framework for creating graph structures in which unit generators are connected together to process dynamic multi-channel audio in real-time. These graph structures are particularly well-suited to spatial audio contexts demanding large numbers of audio channels, such as Higher Order Ambisonics, Wave Field Synthesis and microphone arrays for beamforming. This framework forms part of the Jamoma layered architecture for interactive systems, with current implementations of Jamoma Audio Graph targeting the Max/MSP, PureData, Ruby, and AudioUnit environments.
One of the important features of the framework is how it enables work on multichannel audio signals using single patch cords in patching environments such as Max and PureData. Patches for spatialisation in Max/MSP typically end up being rather elaborate, as illustrated above. In contrast, the patch below illustrates how Jamoma Audio Graph is able to pass multiple audio channels around using one patch cord only.
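To give a rough idea of the concept in code rather than in a patch, here is a minimal sketch of an audio graph in which a single connection carries an arbitrary number of channels. This is not the Jamoma Audio Graph API; the class names, and the choice of Python and NumPy, are purely illustrative assumptions.

    import numpy as np

    class Node:
        """A unit generator with a single inlet connection."""
        def __init__(self):
            self.upstream = None

        def connect_from(self, source):
            # one connection, regardless of how many channels it carries
            self.upstream = source

        def process(self, frames):
            raise NotImplementedError

    class MultichannelNoise(Node):
        def __init__(self, channels):
            super().__init__()
            self.channels = channels

        def process(self, frames):
            # produce a whole (channels x frames) block in one go
            return np.random.uniform(-1.0, 1.0, (self.channels, frames))

    class Gain(Node):
        def __init__(self, gain):
            super().__init__()
            self.gain = gain

        def process(self, frames):
            # pull a multichannel block from upstream and scale every channel
            return self.gain * self.upstream.process(frames)

    # 16 channels travel over a single connection
    source = MultichannelNoise(channels=16)
    gain = Gain(0.5)
    gain.connect_from(source)
    block = gain.process(frames=64)
    print(block.shape)   # (16, 64)

The point is that adding channels only changes the shape of the block travelling over the connection, not the number of connections in the patch.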
September 9, 2010
The call for NIME 2011, the 11th International Conference on New Interfaces for Musical Expression, is out. The call also asks for contributions towards an exhibition on sonic interaction design that will be curated in collaboration with the EU COST IC0601 Action on Sonic Interaction Design (SID).
The exhibition will be produced by BEK, curated by me and Frauke Behrendt, and will take place at the Norwegian Museum of Science, Technology and Medicine. For the exhibition we are looking for works using sonic interaction within arts, music and design as well as examples of sonification for research and artistic purposes.
11th International Conference on New Interfaces for Musical Expression (NIME 2011)
30 May – 1 June 2011, Oslo, Norway
http://www.nime2011.org
We invite you to be part of the International Conference on New Interfaces for Musical Expression. The core purpose of the NIME conference is to present the latest results in design, development, performance and analysis of/for/with new interfaces/instruments for musical use. In 2011 we will put an extra emphasis on performance aspects related to NIME, something which will also be addressed in a symposium, workshops and master classes in the days leading up to the conference.
We invite the following types of submissions (see below for details):
- Paper (oral/poster/demo)
- Performance
- Performance Plus Paper
- Installation/exhibition
- Workshop
IMPORTANT DATES
- SID exhibition proposals: 5 November 2010 (22:00 CET)
- Paper/performance/installation/workshop submission: 31 January 2011 (22:00 CET)
- Review notification: 18 March 2011
- Final paper deadline: 26 April 2011
TOPICS
- Novel controllers and interfaces for musical expression
- Novel musical instruments
- Augmented/hyper instruments
- Novel controllers for collaborative performance
- Interfaces for dance and physical expression
- Interactive game music
- Robotic music
- Interactive sound and multimedia installations
- Interactive sonification
- Sensor and actuator technologies
- Haptic and force feedback devices
- Interface protocols and data formats
- Motion, gesture and music
- Perceptual and cognitive issues
- Interactivity design and software tools
- Sonic interaction design
- NIME intersecting with game design
- Musical mapping strategies
- Performance analysis
- Performance rendering and generative algorithms
- Machine learning in performance systems
- Experiences with novel interfaces in live performance and composition
- Surveys of past work and stimulating ideas for future research
- Historical studies in twentieth-century instrument design
- Experiences with novel interfaces in education and entertainment
- Reports on student projects in the framework of NIME related courses
- Artistic, cultural, and social impact of NIME technology
- Biological and bio-inspired systems
- Mobile music technologies
- Musical human-computer interaction
- Multimodal expressive interfaces
- Practice-based research approaches/methodologies/criticism
CALL FOR PAPERS
We welcome submissions on all of the above-mentioned (and other) topics related to the development and artistic use of new interfaces for musical expression. There are three different paper submission categories:
- Full paper (up to 6 pages in proceedings, longer oral presentation, optional demo)
- Short paper/poster (up to 4 pages in proceedings, shorter oral presentation or poster, optional demo)
- Demonstration (up to 2 pages in proceedings)
All accepted papers will be published in the conference proceedings.
Paper submission information
http://www.nime2011.org/submission/#papers
CALL FOR PERFORMANCES
We welcome submission of pieces for three different types of performance venues:
- Concert hall performance
- Club performance
- Foyer “stunt” performance
Any type of NIME performance piece is welcome, but we would particularly like to encourage the use of motion capture techniques in performance. For this we can make available several different types of motion capture systems (Qualisys, XSens, Optitrack, Mega). Network pieces and mobile music pieces are also encouraged. Within reasonable limits, we may be able to provide musicians to perform pieces.
Performance submission information
http://www.nime2011.org/submission/#performances
CALL FOR PERFORMANCE PLUS PAPER
To support more cross-disciplinary work between scientific and artistic research, we highly encourage submission of performance pieces related to papers. Here the scientific presentation may be the basis for the artistic presentation, or vice versa.
Submissions within this category must be made for both the piece and the paper, with a clear note that the paper and piece belong together. Evaluation will be based on the combined quality of both submissions.
CALL FOR INSTALLATIONS
In connection with NIME 2011 an exhibition on sonic interaction design will be curated in collaboration with the EU COST IC0601 Action on Sonic Interaction Design (SID). For the exhibition we are looking for works using sonic interaction within arts, music and design as well as examples of sonification for research and artistic purposes. The exhibition will take place at the Norwegian Museum of Science, Technology and Medicine and run for three months over the summer 2011. We also aim to include works in public spaces to be presented at various locations in Oslo (possibly) for a shorter duration in parallel with NIME.
This is a curated exhibition, and there is a possibility for funding and assistance to be provided for selected artists. Note that there is an early deadline for submissions within this category (5 November). Further enquiries concerning the SID exhibition should be addressed to the curators (exhibition@nime2011.org).
In addition to the exhibition, we also call for installations to be presented during the NIME conference only. These may be foyer location installations or room-based installations.
Installation submission information
http://www.nime2011.org/submission/#installations
CALL FOR WORKSHOPS
We call for short (3 hours) or long (6 hours) workshops and tutorials. These can be targeted towards specialist techniques, platforms, hardware, software or pedagogical topics for the advancement of fellow NIME-ers and people with experience related to the topic. They can also be targeted towards visitors to the NIME community, novices/newbies, interested student participants, people from other fields, and members of the public getting to know the potential of NIME.
Tutorial proposers should clearly indicate the audience and assumed knowledge of their intended participants to help us market to the appropriate audience. Workshops and tutorials can relate to, but are not limited to, the topics of the conference. This is a good opportunity to explore a specialized interest or interdisciplinary topic in depth with greater time for discourse, debate and collaboration.
Admission to workshops and tutorials will be charged separately from the main conference. Proposer(s) are responsible for publishing any workshop proceedings (if desired) and should engage in promoting their event amongst their own networks. Workshops may be cancelled or combined if there is inadequate participation.
Workshop submission information
http://www.nime2011.org/submission/#workshops
If you have any inquiries, please email us at post@nime2011.org. Please feel free to forward this message to others.
On behalf of the NIME 2011 committee,
Alexander Refsum Jensenius (University of Oslo)
Kjell Tore Innervik (Norwegian Academy of Music)
September 14, 2010
Karen Kipphoff and Trond Lossius are collaborating on a video work to be presented at the ICCI 360 panoramic video festival in Plymouth (UK) in September 2010. The work is titled “At the Zoo”. It is an anthropomorphic composition that uses patterns and montaged, abstract sound based on the French tradition of acousmatic music.
September 16, 2010
Journals on sound and auditory culture seem to be popping up everywhere these days. Here’s yet another one.
“Interference” is a peer-reviewed open-access journal providing a forum on the role of sound within cultural practices, and a trans-disciplinary platform for the presentation of research and practice in areas such as acoustic ecology, sensory anthropology, sonic arts, musicology, technology studies and philosophy. It is funded by the Graduate School of Creative Arts and Media (Gradcam), the Centre for Telecommunications Research (CTVR) in Trinity College Dublin, and Dublin Institute of Technology’s School of Art, Design and Printing.
They have just announced the first call for papers detailed below and are accepting abstracts from interested parties.
Interference: A Journal of Audio Culture is pleased to announce a call for papers for the inaugural issue, “An Ear Alone is Not a Being”: Embodied Mediations in Audio Culture, to be launched in the spring of 2011.
To what extent are acoustic practices embodied? How does physical embodiment shape auditory cognition? What role do processes such as biofeedback and genetic algorithms play in contemporary musical practices? What kinds of idealised listening subjects are encoded in acoustic algorithms such as codecs, head-related transfer functions or binaural recording specifications? How are psychoacoustic effects deployed for and against the body? How might we speak about listening practices that extend beyond the ear to sensorial or haptic accounts of audition?
The inaugural issue of Interference investigates the mediative role of the body in sonic practices. Embodied mediation presumes a reciprocal process: we explore how listening experiences and acoustic practices are shaped by corporeality, but we also attend to the many ways in which these processes work upon that body, through psychophysical affect and the representation and encoding of embodied subjects in acoustic performances, technologies, and cultural artefacts. Submissions may take the form of academic articles or statements of research and practice.
Proposals for this issue of Interference might address, though not exclusively, some of the following issues and points of discussion:
- Research in Embodied Music Cognition
- Phenomenology of Sound
- Biofeedback: the role played by corporeality in sonic arts, musical practices, performance and design
- Sonic Dominance: the use of acoustic properties as affective tools
- Sonic Mediations: Exploring the mediative role of the body between cognitive response and acoustic environment. Exploring the relationship between the body and tools for acoustic composition and performance.
- Encoding Bodies: An exploration of how the body might be represented or encoded in practices as diverse as instrument design, networked performances, psychoacoustic algorithms, prosthetic devices etc.
- Haptic or intersensory listening practices
Interference balances its content between academic and practice-based research and therefore accepts proposals for both academic papers and accounts of practice-based research.
Deadline for abstracts: October 31st 2010, to editor@interferencejournal.com
Deadline for final papers: January 15th 2011, to editor@interferencejournal.com
For more information and submission guidelines, please see www.interferencejournal.com or contact editor@interferencejournal.com
September 19, 2010
“When I decided to keep all my sounds it was to build a kind of – as Borges said – a library of Babel that makes everything exist like a pyramid, like a memory.”
Ever since I first heard La Ville by Pierre Henry, his archive of tapes has been lurking at the back of my head. According to others who have attended concerts at his house, the rooms are filled with tapes from floor to ceiling.
In spite of having a strong sympathy with his way of working, collecting sounds into an ever-growing archive, I have never found myself doing the same. Instead, most of my works have emerged from processing either synthetic sound or relatively short samples. And I have never gotten into the habit of regularly doing field recordings. I guess there are a number of reasons why, and some of them might be worth examining to better understand and challenge the current driving mechanisms of my own processes.
This summer, checking out Aperture, an asset management and processing program for photos, I realised that I have never found a sustainable strategy for managing sound files and recordings, and this might be one very practical reason for my limited use of recordings.
Aperture is not only a container for photos with added processing possibilities. What caught my attention was the use of metadata and image analysis to offer advanced ways of navigating the dataset of photos. Photos can be organized by folders, projects, groups, tags, faces and geolocation. Some other examples of image database searches based on automated analysis are TinEye and Multicolr.
“It is like having a library full of the books that you treasure. It is also a collection of all sorts of things – noises, voices, animals, instruments… and all this was planned a long time ago, a long time before sampling appeared. I decided to keep my sounds. There are sounds, of course, which don’t satisfy me anymore, that have to be thrown away, that have to be sacrificed, but apart from that, sounds are a part of what I like around me. They are my family-circle.”
In response to a request on the microsound mailing list, I got several suggestions for apps to check out.
Snapper
Snapper is handy for previewing sound files directly in Finder, and comes with a generous trial period of 100 days. It is easy to get addicted to, and I’ll probably get a licence, but it doesn’t provide any functionality beyond the Finder search tools for navigating libraries of sounds.
Sample Manager, Sound Grinder and Sound Grinder Pro are more about batch processing of audio than asset management, and they are not what I am currently looking for.
The strongest current contenders are AudioFinder, Library Monkey and its more expensive sibling, Library Monkey Pro. A more expensive solution, aimed mainly at the film industry, is SoundMiner, but that’s out of my league.
Library Monkey
Library Monkey offers possibilities for creating libraries, sets and bins. It also has a rich set of tags and fields that can be set. Sounds can be previewed (or rather pre-listened) or opened in other audio programs for further reviewing and processing. In general it doesn’t seem to provide any form of audio analysis, not even simple things such as reporting the peak value. Waveform display and editing seem to be available in the Pro version only. The Pro version also supports AudioUnit and VST processing. Judging from posts on various forums, the developers of Library Monkey emphasise that one of the strengths of the application is the ability to import sample libraries from CD with all indexing information intact.
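For reference, reporting a peak value really is simple. Here is a minimal sketch, assuming the third-party soundfile and numpy Python libraries are available and a hypothetical file name; it has nothing to do with Library Monkey itself:

    import numpy as np
    import soundfile as sf

    def peak_dbfs(path):
        data, samplerate = sf.read(path)      # data: (frames,) or (frames, channels)
        peak = np.max(np.abs(data))           # absolute peak across all channels
        return 20.0 * np.log10(peak) if peak > 0 else float("-inf")

    print(peak_dbfs("field_recording.wav"))   # hypothetical file; prints e.g. -3.2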
AudioFinder
AudioFinder seems to use the screen more efficiently, and the waveform display is welcome compared to Library Monkey. Multichannel audio files are displayed as one channel only, and for long recordings the waveform does not seem to be displayed at all. AudioFinder supports AudioUnit processing. It is also cheaper than Library Monkey (70 USD as compared to 129 USD, or 399 USD for the Pro version).
Tim Preble has provided an excellent overview of metadata support in a number of relevant applications.
Initially I’ll go with AudioFinder and see how that works for me. I still believe there is a lot of potential for further development of this kind of application; they are nowhere near the advanced analysis and search capabilities of Aperture, their image sibling. It would, for example, be useful to have geotags and a map display for field recordings. A paper by Diemo Schwarz and Norbert Schnell entitled Sound Search by Content-Based Navigation in Large Databases, presented last year at the SMC conference in Porto, offers some suggestions for added search possibilities.
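As a rough sketch of what geotagging could look like in the absence of application support, one could keep coordinates in a small sidecar file next to each recording. The file name, fields and coordinates below are purely illustrative assumptions:

    import json
    from pathlib import Path

    def write_geotag(audio_path, latitude, longitude, note=""):
        # store coordinates in a JSON sidecar next to the audio file
        sidecar = Path(audio_path).with_suffix(".json")
        sidecar.write_text(json.dumps({
            "file": Path(audio_path).name,
            "latitude": latitude,
            "longitude": longitude,
            "note": note,
        }, indent=2))

    # hypothetical field recording and coordinates
    write_geotag("harbour_2010-09-19.wav", 60.3913, 5.3221, "harbour, windy")

A map display could then be built by collecting these sidecar files, and the same data could later be migrated into whatever metadata scheme an asset manager ends up supporting.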
“First there is the initial creative act – which is to choose one sound rather than another. It is a sort of emotional and aesthetic intention, one chooses this sound which will later become longer or will be transformed. In fact, choices are related to harmony, harmonisation and counterpoint as well as orchestration. To me, it is close to what I learnt during my classical music training. (…) My own intention in composition is to orchestrate the sounds in relation to each other, it is being a composer, it is composing – there you have it.”
Here I believe Henry points towards some of the other reasons why my use of field recordings has been limited so far. I often find them to be so rich in spectral content and so dense in the number of layers or sound sources present that they are already “full”, and they pose challenges in terms of how I can compose with them, in time as well as in space. But this is something I would like to challenge myself to re-examine and investigate further, possibly finding approaches that could circumvent the limitations I have found in most of the recordings I have done up until now.
I guess part of this question will also be how to find gear for field recordings that is lightweight, provides high-quality sound recordings, offers possibilities for directional recording (zooming in on the desired source, leaving others out) and is functional in windy conditions. This might be asking for a lot, but it would be required in order to add it to the geek rucksack that already contains my laptop and more, so that it could be with me at all times.
All quotes from John Dack (1999): “Pierre Henry’s continuing journey”, Diffusion, vol. 7.
September 19, 2010
“‘panoptICONS’ addresses the fact that you are constantly being watched by surveillance cameras in city centres. The surveillance camera seems to have become a real pest that feeds on our privacy. To represent this, camera birds – city birds with cameras instead of heads – were placed throughout the city centre of Utrecht where they feed on our presence.
In addition, a camera bird in captivity was displayed to show the feeding process and to make the everyday breach of our privacy more personal and tangible.”
artists: Helden (Thomas voor ’t Hekke and Bas van Oerle)
location: Utrecht, Netherlands
September 21, 2010
There is no such thing as a copy. In the world of digitalized images, we are dealing only with originals – only with original presentations of the absent, invisible digital original. The exhibition makes copying reversible: it transforms a copy into an original.
Boris Groys
One could of course argue that this is not the real thing, but then – please, anybody – show me this real thing.
Hito Steyerl