
Blog archive for April 2009

File attachments

April 1, 2009

I’m stranded at Schiphol for 7 hours today, but I have found a power connection and am trying to spend the time in a sensible way:

I just added support for file attachments on the site. I’ve added a bibliography to the text section, with links to the papers, and also added download links in the software section.

Sensuous Knowledge 6 – Call for presentations

April 5, 2009

An International Working Conference on Fundamental Problems of Artistic Research

Solstrand, Norway, 23–25 September 2009

We would like to invite practitioners from all art disciplines, including design and the performing arts, to submit proposals for presentations at the sixth Sensuous Knowledge Conference.

Under the heading “Reflection, Relevance, Responsibility”, this year’s conference will focus upon three important aspects of artistic research. Critical reflection is often put forward as one of the differences between artistic practice and artistic research, and promoted as an alternative to the scientific conventions of thinking and writing. However, very little investigation has been done on the notion of reflection – what is it expected to contain and convey? With the rapid and unforeseen global changes to our existence, surely we have to look up from our own, potentially narcissistic reflection in which we only see ourselves mirrored.

In the previous Sensuous Knowledge conferences the aim has been to develop ways of talking about, analysing and evaluating examples of artistic research, and to consider the special character of cognition related to artistic creativity. This year we want to explore these questions further and, through your contributions, shift the debates into a more urgent context. How do we as artistic researchers, through critical reflection, address and engage with what we see around us? How can artistic research make a meaningful and relevant contribution outside of itself? And how can it acknowledge the responsibility of art and research towards the world outside the academy?

The annual international Sensuous Knowledge conferences, arranged since 2003 by Bergen National Academy of the Arts (Kunsthøgskolen i Bergen, KHiB) and supported by The Research Council of Norway, are known for their distinctive format: About 70 artists and designers involved in higher art and design education meet for three days in the spectacular surroundings at Solstrand, close to Bergen.

The programme offers keynote speeches and, first and foremost, time for group discussions of selected examples of artistic research projects. The presentations are made by some of the group members themselves – experienced researchers as well as newcomers to the field. One of the most significant traits of the conferences is that the participants stay together in their designated groups for all three days, bringing the discussions to a deeper level of engagement.

Please send us one A4 page (two maximum) describing

  1. the theme or research question of your proposal and the main points of the presentation,
  2. the issues that the presentation intends to raise in the discussion,
  3. the character of the presentation, especially what kind of visual or acoustic material you may want to present,
  4. your professional practice background in relation to your proposal.

Please note that 30 minutes will be the maximum available time for each presentation.

Proposals should be sent before 13 May to
sensuous.knowledge@khib.no

You will be informed whether your proposal has been accepted by 1 June.

A general invitation to participate in the conference with details about registration, conference fee, etc. will be distributed around the beginning of April. A preliminary programme for the conference will be launched in the first part of June.

See reports from the earlier conferences at
www.sensuousknowledge.org

Information about Solstrand Hotel & Bad:
www.solstrand.com

Bergen National Academy of the Arts (KHiB) is one of the two independent institutions of higher education in the visual arts and design in Norway. KHiB has three departments: the Department of Fine Art, the Department of Design and the Department of Specialised Art. Students can take a three-year Bachelor’s degree and a two-year Master’s degree in the following areas: visual art; interior architecture; furniture design; spatial design; visual communications; photography; printmaking; ceramics and textiles. A part-time study programme in creative curating is also offered. Research fellows at PhD level are included in the academic staff. KHiB has 300 students and a staff of 100, full-time and part-time.

www.khib.no

New blog feature: Image overlay

April 13, 2009

I’ve just implemented image overlay on the web site using Lightbox JS by Lokesh Dhakar. Of course, I was so eager to get it going that I didn’t take the time to read his page properly and realize that he has an even slicker version 2 of the JavaScript.

Updating seems mainly to be a question of substituting the JavaScript itself. The HTML tags seem to be the same.

Here’s an example:

[Image: Ekkofisk_3]

AudioMulch 2 to be released May 1st

April 13, 2009

The release date for AudioMulch 2 has been announced as May 1, 2009.

[Image: Am2a10_mac]

UPDATE: The release date has been rescheduled to June 5th. On May 1st a beta version will be made available to all registered users.

Paper accepted for ICMC 2009

April 13, 2009

We have just been notified that a paper on DBAP (distance-based amplitude panning), written by myself, Pascal Baltazar and Theo de la Hogue, has been accepted as a short paper for ICMC 2009.

The DBAP algorithm is available as part of Jamoma. Below is a screenshot from one of the help files.

See you in Montreal, Nils!

[Image: DBAP help file screenshot]

Mail server down, temporary e-mail address

April 15, 2009

UPDATE: Fixed – mail is up and working again.

We (read: I) did a security update to the BEK server today, and bumped into some problems that Apple has been aware of for a month but has not fixed. Nor has Apple withdrawn the server security update. Not impressed!

We are working on solving it, but it will take some time. In the meantime, I have a temporary e-mail address at Gmail.

Stupid me! I knew about the problem, but took it for granted that it had been solved by now…

Call for proposals: Sonic Interaction Design book

April 19, 2009

Abstract Deadline: 30 May 2009
Abstract Length: 500 words

We invite your contribution to a book on Sonic Interaction Design, which discusses new perspectives on interactive sound and emergent research topics. The book will be published by MIT Press in spring 2011, is supported by the COST ICT Action on Sonic Interaction Design (IC0601), and is edited by Karmen Franinovic and Stefania Serafin.

The submitted texts should contain case studies representative of an application area, such as mobile technology, gaming, rehabilitation or wellness. Your contribution may focus on one single project (2000 words), or it may give an overview of a whole application/thematic area (up to 4000 words). Projects need to be contextualized within their respective domains, such as:

  • Mobile technology
  • Rehabilitation and medical applications
  • Product sound design
  • Artistic installations
  • Game interfaces
  • Toys
  • Architecture
  • Music
  • etc.

Please submit your abstracts to sts@imi.aau.dk by 30 May 2009.

Schedule
Deadline for Abstracts: 30 May 2009
Deadline for Papers: 30 August 2009
Notification of Accepted Papers: 30 September 2009
Final Draft: 1 November 2009

Best Regards,
Karmen and Stefania

[STSM] Two week research visit to Ircam

April 19, 2009

I arrived in Paris today for a two-week visit to Ircam. I have been granted a short-term scientific mission (STSM) within the COST IC0601 Action on Sonic Interaction Design (SID), and will be working with the real-time musical interactions team at Ircam: Frederic Bevilacqua, Diemo Schwarz and Norbert Schnell.

We kick-started the collaboration with a workshop at BEK in March. The workshop was led by Diemo Schwarz, and offered an introduction to FTM, Gabor, MnM, Suivi and CataRT.

The plan is to investigate how FTM and Jamoma can be integrated, or more specifically, how FTM can be used to develop solutions for advanced control of Jamoma modules.

I have been offered a place to stay at the Didascalie studios while I’m here, which greatly helps reduce living expenses. I also expect to spend some time with members of Didascalie, catching up on what they are working on, in particular their work within the Virage platform.

There is quite a bit of overlap in the aims of development going on within Virage and my own ambitions for the stay at Ircam, and I think there might be benefits from being able to discuss the two in parallel.

The SID project encourages STSM participants to blog about their activities during the visit, and I’ll do my best.

[STSM] Advanced real-time control of parameters by integration of Jamoma and FTM in Max

April 19, 2009

(This is the project description for the short-term scientific mission.)

Introduction

Real-time technologies in the arts

The development of real-time technology has opened new possibilities for artistic expression, enabling live generation of and interaction with media. The real-time processing of media and live input, often combined with possibilities for physical computing (O’Sullivan & Igoe, 2004), has become an integral part of a variety of contemporary artistic practices such as works for the stage, live music performances using new instruments for musical expression, interactive and generative installations, and sound art. A major challenge in these kinds of works is how to develop control systems that maintain access to a rich set of parameters while remaining manageable in a live performance setting.

Accessing complex sets of parameters in real-time through a structured approach

Max/MSP/Jitter is one of several programming environments for real-time processing of media. According to one of its creators, “Max/MSP does not enforce readability, consistency, or efficiency on its users. There are no real standards for interoperability at the level of the patcher…” (Zicarelli, 2002).

Jamoma attempts to address this issue by providing a framework for modular development in Max with a structured API for interfacing with modules (Place & Lossius, 2006). Jamoma modules communicate using the Open Sound Control protocol (Wright, 2002), extended through an object-oriented approach to OSC nodes, conceiving of them as having properties and methods (Place, Lossius, Jensenius, Peters, & Baltazar, 2008). The process of assigning additional properties to parameters, defining their behaviour, increases the possibilities for continuous transformation and shaping of the artistic material (Place, Lossius, Jensenius, & Peters, 2008). The OSC namespace implementation in Jamoma also provides possibilities for querying the system for the namespace of available nodes, as well as for retrieving information on the current values of nodes and node properties, along lines somewhat similar to those suggested by Jazzmutant (2007). In this way Jamoma partly offers solutions to a fundamental question of how to maintain access to and control of complex sets of parameters and data in real-time systems.
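
To make this concrete, here is a small Python sketch of the idea: an OSC node carrying a value plus queryable properties. This is an illustration only, not Jamoma’s actual implementation (which consists of Max externals written in C), and all names are invented.

    # Hypothetical sketch of an OSC node with a value, properties and a
    # query method. Not Jamoma's actual code; all names are invented.
    class OSCNode:
        """A parameter node addressable by an OSC path."""

        def __init__(self, address, value=0.0, **properties):
            self.address = address               # e.g. "/mymodule/gain"
            self.value = value
            self.properties = dict(properties)   # behaviour: type, range, ramp...

        def query(self, property_name=None):
            """Return the current value, or one of the node's properties."""
            if property_name is None:
                return self.value
            return self.properties.get(property_name)

    gain = OSCNode("/mymodule/audio/gain", value=0.5,
                   type="msg_float", range=(0.0, 1.0), ramp=200)
    print(gain.query())          # 0.5
    print(gain.query("range"))   # (0.0, 1.0)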

Apart from being used for artistic purposes, Jamoma is also used for research and prototyping of protocols for capturing and communicating data streams, e.g. gestural data using GDIF – Gestural Description Interchange Format (Jensenius, Kvifte, & Godøy, 2006), (Nymoen, 2008) and spatial audio information according to SpatDIF – Spatial Sound Description Interchange Format (Peters, 2008).

Controlling complex sets of parameters in real-time environments

The Jamoma API offers simple access to all parameters of all modules, but relatively few modules so far take advantage of this for advanced control purposes. The exceptions are a text-based cue list system, a number of modules for one-to-one mappings between parameter values, and a series of modules for working on SDIF – Sound Description Interchange Format – data (Nymoen, 2008). Development of further solutions for control of modules is ongoing within the French research platform Virage.

FTM is a shared library and a set of modules extending the signal and message data flow paradigm of Max, permitting the representation and processing of complex data structures such as matrices, sequences or dictionaries, as well as tuples, MIDI events or score elements (Schnell, Schwarz, Bevilacqua, & Müller, 2005). FTM forms the basis for the MnM toolbox, dedicated to mapping between gesture and sound, and more generally to statistical and machine learning methods (Bevilacqua, Müller, & Schnell, 2005), as well as Gabor, a unified framework for a number of advanced audio processing techniques (Schnell & Schwarz, 2005).

Objectives

FTM and its accompanying libraries are developed at IRCAM – Institut de Recherche et Coordination Acoustique/Musique. The objective of the proposed STSM to IRCAM is to investigate possibilities for advanced control of complex systems for real-time processing of media by integrating the use of the Jamoma and FTM libraries in Max:

  • The first goal will be to develop a firmer understanding of how the FTM and MnM libraries work and how they might be used for advanced control of Jamoma modules.
  • The translation of OSC data into FTM-compatible objects will be investigated. Of particular relevance are snapshots of module states and time-based streams of data. Potential FTM-based representations are vectors and matrices of floating-point values, break-point functions, score objects and scores of time-tagged matrices and vectors. This will enable the capturing of instant states of a Jamoma system as well as of sequences of events over time.
  • From this, methods for mapping data will be investigated. In particular, matrix-based representations of data will be used to investigate linear many-to-many mappings and mappings based on Principal Component Analysis.
  • Sequences of time-tagged recordings of data can be considered objects and processed further, e.g. in order to morph between recorded sets of gestures, or for live interaction with predefined or recorded sequences of parameters over time.
  • Finally, I hope to achieve a firmer understanding of gesture and score following techniques as implemented in MnM and Suivi, as a basis for future research into how these can be used to control systems of Jamoma modules, e.g. by having gesture or score following algorithms trigger complex states and events.

All solutions developed are to be implemented as Jamoma modules and will be distributed under the GNU LGPL licence.

Bibliography

Bevilacqua, F., Müller, R., & Schnell, N. (2005). MnM: A Max/MSP mapping toolbox. Proceedings of the 2005 Conference on New Interfaces for Musical Expression.

Jazzmutant. (2007). Extension and Enhancement of the OSC Protocol. Draft 25 July. Jazzmutant.

Jensenius, A. R., Kvifte, T., & Godøy, R. I. (2006). Towards a gesture description interchange format. Proceedings of New Interfaces for Musical Expression, NIME 06 (pp. 176–179). Paris: IRCAM – Centre Pompidou.

Nymoen, K. (2008). A setup for synchronizing GDIF data using SDIF-files and FTM for Max. Report on Short Term Scientific Mission. Action: E0601 – Sonic Interaction Design. Musical Gestures Group, Department of Musicology. Oslo: University of Oslo.

O’Sullivan, D., & Igoe, T. (2004). Physical computing. Sensing the physical world with computers. Boston: Thompson Course Technology.

Peters, N. (2008). Proposing SpatDIF – The Spatial Sound Description Interchange Format. Proceedings of the International Computer Music Conference. Belfast: The International Computer Music Association.

Place, T., & Lossius, T. (2006). Jamoma: A modular standard for structuring patches in Max. Proceedings of the International Computer Music Conference 2006. The International Computer Music Association.

Place, T., Lossius, T., Jensenius, A. R., & Peters, N. (2008). Flexible control of composite parameters in Max/MSP. Proceedings of the International Computer Music Conference. The International Computer Music Association.

Place, T., Lossius, T., Jensenius, A. R., Peters, N., & Baltazar, P. (2008). Addressing classes by differentiating values and properties in OSC. Proceedings of the 8th International Conference on New Interfaces for Musical Expression.

Schnell, N., & Schwarz, D. (2005). Gabor, multi-representation real-time analysis/synthesis. Proceedings of the 8th International Conference on Digital Audio Effects (DAFx’05). Madrid: Universidad Politécnica de Madrid.

Schnell, N., Schwarz, D., Bevilacqua, F., & Müller, R. (2005). FTM – complex data structures in Max. Proceedings of the 2005 International Computer Music Conference. The International Computer Music Association.

Schwarz, D., Beller, G., Verbrugghe, B., & Britton, S. (2006). Real-Time Corpus-Based Concatenative Synthesis with CataRT. Proceedings of the 9th Int. Conference on Digital Audio Effects (DAFx-06). Montreal, Canada.

Wright, M. (2002). The Open Sound Control 1.0 Specification. Retrieved November 30, 2008, from http://opensoundcontrol.org/spec-1_0

Zicarelli, D. (2002). How I learned to love a program that does nothing. Computer Music Journal, 26(4), 44–51.

Call for Ideas for the Practical Session at the next WG3 meeting in Graz

April 20, 2009

In May, Workgroup 3 of the COST-SID Action will meet for two days at IEM in Graz. We will do a practical session to experiment with the Tracked Hand-held Speaker (THS). Below is the call for ideas, sent out by Gerhard Eckel.

Dear WG3 members,

In the 6th WG3 meeting in York it was decided to include in the next WG3 meeting in Graz (May 11/12) a practical session to experiment with a setup I suggested and will describe in this email (see also the extract from the meeting minutes at the end of this email).

Tracked Hand-held Speaker

[Image: THS, side view]

The Tracked Hand-held Speaker (THS) is a simple setup proposed by Gerhard Eckel for collective experimentation in the practical session of the 7th SID WG3 meeting at IEM in Graz, May 11th and 12th, 2009. The THS is meant to be used to build and experiment with a number of sonic interactive artefacts tailored to such a setup.

This is a call for ideas on how the setup could be used and which aspects of SID could be illustrated or explored. Ideas whose implementation needs preparation should be formulated and communicated as soon as possible, so that any necessary development can be realized before the meeting (a description of the technical infrastructure follows).

[Image: THS, front view]

Goals

Here is a list of initial goals for this collective experimentation session. Please comment on them and add more if you feel inclined to do so:

  • develop sonic interaction designs for a particular setup (the THS)
  • produce sonic interactive artefacts as a basis for discussion in SID
  • sketch demos for final project presentation
  • define what can be understood by sonic interaction design in this context
  • compare design strategies / approaches, explore artistic approaches
  • experience prepared designs together, give feedback, evaluate designs
  • develop criteria for evaluation of designs in this restricted setup

Setup

[Image: THS on a table]

The THS is a small speaker without an enclosure, which can easily be held in one hand. Five tracking markers are attached to the speaker with little rods. The speaker is attached to a power amplifier by a long thin cable. The tracking markers allow the speaker to be tracked in 6 degrees of freedom with high spatial (about 1 mm) and temporal (120 Hz) resolution. The tracking is realized with a 15-camera VICON system in a volume of about 100 m³. Thus you may walk around with the speaker and explore different spatial regions. As the high resolution of the system is available in the whole volume, very refined spatial/sound/tactile control is possible. Since the speaker is held in the hand, lower frequencies are also sensed haptically. The tracking data is made available via OSC in the following format (and at a rate of 120 Hz):

/data Tx Ty Tz Ax Ay Az

  • position: Tx, Ty, Tz in Cartesian coordinates in mm; the origin is located on the floor in the center of the room
  • orientation: Ax, Ay, Az in angle/axis representation
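
As a sketch of what receiving and decoding these messages could look like, here is a minimal Python example using only the standard library. The port number and the assumption that the six arguments arrive as big-endian 32-bit floats (OSC type tags ",ffffff") are mine, not part of the original description.

    # Minimal OSC receiver for the /data messages described above.
    # Assumptions: UDP port 9000 (hypothetical) and float32 arguments.
    import socket
    import struct

    def osc_string(data, offset):
        """Read a zero-padded OSC string; return (string, next offset)."""
        end = data.index(b"\x00", offset)
        s = data[offset:end].decode("ascii")
        return s, (end + 4) & ~3   # strings are padded to 4-byte boundaries

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 9000))

    while True:
        packet, _ = sock.recvfrom(4096)
        address, pos = osc_string(packet, 0)
        tags, pos = osc_string(packet, pos)
        if address == "/data" and tags == ",ffffff":
            tx, ty, tz, ax, ay, az = struct.unpack(">6f", packet[pos:pos + 24])
            print(f"position ({tx:.1f}, {ty:.1f}, {tz:.1f}) mm, "
                  f"axis/angle ({ax:.3f}, {ay:.3f}, {az:.3f})")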

As sound synthesis platforms, PD, SC, Max/MSP, or any other OSC-compliant synthesis app may be used. This could either be run from a laptop you bring along or from one of our machines (PC with Windows and Linux or MacPro with OS X) in the lab where we are going to work – the IEM CUBE:

[Image: IEM CUBE]

In addition to the THS setup, the CUBE is equipped with 24 speakers in a hemispherical setup (Tannoy System 1200), an array of 48 speakers mounted on the ceiling, controllable in 16 groups/channels (JBL Control 1), and 2 subwoofers. All 42 channels are accessible via MADI from a MacPro (running Max/MSP and SC) and a Linux machine (running PD and the CUBEmixer). Further details on the CUBEmixer can be found here.

[Image: CUBEmixer screenshot]

[STSM] Getting stuff out of Jamoma – part I

April 20, 2009

One of the aims of Jamoma development is to enable querying of modules for vital information such as:

  • name of modules
  • name of module parameters
  • properties of module parameters
  • state of module parameters

This has to be automated, so that the system can respond dynamically to the modules available at any given time, as well as to changes in module states.
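
As a sketch of what such an automated query could produce, here is a hypothetical miniature namespace in Python (not Max): a dump function walks the available modules and reports every parameter with its state and properties. The module and parameter names are invented.

    # Hypothetical miniature namespace; module and parameter names invented.
    modules = {
        "/degrade": {
            "bitdepth": {"value": 8,   "type": "msg_int",   "range": (1, 24)},
            "mix":      {"value": 0.5, "type": "msg_float", "range": (0.0, 1.0)},
        },
        "/gain": {
            "level": {"value": -6.0, "type": "msg_float", "range": (-96.0, 12.0)},
        },
    }

    def dump_namespace(tree):
        """Yield (osc_address, state, properties) for every parameter."""
        for module, parameters in sorted(tree.items()):
            for name, info in sorted(parameters.items()):
                properties = {k: v for k, v in info.items() if k != "value"}
                yield module + "/" + name, info["value"], properties

    for address, value, properties in dump_namespace(modules):
        print(address, value, properties)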

Details of this work have been published in two papers, for NIME 2008 and ICMC 2008.

The first task to solve in order to start using FTM for controlling Jamoma modules will be to get information on and from Jamoma modules into FTM. And to do so, I have to be able to get the information out of Jamoma.

So the first task for today is to review how this is currently done in Jamoma, in order to build a prototype patch that can be used for interfacing to FTM.

[STSM] Getting stuff out of Jamoma – part II

April 21, 2009

Yesterday I set out to check the current state of the ability to retrieve information on the OSC namespace etc. from Jamoma modules. Here’s an update. The first part can be found here.

The C code used for the core Jamoma externals seems to be in pretty good shape. I have not yet checked whether all the nitty-gritty details laid out in last year’s NIME paper on OSC nodes are fully implemented in the C code of the externals, but as far as I can see the functionality I might need for the remainder of the STSM seems to be in place.

In addition to the externals there are a number of components (or abstractions, in Max terminology) that are supposed to make life easier when dealing with querying and monitoring of Jamoma modules. These components were in a much less complete state. I also found several parallel efforts addressing some of the same needs, as well as awkward naming of functions.

[Image: Get_all_attributes example patch]

I have spent most of today cleaning up. I have added a bunch of new components and, more importantly, written a new Jamoma tutorial on how to retrieve information from existing Jamoma modules. The screenshot is from one of the example patches in the tutorial. Hopefully this will help me remember how this works further down the road…

Some work still remains to be done, mainly trashing old and deprecated components and making sure that they are not used in any of the modules.

CopperLAN

I was skyping with Pascal Baltazar this morning to check if the changes I was planning might cause problems for him and ongoing Virage development.

Pascal also mentioned ongoing work on CopperLAN that sounds relevant and exciting. IRCAM seems to be connected to the project, and I think I might get the chance to discuss this further with Norbert Schnell in the coming days.

[STSM] Day 3

April 22, 2009

A day roughly divided into three parts:

The first part of the day was spent cleaning up the things in Jamoma that I worked on yesterday and the day before, breaking one or two modules in the process. I got an email from Andrew Eales at Wellington Institute of Technology, asking for details concerning the query system for OSC that we use in Jamoma. I tried replying as best I could, including a link to the tutorial I wrote yesterday.

Googling reveals that he seems to be doing interesting work.

A long lunch with Norbert Schnell and Diemo Schwarz, discussing a number of pet subjects, such as the relationships and mutual dependencies between research, development and artistic creation; whether and how artistic development can constitute research; and now, nows and exploded nows.

Yet another topic we touched upon was modularity and reuse in development. While writing up the 2008 annual report for BEK a few weeks ago, I noticed a discrepancy between the approaches to software and hardware development. While software development aims for modularity and generality so that code can be reused (an underlying principle of Jamoma development), hardware development is becoming more and more project-specific, often also being incorporated as part of the visual aesthetics of the work. Below are two examples:

Alog live at Museo Reina Sofia, Madrid.

Sonus barentsicus. Sound installation by Espen Sommer Eide for the Northern Lights Festival, Tromsø, Norway, 2007.

On Monday, as Diemo was introducing me to some (or rather many) of his colleagues at Ircam, I noticed the same thing when we stopped by the office of Emmanuel Flety, who was busy modifying a pair of Adidas shoes for a stage production.

I guess the reason is pretty obvious: while software can be copied and reproduced, reusing hardware means either using the same physical device again or building a copy, at least as long as one is not involved in industrial mass production.

At lunch today, the question was whether composers and artists reuse methods in their artistic processes. No clear conclusions, but I believe that this is often the case.

The afternoon and evening were spent working with Nils Peters on a submission for SMC 2009. The paper summarizes a lot of the work we have been doing on spatialization over a number of years, and the structuring principles underlying the modules.

I will be doing a Jamoma presentation early next week. Tim is doing a presentation of Jamoma tomorrow at the Expo74.

slow . simple

April 22, 2009

slow

The UK artist Ivan Pope is writing about slow art:

“Then I Googled it and found that Grayson Perry had in fact launched a Slow Art movement back in 2005. It doesn’t look like anything much has come of it since, but then that’s the nature of the beast. The Slow Art movement is now well and truly underway, and I’m signing up for it.”

Let’s kill speed, says the Turner prize-winning potter in the first of his new weekly columns.

As a producer of art I feel an increasing pressure to keep in step with our 24/7 culture-on-demand society, and as a consumer I am overwhelmed by a tyranny of choice. I hereby declare the launch of the Slow Art Movement (I have not hired a PR). Artists, I call on you to spend some quality time with a sketchbook before pointing the digital camera out of the car window. Think long and hard, perhaps even discuss your ideas in a Hoxton café before ringing up the fabricator and ordering that monument to a one-liner. Maybe even take the rebellious and increasingly fashionable step of learning how to make something skilfully with your hands.

Picasso set an awesome precedent by knocking out three art works for every day of his life but Vermeer is held in reverence for a surviving oeuvre that wouldn’t crowd out the wall space in a squash court. So I ask gallerists and curators not to expect artists to churn out cool stuff like some cultural ice machine. Often I plan to see a certain exhibition only to find it has been superseded in the blink of an art historian’s eye by the next show. If we all spent longer thinking, making and looking perhaps less bad art would get made, shown and seen.

simple

Another discussion today reminded me of Ken Friedman’s text on Fluxus, where he lists the twelve Fluxus ideas. In particular I was reminded of the notion of simplicity:

Simplicity, sometimes called parsimony, refers to the relationship of truth and beauty. Another term for this concept is elegance. In mathematics or science, an elegant idea is that idea which expresses the fullest possible series of meanings in the most concentrated possible statement. That is the idea of Occam’s Razor, a philosophical tool which states that a theory that accounts for all aspects of a phenomenon with the fewest possible terms will be more likely to be correct than a theory that accounts for the same phenomenon using more (or more complex) terms. From this perspective of philosophical modeling, Copernicus’s model of the solar system is better than Ptolemy’s — must be better — because it accounts for a fuller range of phenomena in fewer terms. Parsimony, the use of frugal, essential means, is related to that concept.

This issue was presented in Higgins’s original list as minimalism, but the term minimalism has come to have a precise meaning in the world of art. While some of the Fluxus artists like La Monte Young can certainly be called minimalists, the intention and the meaning of their minimalism is very different than the minimalism associated with the New York art school of that name. I prefer to think of La Monte as parsimonious. His work is a frugal concentration of idea and meaning that fits his long spiritual pilgrimage, closer to Pandit Pran Nath than to Richard Serra.

Simplicity of means and perfect attention distinguish this concept in the work of the Fluxus artists.

[STSM] Controller interfaces

April 24, 2009

I have brought three controller interfaces with me to use for testing and development: a Wii remote, a Wacom tablet and a Behringer BCF2000 automated fader controller. The latter is particularly handy, as I get instant physical and visual feedback when recalling preset values or sequences of events from Jamoma and FTM.

[Image: Behringer BCF2000]

I have done some testing of the Jamoma module for it this morning to ensure that it is all ship-shape.

There already exists a Jamoma module for the Wacom tablet, but I have not used it before. A quick look at it a week or so ago also seemed to indicate that it could benefit from being GDIF-ed. The wii external for Max tends to crash Max all the time, so that one can wait.

The BCF2000 will give me more than enough to start with.

FiRe Sound recording app for iPhone

April 25, 2009

[Image: FiRe]

FiRe (iTunes link) is a new recording app for the iPhone and iPod touch platform, made by Audiofile Engineering.

[STSM] Getting stuff into FTM

April 25, 2009

[Image: Getting stuff into FTM]

Now that I am able to retrieve data from Jamoma, the next question is how to get it into FTM. Or more specifically:

  • What data formats can be useful for storing OSC information?
  • How do I manage to do so?

Yesterday was spent thinking, testing and discussing possible solutions with Diemo and Norbert. I had hoped to be further along by the end of the week than I am so far, so although it is Saturday I have spent the whole day working.

The conclusion so far on how to store information is to create a “mat” matrix and use it as central storage for all information on the parameters. As I am querying the Jamoma OSC namespace automatically, this matrix now configures itself with one row per parameter and a number of columns (a rough sketch follows the list):

  • Column 0: OSC address of the parameter
  • Column 1: a sequence assigned to the parameter
  • Columns 2, 3, 4, etc.: the various attributes of Jamoma parameters.
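
A rough Python equivalent of this self-configuring matrix, with plain lists standing in for FTM’s mat and an invented, illustrative set of attribute columns:

    # One row per parameter: [address, sequence, attr, attr, ...].
    # The attribute set below is illustrative, not Jamoma's actual list.
    ATTRIBUTE_COLUMNS = ["type", "range", "ramp"]

    def build_registry(namespace):
        """namespace: iterable of (osc_address, attributes_dict) pairs."""
        rows = []
        for address, attributes in namespace:
            row = [address, []]   # column 0: address; column 1: empty sequence
            row += [attributes.get(name) for name in ATTRIBUTE_COLUMNS]
            rows.append(row)
        return rows

    registry = build_registry([
        ("/bcf2000/fader/1", {"type": "msg_float", "range": (0.0, 1.0), "ramp": 0}),
        ("/bcf2000/fader/2", {"type": "msg_float", "range": (0.0, 1.0), "ramp": 0}),
    ])
    print(registry[0])   # ['/bcf2000/fader/1', [], 'msg_float', (0.0, 1.0), 0]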

The next idea is to be able to record a flow of parameters generated in Jamoma into one sequence per parameter, and then play them all back. This will provide me with a modular multitrack sequencing system for OSC addresses.

I have just realized that I need to redo the way the sequences are created, as I need to check what data type each parameter is and assign sequences that are able to hold the appropriate type (summarized in the sketch after the list):

  • Jamoma parameter type msg_int
    => sequence type int
  • Jamoma parameter type msg_toggle
    => sequence type int
  • Jamoma parameter type msg_float
    => sequence type float
  • Jamoma parameter type msg_list
    => sequence type tuple
  • Jamoma parameter type msg_generic
    => sequence type tuple
  • Jamoma parameter type msg_symbol
    => sequence type ???
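
Expressed as a lookup table in Python (the msg_symbol case is left unresolved here too, as above):

    # Jamoma parameter type -> FTM sequence type, as listed above.
    SEQUENCE_TYPE = {
        "msg_int":     "int",
        "msg_toggle":  "int",
        "msg_float":   "float",
        "msg_list":    "tuple",
        "msg_generic": "tuple",
        # "msg_symbol": still undecided
    }

    def sequence_type(parameter_type):
        if parameter_type not in SEQUENCE_TYPE:
            raise ValueError("no sequence type decided for " + parameter_type)
        return SEQUENCE_TYPE[parameter_type]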

The BCF2000 module alone has 102 parameters, each containing a value and 12 attributes – 102 × 13 = 1326 values for a full snapshot. This can easily become a pretty massive amount of data…

I am getting impatient to see the sliders move when I press play, but that will probably not happen before Monday. I have some other things to do tomorrow, a bit of BEK administration to take care of, and I would really like to take some time off and get to La Defense. Kandinsky and Calder are exhibited at the Pompidou at the moment, and I haven’t been to the Palais de Tokyo for a while. We’ll see.

Funding program for artistic research in Austria

April 26, 2009

The Austrian Science Fund (FWF) is about to launch a funding program for artistic research.

I have been invited to participate in an artistic research symposium at the Institut für Elektronische Musik und Akustik in Graz on May 13th, and it will be interesting to learn more about this. This kind of program is sorely missed in the further development of BEK. The Norwegian art and music institutions within higher education are lobbying for a similar program to be established in Norway, but I am unsure whether BEK, as a smaller and independent not-for-profit foundation, would be eligible to receive funding for artistic research projects from such a program.

[STSM] And lift-off…

April 27, 2009

[Image: FTM play and record patch]

Recording and playback of OSC data is working. Now it all looks so simple.

[STSM] Jamoma presentation

April 28, 2009

Wednesday, April 29, 2009, 10:30 @ Salle Shannon, Ircam.

Trond Lossius will be giving a presentation of Jamoma at IRCAM.

The presentation is part of a short-term scientific mission to Ircam within the COST IC0601 Action on Sonic Interaction Design.

Jamoma is a framework for structured development of modules in Max, offering:

  • State handling of parameters
  • A common interface for communication to and from modules using OSC
  • Possibilities for remote communication to and from modules
  • Parameters can be queried and monitored for their state and properties, thus enabling advanced control of modules

Jamoma is developed by an international team of researchers, developers, composers and artists, and is licensed under the GNU LGPL.

Gestural research and work on spatialization have been of particular interest to several of the developers, and Jamoma is used for prototyping development of SpatDIF (Spatial Sound Description Interchange Format) and GDIF (Gestural Description Interchange Format).

[STSM] Brainstorming

April 28, 2009

[Image: Brainstorming]

The discussions yesterday, after I had managed to record and play back OSC streams, prompted some immediate suggestions for further work. The ability to save and read sequences to and from disk, preferably as SDIF files, was the first suggestion brought up.

Instead of just rushing on to the next thing I could imagine doing, I have spent some time thinking about what possibilities and problems arise.

trajectories . calder

April 30, 2009

[Image: Calder]

One topic popping up in several discussions over the last week is the use of trajectories for describing positioning and motion of sound in space. When we experience sound surrounding us, how do we perceive it? Do we organize and understand spatial sound as traces in space-time, as points, as objects with an extended body, as energy or in other ways? Is the use of trajectories for composing in space derived from real-world experience of the sounding space, or from how spatial data can be represented mathematically?

Yesterday I saw the ongoing Calder exhibition at the Centre Pompidou. This is an exhibition of trajectories. Calder’s wire sculptures are drawings in space, demarcations of the physical space occupied by the individuals and animals he set out to portray.

Another artist comes to mind: the UK/Dutch sound artist Justin Bennett. Combining drawing and spatial audio, sound is drawn in space in several of his works: literally, by how loudspeaker cables are laid out, or on a more abstract level, as in the Europa project, where the internal EU borders are presented as drawings as well as being used as trajectories traversing a field of sound recordings from the different sides of those borders.

Pascal Baltazar’s performance of Pyrogenesis at the Borealis festival in March gave me similar associations, providing one answer to a question I have been asking myself for a while: why are so many of the composers concerned with spatial sound also deeply involved in exploring gestures in music and new interfaces for musical expression? Using a Wacom tablet, Pascal was literally drawing the space, creating invisible lines extending beyond the stage, cutting into and through the space occupied by the audience.

[STSM] Designing models

April 30, 2009

The STSM is drawing to a close and I am heading back to Bergen tomorrow. A major part of the last few days has been spent thinking about how to organize data in FTM and sketching various possibilities.

[Image: States and sequences]

There are two basic classes of raw data that I want to be able to capture, representing vertical and horizontal reading and control of the data flow:

  • Instant snapshots of current state.
  • Sequences of timed events.

There are several approaches to capturing and working on both of these.

Storing states

Instant snapshots could be read into a dict (dictionary) or a mat (a matrix containing any kind of data). Often only the single-value parameters will be of interest. These could be collected into an fvec (vector) or a column or row of an fmat (float matrix), provided that another dictionary could be used for mapping between matrix rows or columns and OSC addresses. It is more efficient to append rows than columns to an fmat, so rows would probably be the preferred format.
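
A sketch of the row-based idea in plain Python, with a list of lists standing in for the fmat: a dictionary maps each OSC address to a column, and every captured state is appended as one row. All addresses and values are invented examples.

    # Dictionary mapping OSC addresses to columns; addresses are examples.
    column_for = {"/bcf2000/fader/1": 0,
                  "/bcf2000/fader/2": 1,
                  "/degrade/mix":     2}

    snapshots = []   # the growing "fmat": one row per captured state

    def capture(state):
        """state: dict mapping OSC address -> current single value."""
        row = [0.0] * len(column_for)
        for address, value in state.items():
            row[column_for[address]] = value
        snapshots.append(row)

    capture({"/bcf2000/fader/1": 0.25, "/bcf2000/fader/2": 0.75, "/degrade/mix": 0.5})
    capture({"/bcf2000/fader/1": 0.30, "/bcf2000/fader/2": 0.70, "/degrade/mix": 0.5})
    print(snapshots)   # [[0.25, 0.75, 0.5], [0.3, 0.7, 0.5]]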

Storing sequences

There are no fewer options concerning sequences. FTM provides two time-tagged data structures that could be used: track and sequence. The conclusion from discussions over the past week is that sequence will be preferable to track.

My approach so far has been to use one sequence per parameter. An alternative solution would be to record all parameters into the same sequence as tuples, by combining the OSC address and the argument(s)/value(s). This sequence could then be post-processed and each parameter distilled from it.
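
The post-processing step could look like this Python sketch: one combined recording of (time, address, value) tuples is demultiplexed into one sequence per OSC address. All data is invented.

    from collections import defaultdict

    # A combined recording of (time, address, value) tuples; values invented.
    recording = [
        (0.00, "/bcf2000/fader/1", 0.25),
        (0.05, "/bcf2000/fader/2", 0.75),
        (0.10, "/bcf2000/fader/1", 0.30),
    ]

    def demultiplex(sequence):
        """Distil one (time, value) sequence per OSC address."""
        per_parameter = defaultdict(list)
        for time, address, value in sequence:
            per_parameter[address].append((time, value))
        return dict(per_parameter)

    print(demultiplex(recording))
    # {'/bcf2000/fader/1': [(0.0, 0.25), (0.1, 0.3)],
    #  '/bcf2000/fader/2': [(0.05, 0.75)]}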

Implementing support for a multitrack sequencer in FTM has been considered for a while. That would be an attractive solution, ensuring that all data is bundled in one object while still providing direct access to the individual OSC parameters.

Resampling and interpretation of data

Sequences are recorded at a variable sample rate. Some of the processing I want to be able to do requires sequences to be resampled at a fixed sample rate. Depending on the application I can imagine needing different sampling rates; I might want to align sequences by resampling them at a shared sample rate, or to a constant total number of samples, in spite of the fact that durations might vary.
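
As a sketch of the basic operation, here is fixed-rate resampling of a variable-rate sequence by linear interpolation, in plain Python (the function and names are my own, not FTM’s):

    from bisect import bisect_right

    def resample(events, rate_hz):
        """events: list of (time_in_seconds, value) pairs, sorted by time."""
        times = [t for t, _ in events]
        start, end = times[0], times[-1]
        count = int((end - start) * rate_hz) + 1
        out = []
        for k in range(count):
            t = start + k / rate_hz
            i = bisect_right(times, t)
            if i == len(events):
                out.append(events[-1][1])   # at (or past) the last event
            else:
                (t0, v0), (t1, v1) = events[i - 1], events[i]
                out.append(v0 + (v1 - v0) * (t - t0) / (t1 - t0))
        return out

    # Three irregularly timed events resampled to 10 Hz:
    print(resample([(0.0, 0.0), (0.15, 1.0), (0.4, 0.5)], 10))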

Segmentation of events would be a more advanced analytic approach.

Aggregated data

When resampling sequences it would be useful to be able to store them as vectors with additional meta information attached, e.g. OSC address, sampling rate or duration, in a similar fashion to aggregated data in RubyOnRails web applications:

“Database columns have a limited set of types: integers, strings, dates, and so on. Typically, our applications are richer — we define classes to represent the abstractions of our code. It would be nice if we could somehow map some of the column information in the database into our higher-level abstractions in just the same way that we encapsulate the row data itself in model objects.”1

We have discussed extending FTM classes to support meta information. Under the hood, fmats already support certain specific meta information required by some of the Gabor processing algorithms, in particular PSOLA.

For the time being, the best solution will probably be to use a dictionary representing the aggregated data, and to ensure that it contains a reference to the sequence or vector itself.
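
In Python terms, such a dictionary might simply look like this (all values invented):

    # Aggregated data: metadata plus a reference to the vector itself.
    aggregated = {
        "address":     "/bcf2000/fader/1",   # where the data came from
        "sample_rate": 100.0,                # Hz, after resampling
        "duration":    0.03,                 # seconds
        "data":        [0.25, 0.27, 0.30],   # reference to the vector
    }
    print(aggregated["address"], aggregated["data"])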

Data and Model-View-Controller (MVC) design

A series of time-tagged values can be represented in at least three different ways in FTM: as a sequence, as a bpf (break-point function), or as an fmat with two columns. Regardless of the representation, each record is a point in the two-dimensional time-value coordinate system.

Model-View-Controller (MVC) design has been a source of inspiration for the structural design of Jamoma. Jamoma does not yet apply true MVC separation, but we are moving in that direction. In Jamoma development, MVC has been used to separate the algorithm of the module (the subpatch doing whatever the module is supposed to do), the view (GUI and OSC communication are two views for interacting with the patch) and the controller, made up of the core Jamoma externals handling parameter state and internal and virtual communication within and between patches, as well as the OSC communication taking place internally between these externals and the algorithm.

Maybe MVC principles could also be applied to the data structures themselves. The sequence, break-point function and fmat could be considered different views of the same underlying model. The Jamoma dataspace library could be thought of in similar terms. The dataspace library enables dynamic mapping between different units. For instance, gain levels can be described as linear, MIDI or dB values. These could all be considered different views of the same underlying data. In many ways the C++ code of the dataspace library works this way already.
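
For the gain example, the view idea can be sketched in a few lines of Python: the stored model is linear amplitude, and dB is a derived view of the same datum. (Jamoma’s actual MIDI gain mapping is not reproduced here, as I would only be guessing at its exact formula.)

    import math

    class Gain:
        """Model: linear amplitude. The dB view is derived on demand."""

        def __init__(self, linear=1.0):
            self.linear = linear

        @property
        def db(self):
            return 20.0 * math.log10(self.linear)

        @db.setter
        def db(self, value):
            # Writing through the view updates the underlying model.
            self.linear = 10.0 ** (value / 20.0)

    g = Gain()
    g.db = -6.0
    print(round(g.linear, 3))   # ~0.501: same datum, two views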

How would data be structured in a database application?

[Image: Classes as models diagram]

Recently I developed a new web site for BEK using RubyOnRails, with additional assistance from Espen Sommer Eide. Rails uses a database for storing the dynamic content of the site, and a major design issue was how to construct the database models. Rails suggests certain patterns for linking tables that I find productive when studying relationships between data in other contexts as well.

  • One-to-one: A one-to-one association (…) is implemented using a foreign key in one row in one table to reference at most a single row in another table. A one-to-one relationship might exist between orders and invoices: for each order there’s at most one invoice. 2
  • One-to-many: A one-to-many association allows you to represent a collection of objects. For example, an order might have any number of associated line items. 3
  • Many-to-many: Finally, we might categorize our products. A product can belong to many categories, and each category may contain multiple products. This is an example of a many-to-many relationship. It’s as if each side of the relationship contains a collection of items on the other side. 4

One-to-many relationships seem meaningful for a number of applications. There might be several recorded sequences of one parameter, several resamplings of a sequence, or several recordings of states for the same parameter(s). The diagram above tries to envisage how such relationships could be designed in FTM. In FTM I would probably need to use dictionaries as the joins between models, as sketched below.
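
A sketch of what this could look like with Python dictionaries acting as the joins (all data invented): one parameter owns several recorded sequences, and each sequence owns its resamplings.

    parameter = {"address": "/bcf2000/fader/1", "sequences": []}

    # One-to-many: the parameter owns several recorded takes.
    take_1 = {"events": [(0.0, 0.25), (0.1, 0.30)], "resamplings": []}
    take_2 = {"events": [(0.0, 0.40), (0.2, 0.10)], "resamplings": []}
    parameter["sequences"] += [take_1, take_2]

    # One-to-many again: each take may own several resamplings.
    take_1["resamplings"].append({"rate": 100.0, "data": [0.25, 0.275, 0.30]})

    # Navigating the associations, Rails-style:
    for sequence in parameter["sequences"]:
        print(len(sequence["events"]), "events,",
              len(sequence["resamplings"]), "resamplings")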

How to proceed? Do I try to design a system from the top down, or rather from the bottom up, starting out with the more primitive structures? I guess I will do a bit of both. The emphasis will be on prototyping: a modular exploration of possibilities and solutions through development that does not aim at immediate, fixed solutions of general use, but rather serves as a method for further research into problems and possibilities.

References

1 Ruby, Thomas & Heinemeier Hansson: Agile Web Development with Rails. 3rd Edition. The Pragmatic Bookshelf, 2009, p. 348

2 ibid. p. 338

3 ibid. p. 338

4 ibid. p. 339