
Blog archive for June 2010

Reverse engineering Quartz Composer

June 2, 2010

[Image: Qcinterpolates]

For me, finding ways of rigorously figuring out how the logic works (aka “reverse engineering”) has always been an important aspect of learning to program in new languages and environments. I find myself doing this once again in Quartz Composer.

Above is a simple patch for investigating how Quartz Composer deals with images that are resized. Does it interpolate between pixels or not? In this patch I load an 8×6 pixel checkerboard image with squares sized 2×2. From the output it is obvious that QC does interpolate; if it did not, the edges would have remained hard.

The real use of this is for an application I am developing for a video screening program where we will have to deal with a mix of HD- and SD-resolution video. We want to maintain the maximum resolution possible for all of the videos, and the solution is to use high resolution for the projections that can handle HD, with SD-resolution images being dynamically up-sampled. Ideally one could dream of higher-quality approaches than mere linear interpolation, but at least we’ll avoid pixelation of the SD images.
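The same test can be sketched outside Quartz Composer. Below is a minimal Python/Pillow sketch (my own illustration, not part of the QC patch) that builds the same 8×6 checkerboard, scales it up both with and without interpolation, and checks whether intermediate grey values appear at the edges:

```python
# Sketch of the same interpolation test, using Python/Pillow instead of
# Quartz Composer. Assumes Pillow is installed (pip install Pillow).
from PIL import Image

# Build an 8x6 pixel checkerboard with 2x2 squares, as in the QC patch.
src = Image.new("L", (8, 6))
for y in range(6):
    for x in range(8):
        # Alternate black/white every 2 pixels in both directions.
        src.putpixel((x, y), 255 if ((x // 2 + y // 2) % 2 == 0) else 0)

# Upscale with bilinear interpolation (what QC appears to do) and with
# nearest-neighbour (what it would look like without interpolation).
interpolated = src.resize((80, 60), Image.BILINEAR)
hard_edged = src.resize((80, 60), Image.NEAREST)

# If interpolation happens, intermediate grey values show up along the edges.
print(sorted(set(interpolated.getdata())))  # many values between 0 and 255
print(sorted(set(hard_edged.getdata())))    # only [0, 255]
```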

[HOWTO] Demultiplexing DVD vob files for reauthoring

June 10, 2010

[Image: Demultiplex_dvd_vob_file]

Note to self:

For the video screening part of the Contemporary Artists from South Africa exhibition currently on display at Galleri 3.14 we needed to re-author a DVD. The DVD we received contained a number of movies that could be selected from a menu, but for the screening we instead wanted the program to play back continuously, looping. Although I have done it before, I had forgotten the steps required, and had to do some googling before figuring out how to extract the m2v and audio files from the VIDEO_TS folder of the DVD so that we could re-author it without ripping, which would de- and re-compress the material and lose a lot of image quality in the process. So for next time, here’s the HOWTO:

  1. Download and install MPEG Streamclip.
  2. Open the VIDEO_TS folder of the DVD disk image. Investigate the content, looking for .vob files. The big ones are most likely the ones you want.
  3. Drag one of the .vob files onto the MPEG Streamclip interface.
  4. Choose File > Demux > Demux to M2V and AIFF. If you suspect the audio content to be AC3, you should Demux to M2V and AC3 instead.
  5. Select where to save, and grab a coffee while it processes the .vob file.
  6. You’ll end up with a pair of .m2v and .aif (or .ac3) files that can be imported into DVD Studio Pro. You will have lost no quality in the process as compared to the DVD you started from.

The above most likely won’t work with copy-protected DVD content; at least I have never checked to see if it does.
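For reference, the same lossless demux can also be done from the command line with ffmpeg’s stream copying. The sketch below is an assumed alternative to the MPEG Streamclip workflow above, driven from Python; the input filename and stream indices are hypothetical and may need adjusting for a given disc:

```python
# Hedged alternative sketch: lossless demux of a .vob file using ffmpeg's
# stream copy, driven from Python. Assumes ffmpeg is installed and on PATH;
# the input filename and stream indices are hypothetical examples.
import subprocess

vob = "VTS_01_1.VOB"   # hypothetical input file from the VIDEO_TS folder

subprocess.run([
    "ffmpeg", "-i", vob,
    "-map", "0:v:0", "-c", "copy", "movie.m2v",   # video stream, no re-encode
    "-map", "0:a:0", "-c", "copy", "movie.ac3",   # audio stream, no re-encode
], check=True)
```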

Tests with very wide screen videos

June 10, 2010

I have previously collaborated with Karen Kipphoff on two works, the installation Staged Bodies (Bergen Kunsthall, 2005, and Høstutstillingen, Kunstnernes Hus, 2006) and the stage work Floating Characters (BiT Teatergarasjen, 2007).

Both of these projects used high-resolution wide-screen video projections. For Staged Bodies the projection was created using two projectors side by side, with a total pixel resolution of 2048×407. A limitation of the system was a visible line at the seam between the two projections. For Floating Characters we developed the system further, adding edge-blending to create a seamless transition in the overlap region between the two projectors.

At the time we did not find any standard video editing tools (such as Final Cut) that would enable us to work in the desired format. Instead the video playback system was developed using Max/Jitter with OpenGL GPU acceleration, wrapped up as a number of Jamoma modules.

[Image: 20m_igloo_dome]

The Igloo 360-degree digital projection environment.

 

In the fall we will be doing yet another new project (actually two) as part of the ICCI 360 Festival in Plymouth. The festival will present an innovative programme of screen-based multimedia and performance work from leading international 360 panoramic film makers, photographers and designers, and many local and national artists and organisations.

The festival arena, consisting of a huge 20 m diameter dome incorporating a 62 m × 6 m high projection screen (the size of 7 double-decker buses!) and a performance space, will use cutting-edge digital technology with surround sound and high-resolution projection facilities to provide a fantastic immersive 360 panoramic digital experience for audiences of all ages.

[Image: Inside_igloo]

Inside the Igloo. From a previous screening.

 

The total format of the required video is quite impressive, using 5 video projectors for a total resolution of either 9600×1080 or 5120×768 pixels. Following from our previous work, we’d like to be able to deal with all five video projections as one canvas, rather than having to author 5 parallel and independent channels. This makes it much easier to work on images stretching across several projectors, or moving and rotating across the canvas.
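As a sanity check on the single-canvas idea, both target resolutions divide evenly into five projector-sized slices. The small Python sketch below (illustrative arithmetic only, not part of the actual playback system) lists the crop region each projector would take from the shared canvas:

```python
# Illustrative sketch: how a single wide canvas maps onto five projectors.
# Not part of the Max/Jitter system, just the underlying arithmetic.
def projector_slices(canvas_width, canvas_height, projectors=5):
    slice_width = canvas_width // projectors
    return [(i * slice_width, 0, slice_width, canvas_height)
            for i in range(projectors)]  # (x offset, y offset, width, height)

# 9600x1080 = 5 x 1920x1080 (full HD per projector)
print(projector_slices(9600, 1080))
# 5120x768 = 5 x 1024x768 (XGA per projector)
print(projector_slices(5120, 768))
```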

Over the last two days I have been checking out possible solutions for how to develop the video material. The most obvious option that comes to mind is Final Cut. Although support for HD has improved a lot over recent years, it still doesn’t seem to quite cut it. From the tests I have done so far, the highest resolution for sequences seems to be 4096×4096 pixels, matching the 4K RAW format of the RED ONE camera.

I have also looked into Avid Media Composer. More work is to be done on this, but so far I have found no indication of support for resolutions beyond standard HD (1920×1080). The upcoming v.5 seems to support RED as well.

[Image: Rendering_test]

Prototype for rendering to 9600×1080 in Max.

 

The solution I have a working prototype for is Max and Jitter. Although computation is demanding, Jitter supports float32 9600×1080 matrices, and I am able to record them using the jit.qt.record object.

Initially it seemed as if movies would not play back if the width is greater than 8192 pixels. In QuickTime Player the videos just show up as black, and for a while I was wondering if I was banging my head against a hard-set resolution limit in either QuickTime or the codecs. However, using Max for playback as well, I am able to play back the videos, as illustrated below.

[Image: Playback_test]

Prototype for video playback in Max.

 

This implies that it might be possible to generate video material in Max, record to one large movie, and later cut it into 5 movies, one for each projector. The format is of course way too demanding to deal with in real time, but using scripting of cues, it might be possible to set up a system that can be used to author the video sequence, which is then rendered afterwards.
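A rough back-of-the-envelope calculation shows why real-time playback is out of reach. Assuming 4-channel ARGB float32 matrices and 25 fps (both assumptions on my part), a single frame already weighs in at roughly 158 MB:

```python
# Rough estimate of the data rate for a float32 ARGB matrix at 9600x1080.
# The channel count (4) and frame rate (25 fps) are assumptions.
width, height, channels, bytes_per_value = 9600, 1080, 4, 4
frame_bytes = width * height * channels * bytes_per_value
print(f"{frame_bytes / 2**20:.0f} MB per frame")          # ~158 MB
print(f"{frame_bytes * 25 / 2**30:.1f} GB/s at 25 fps")   # ~3.9 GB/s
```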

It seems a very good idea to try doing something similar to “offline editing” in Final Cut:

Sometimes called the offline or story edit, this is the stage when a program’s creative content is explored, shaped, and eventually refined to the point where the cut can be locked in preparation for finishing. This process is referred to as an “offline” edit because, for disk space and performance reasons, media is often ingested at a significantly lower quality than the final master will be finished at. Although many editors still follow this workflow (especially for formats at high resolutions such as 2K and 4K that are processor-intensive and require a lot of disk space), it’s becoming increasingly common for programs to be ingested and edited at the final level of quality from the very start.

From Final Cut Studio Workflow

There are some major challenges that need to be resolved in order to go down this route. Possibly the biggest is that I’ll need to be able to control all timing processes in Jamoma (in particular ramps and triggering of cues) using a non-real-time clock, so that time can progress in non-real-time according to the progression of frames. This has been on the TODO list for Jamoma for several years (as I have needed it in past projects as well), but I have so far not looked into the SDK for timing objects in Max 5, where anything to do with tempo was heavily revamped.
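The underlying idea of a non-real-time clock is simple enough to sketch: instead of reading the wall clock, the scheduler advances time by exactly one frame duration per rendered frame, so ramps and cue triggers land deterministically. The Python below is only an illustration of that idea, not Jamoma or Max code, and the cue times are hypothetical:

```python
# Illustration of a non-real-time clock: time advances by exactly one frame
# per render pass instead of following the wall clock. Not Jamoma/Max code.
class OfflineClock:
    def __init__(self, fps=25.0):
        self.frame_duration = 1.0 / fps
        self.time = 0.0

    def tick(self):
        """Advance by one frame; call once per rendered frame."""
        self.time += self.frame_duration
        return self.time

clock = OfflineClock(fps=25.0)
cues = {2.0: "start ramp", 5.0: "trigger next scene"}  # hypothetical cue times

for frame in range(150):               # render 6 seconds at 25 fps
    now = clock.tick()
    for cue_time, action in list(cues.items()):
        if now >= cue_time:
            print(f"frame {frame}: {action}")
            del cues[cue_time]
    # ... render and record the frame here ...
```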

 

Update (2010-07-14):

Karol Kwiatek points out that Adobe After Effects is capable of dealing with videos of this resolution.

Musique d’ameublement

June 11, 2010

[Image: Amp-speaker_48]

[Image: Kube_12]

TODO: Refurbish my home. Both of the above are loudspeakers or furniture, depending on your interests…

Via bornrich (thanks, Nils!)

ICMC Proceedings available online (at last)

June 15, 2010

The proceedings of ICMC conferences 1997 – 2008 have finally been made available online:

http://quod.lib.umich.edu/i/icmc/

 

Update 2012-03-06:

The URL has moved and has been updated. Thanks to Chris Powell of the University of Michigan Digital Library Production Service for the notification!

New Zealand Electroacoustic Music Symposium (NZEMS) 2010 Call for Presentations of Research / Works

June 19, 2010

The New Zealand Electroacoustic Music Symposium (NZEMS) 2010

Time and place:

From 1–3 September 2010 the School of Music, University of Auckland, will host a 3-day research symposium on the topic of electroacoustic music. Several of New Zealand’s prominent composer-researchers will be in attendance, including Phil Dadson (TBC), John Elmsly, Eve de Castro Robinson, John Coulter, Ian Whalley, Susan Frykberg, Michael Norris, John Cousins, and Chris Cree Brown. Professor John Young (DMU) will be delivering the keynote presentation. As a special feature of the symposium, a 26-channel discrete ‘acousmonium’ will be installed in Studio One, Kenneth Myers Centre, 74 Shortland St, for the duration of the 3-day event.

The special theme of NZEMS 2010 is ‘Multi-Channel Electroacoustic Music’.

Research in the field of multi-channel electroacoustic music continues to advance at an alarming rate. The once standard 8-channel speaker configuration has now given way to a range of multi-speaker spaced and zoned arrays as variable as the creative works presented on them. Multi-zone and ambisonic field recording has become a typical method of acquiring source materials, and new tools for multi-track spatialisation and transformation are constantly being developed. Hyper-instruments too, many of which are designed to capture human gesture, have made their way into the multi-channel production process, while in the context of live performance, the combination of acoustic and multi-channel electroacoustic instruments is providing vocal and instrumental composers and sonic artists alike with pioneering opportunities.

The scholars of acoustic and spectral space (Bayle, Bregman, Emmerson, Haas, Hall, Lennox, Oliveros, Russo, Schafer, Smalley, et al) remind us that the language of the domain is far from arbitrary – rather, that the effective aesthetics we experience in listening to multi-channel works is founded on more general principles relating to human genetics and experience. Several questions arise: Is the articulation of space at the heart of the language of electroacoustic music? Are there different types/genuses of spaces? What is the relationship between the proximity/location of loudspeakers and the proximity cues of the musical materials? What importance does sound spectra hold in the reception of spatiality? Is the division of space into ‘zones’ a useful heuristic procedure? How does the presence of a human performer impact on the space of a live multi-channel performance? How is space perceived in multichannel sound/multimedia installations (where participants are free to roam within a multi-channel sonic environment)?

Call for presentations of research

Presentations of research are called for concerning all aspects of multi-channel electroacoustic composition. However, submissions are not limited to this field. Research presentations from the following domains are also welcome:

  • Performance-Based Electroacoustic Music / Sonic Art (with live electronics and/or acoustic instruments and/or dance)
  • Acousmatic Electroacoustic Music / Sonic Art
  • Electroacoustic Music with Moving Images
  • Interactive Installation / Sonic Sculpture
  • Electroacoustic Music / Sonic Art with other disciplines

Each spoken presentation will be 20 minutes in duration, with 10 minutes reserved for questions. The inclusion of creative work as part of the presentation is encouraged. Stereo playback and data projection will be made available to all presenters. A basic eight-channel playback system will be made available to presenters on request.

Associated events:

Delegates are welcome to submit creative works for inclusion in the concert series; however, space in these events is extremely limited, as a number of high-profile New Zealand composers have already accepted invitations to present.

Concerts for Diffused (stereo) Works (seating for 60)

  • 1pm 1 September, Kenneth Myers Centre – Young Composers Lunchtime Concert (acousmonium)
  • 1pm 2 September, Kenneth Myers Centre – Video Works Lunchtime Concert (acousmonium)
  • 1pm 3 September, Kenneth Myers Centre – Established Composers Lunchtime Concert (acousmonium)

Concert for Live Works – SONIC ART 2010

  • 7-9pm 2 September, School of Music Theatre (multichannel system available)

Concerts for Multi-Channel Works – repeat performances (seating for 9)

  • 6-9pm 1 September, Kenneth Myers Centre – (acousmonium)
  • 2-6pm 3 September, Kenneth Myers Centre – (acousmonium)

Guidelines for submissions

The deadline for receipt of proposals (abstracts and biographies of contributors) is Friday 30 July 2010. Submissions are to be made electronically to nzems@auckland.ac.nz. Send abstracts of 200-300 words plus a short biography. Please ensure that your name, institutional / organizational affiliation (if any), contact address, telephone, and preferred e-mail address are included with the abstract. Paper acceptance decisions will be emailed to applicants by Friday 6 August 2010.

Registration details

To register on-line please visit http://www.creative.auckland.ac.nz/nzems. All NZEMS events are free, with the exception of the Sonic Art concert on 2 September ($15). Concerts for Multi-Channel Works will be closed to the public (available to NZEMS delegates only).

Contact details

For further details, including programme information, please visit http://www.creative.auckland.ac.nz/uoa/nzems or contact the NZEMS events manager directly at nzems@auckland.ac.nz.

The organisers would like to thank the Australasian Computer Music Association (ACMA) and the Composers Association of New Zealand (CANZ) for publicising the event.

John Coulter
Lecturer in Music
Head of Sound Programmes
School of Music
National Institute of Creative Arts and Technologies
University of Auckland

M4L for Dummies

June 19, 2010

[Image: M4l_for_dummies]

Next week I’ll be doing an audio-visual performance on Guernsey with Jeremy Welsh, as part of ISIC 6, the 6th International Conference on Small Island Cultures. The conference presentation format is a pretty weird point of departure for an artistic contribution, but hopefully it will work out well as an artistic diversion or intervention in between presentations of papers.

Most of the work I do is installations, where sound (and occasionally video) is generated continuously using various generative strategies, sometimes with some kind of live input and interaction. I haven’t done live audio since 2006, when the Tracker collaboration with Frode Thorsen and Gitte Bastiansen was touring the region of Oppland. Live performance needs a different approach to the performance system than my installations, where the system is just left there to do its thing.

When doing projects, I also try to use them as an opportunity to invest effort in long-term learning and investigations that might continue to pay off after the end of the project. This time I have decided to get into Ableton Live and Max for Live; this has been on the TODO list for a long time now. Hopefully it will open up a different approach (for me) to live laptop improvisation.

Running through the M4L tutorials, I am amazed at the strong aesthetic implications of the program: all the tutorials result in a kind of non-personal, de-individualized, techno-light ambient music, designed for chill-out café atmospheres and perfect for early afternoons at Landmark.

 

Note to self: The name of the project is Ghost Architectures, not My Life in the Bush of Ghosts. Yo, man, get those loops out of sync NOW!

Ghost architectures (2010)

June 24, 2010

An audio-visual performance by Jeremy Welsh and Trond Lossius, based on material recorded around the sites of WWII German fortifications on islands off the west coast of Norway, near Bergen.

The work was presented at the 6th International Conference on Small Island Cultures, Guernsey, June 22–25, 2010.

Ghost Architectures

June 24, 2010

[Image: Bunkers_-_098]

[Image: Bunkers_-_030]

Jeremy Welsh and Trond Lossius are participating in the 6th International Conference on Small Island Cultures, Guernsey, June 22–25, 2010.

They will be presenting Ghost Architectures, an audio-visual performance based on material recorded around the sites of WWII German fortifications on islands off the west coast of Norway, near Bergen.

As part of their strategy to control the shipping lanes of the North Sea/North Atlantic, the German forces built fortifications, bunkers and airstrips all along the Norwegian coast. Today, many of these structures still remain, most of them disused relics, overgrown and almost forgotten, but some of them preserved as museums.

Jeremy Welsh has been photographing in and around these relics for a number of years, while sound artist Trond Lossius recently created a site-specific audio installation in a deep tunnel beneath one of the largest subterranean fortresses on the island of Sotra, near Bergen.

The presentation will combine elements of the audio materials used in the installation with photographic images, video and text to realize a work that is more about the experience of these places than a historical account of their wartime significance.

 

The project is supported by Bergen National Academy of the Arts and BEK.

Multicable color codes

June 28, 2010

[Image: Hosa8way]

Using 8-way cables all the time, I have been wondering for years whether there is any recommendation or standard for which color goes where. The other day I stumbled across a proposal from the German Surround Sound Forum, referenced in table 4.1 in Rumsey (2001) [1]:

Track | Signal                       | Comments                                                       | Colour
1     | Left                         |                                                                | Yellow
2     | Right                        |                                                                | Red
3     | Centre                       |                                                                | Orange
4     | Low Frequency Enhancement    | Additional sub-bass and effects signal for subwoofer, optional | Grey
5     | Left Surround                | -3 dB in the case of mono surround                             | Blue
6     | Right Surround               | -3 dB in the case of mono surround                             | Green
7     | Free use in program exchange | Preferable left channel of a 2/0 stereo mix                    | Violet
8     | Free use in program exchange | Preferable right channel of a 2/0 stereo mix                   | Brown

[1] F. Rumsey (2001): Spatial Audio. Focal Press.

 

Update (2010-07-14):

Pascal Baltazar pointed me to a Wikipedia article on surround sound that provides a different color code, quoting the ANSI/CEA-863-A standard from 2005:

Track | Signal               | Colour
1     | Front left           | White
2     | Front right          | Red
3     | Center               | Green
4     | Low frequency        | Purple
5     | Surround left        | Blue
6     | Surround right       | Grey
7     | Surround back left   | Brown
8     | Surround back right  | Khaki
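For quick reference when wiring, the two mappings can also be kept as simple lookup tables. The Python below merely restates the two tables above:

```python
# Track-to-colour lookup tables, restating the two tables above.
GSSF_COLOURS = {            # German Surround Sound Forum (Rumsey 2001)
    1: "Yellow", 2: "Red", 3: "Orange", 4: "Grey",
    5: "Blue", 6: "Green", 7: "Violet", 8: "Brown",
}

CEA_863_A_COLOURS = {       # ANSI/CEA-863-A (2005)
    1: "White", 2: "Red", 3: "Green", 4: "Purple",
    5: "Blue", 6: "Grey", 7: "Brown", 8: "Khaki",
}

print(GSSF_COLOURS[4], "vs", CEA_863_A_COLOURS[4])  # Grey vs Purple for track 4
```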