To the best of my knowledge the Real-Time Composition Library for MaxMSP by Karlheinz Essl has not been ported to Windows. Most of the library consists of patches, and these are easy to port, but he is also using some externals provided by others. Today I hacked together an abstraction working the same way as divmod. It can be downloaded here.
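For readers unfamiliar with it, the behaviour the abstraction mimics can be sketched in a few lines of Python (this is just an illustration of the divmod operation, not the abstraction itself; the function name is mine):

```python
def div_mod(value, divisor):
    """Return the integer quotient and remainder of value / divisor,
    output together as a pair - the behaviour a divmod object provides."""
    quotient = value // divisor
    remainder = value % divisor
    return quotient, remainder
```

So 17 split by 5 gives a quotient of 3 with 2 left over.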
Blog archive for November 2005
And for some reason I couldn’t help noticing this news story on the same day.
After some months on Tiger I’m still not impressed by the “improvements” emphasised in marketing. Looking at the top of the Mac OS X web page I realise that I very seldom use Spotlight and much preferred the old Finder system, completely avoid Dashboard, use Mozilla instead of Safari and Skype instead of iChat AV, and I quickly gave up on Automator because it didn’t do what it claims (complex scripting made simple). I have no use for a .Mac subscription, as the services provided by the BEK server suit me much better, and I ***HATE*** Mail. That’s seven out of eight…
The only good thing about Mail is spell checking, but the rest makes it the worst mail client I’ve ever experienced. And of course it converted all mail boxes to a proprietary format. Now I might have found a way to escape the trap once I can spend a few days offline while converting all mail boxes. The emlx to mbox Converter seems able to convert mail to mbox format so that it can be imported into Thunderbird. Might be a proper Christmas cleaning project for the second half of December.
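I haven't looked inside that converter, but the core of such a conversion can be sketched roughly as follows (a hypothetical Python sketch, assuming the usual .emlx layout of a byte-count line, the raw message, then an XML plist; function names are mine, and "From " line escaping inside message bodies is ignored):

```python
def emlx_to_rfc822(emlx_bytes):
    """Extract the raw RFC 822 message from the contents of an .emlx file.

    An .emlx file starts with a line holding the byte count of the message,
    followed by the message itself and then an XML plist with Mail's own
    metadata, which is dropped here.
    """
    head, _, rest = emlx_bytes.partition(b"\n")
    return rest[:int(head.strip())]

def append_to_mbox(mbox_path, message_bytes):
    """Append one message to an mbox file, prefixed with a 'From ' separator."""
    with open(mbox_path, "ab") as mbox:
        mbox.write(b"From converter@localhost Thu Jan  1 00:00:00 1970\n")
        mbox.write(message_bytes)
        if not message_bytes.endswith(b"\n"):
            mbox.write(b"\n")
        mbox.write(b"\n")  # blank line terminates the mbox entry
```

Running emlx_to_rfc822 over every .emlx file in a mailbox folder and appending each result to one mbox file would produce something Thunderbird can import.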
Thursday the two-week seminar on textiles and acoustics came to an end. In the afternoon Jon, Tor and I visited the three student groups to see and discuss their projects, and in the evening the students did an open presentation at St. Jacobs kirke. Two of the groups had been working in various rooms in this building, while the last one worked in one of the lecture rooms of KHIB in Marken 37. At the end of the public presentation Fast Forward, currently a guest teacher at the Department of Fine Art, did a performance playing steel drum. Jeremy Welsh has posted a picture at his brand new blog built from fragments.
To me this has been a very useful experience. I haven’t had a lot of time to blog lately, and there are some major topics concerning sound installations and space that preoccupy me at the moment and that I have not had a chance to write anything about so far. But the experience of developing the White-out installation at USF Visningsrommet recently, art works I saw at the Lyon biennial (in particular the installations by La Monte Young/Marian Zazeela and by Brian Eno) and this workshop all pull in the same direction: a fundamental questioning of the setting for my works.
I’ve finally gotten around to changing the RSS feeds for the blog so that they now provide the full content of the blog posts, including images, and not just an excerpt. It was not that hard really, after finding useful hints by searching the English-language CoreBlog mailing list for RSS.
Next thing on the ToDo-list: Add methods so that my del.icio.us feed is displayed as part of the blog.
I’ve received some reports of difficulties launching the tl.objects on Mac OS X lately, so I’ve now uploaded them as a disc image as well. I have also uploaded a rar archive for Windows. Unfortunately not all objects are ported to Windows, as I have neither a Windows compiler nor the time or know-how. Everything can be found here:
ChucK is a new audio programming language for real-time synthesis, composition and performance, which runs on Mac OS X, Windows and Linux. ChucK presents a new time-based concurrent programming model which supports a more precise and fundamental level of expressiveness, as well as multiple simultaneous dynamic control rates, a precise and straightforward concurrent programming model, and the ability to add, remove and modify code on the fly while the program is running, without stopping or restarting. It offers composers, researchers and performers a powerful and flexible programming tool for building and experimenting with complex audio synthesis programs and real-time interactive control.
Many software environments have been developed for computer music. Programming environments typically provide constructs to implement synthesis or musical algorithms, whereas runtime environments allow performers to exert parametric control over their programs onstage in real time. The Audicle presents a new type of audio programming environment that integrates the programmability of the development environment with elements of the runtime environment. The result, called the Audicle, is a novel integration of a concurrent smart editor, compiler, virtual machine and debugger, all running in the same address space, sharing data and working together at runtime. We believe this augmentation has the potential to fundamentally enhance the way we write and visualize audio programs, both offline and on the fly.
Our model of the Audicle is integrated with the ChucK programming language and inherits many of its fundamental properties, including: decoupling of data-flow and time, concurrency, and modularity for on-the-fly programming. The Audicle not only provides a useful class of programming tools for real-time composition and performance, but also motivates a new type of on-the-fly programming aesthetic, one of visualizing the audio programming process.
I’m currently working on a project in collaboration with Karen Kipphoff to be part of the “Tracing Spaces” exhibition opening at Bergen kunsthall next Friday. We will be using a large video projection sized 2048 × 407 pixels. It will be composited in real time using Jitter, made up of several layers and with some processing of the various layers: blending, cropping, rotating and moving layers along the x axis, as well as chroma keying. It is very demanding on the computer, so I have spent a lot of time figuring out ways to optimise the processing. We will be using a PC Shuttle with an ATI Radeon 9800 graphics card and split the image to two projectors.
Parts of the processing will take place on the CPU, but I am also using OpenGL to move parts of the processing to the graphics processing unit (GPU). Here are some of the solutions I have come up with so far:
The background image will be rotated in the x direction. I first used jit.rota, but then found that chopping the image in two and mounting it again using srcdim and dstdim doubles the frame rate. I guess the reason is that jit.rota is capable of performing more complex rotations and distortions than we need. As many of the parameters are left unused in our case, I suppose there is some redundant processing going on in jit.rota that we avoid by using only jit.matrix with srcdim/dstdim magic applied.
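The idea behind the srcdim/dstdim trick can be modelled outside Jitter: a horizontal wrap-around rotation is just two block copies, where the slice that falls off one edge is re-mounted at the other. A minimal sketch, using plain Python lists of rows rather than Jitter matrices (function name is mine):

```python
def rotate_x(image, shift):
    """Rotate each row of the image horizontally by `shift` pixels,
    wrapping around: the block that falls off the right edge is
    re-mounted on the left (two block copies per row, nothing more)."""
    shift %= len(image[0])
    if shift == 0:
        return [row[:] for row in image]
    return [row[-shift:] + row[:-shift] for row in image]
```

Because this is only two contiguous copies per row, it avoids all the per-pixel resampling a general rotation object has to be prepared for.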
Using colormode uyvy reduces the dimensions of the matrix from 2048×407 to 1024×407, again doubling the frame rate. The downside is that rotation now can happen only in steps of 2 pixels horizontally, and might appear a bit less smooth. On the other hand one pixel projected onto the wall will equal approx. 0.4 cm, so I assume that we can live with it. Poor frame rate would be much worse.
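Why the matrix halves in width: uyvy packs full-resolution luma with chroma shared between each pair of neighbouring pixels. A sketch of the packing (an illustration of the principle, not Jitter's internals):

```python
def pack_uyvy(y, u, v):
    """Pack per-pixel luma (y) with 2:1 horizontally subsampled chroma (u, v).

    Each packed cell carries (U, Y0, V, Y1) for a pair of neighbouring
    pixels, so a row of 2048 luma values becomes 1024 cells: half the
    cells to move around, at the cost of only being able to address the
    image in horizontal steps of two pixels.
    """
    assert len(y) == 2 * len(u) == 2 * len(v)
    return [(u[i], y[2 * i], v[i], y[2 * i + 1]) for i in range(len(u))]
```

A 4-pixel row with 2 chroma samples packs into 2 cells, matching the 2048 → 1024 reduction above.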
Yesterday Joshua Kit Clayton posted a slab for hardware-based conversion of a uyvy matrix to texture on PC to the MaxMSP list. Compared to software conversion to texture on PC it is twice as fast.
Several of the layers will change sometimes, but not always. An important part of the tweaking is to avoid unnecessary processing when no changes are taking place. One way of doing this is by using “zl reg” to store the names of matrices and gl_textures and pull these, instead of the Jitter objects themselves, when a layer is static, so that I do not force unnecessary recalculation of matrices or textures.
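The pattern is general: recompute an expensive stage only when its input has actually changed, otherwise hand back the stored result. A minimal Python model of the idea (class and attribute names are mine, not anything in Max):

```python
class CachedStage:
    """Wrap an expensive processing step so it only runs when its input
    changes; for unchanged input the cached result is returned, analogous
    to pulling a stored matrix/texture name instead of re-driving the
    Jitter object every frame."""

    def __init__(self, process):
        self.process = process
        self.last_input = None
        self.result = None
        self.calls = 0          # how many times we actually recomputed

    def __call__(self, frame):
        if frame != self.last_input:
            self.result = self.process(frame)
            self.last_input = frame
            self.calls += 1
        return self.result
```

With a static layer banged 25 times a second, the expensive step then runs once instead of 25 times.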
I’ve also been investigating the data flow in Jitter and OpenGL rendering to avoid processing twice. The patch below is quite instructive. I have set all jit.gl.objects to “@automatic 0”. This way I can control the sequence of rendering to make sure I get correct layering. Check e.g. the jit.gl.videoplane-xfade example coming as part of the MaxMSP/Jitter install to see how this works. In addition I can avoid tasks being processed several times for each frame. E.g. the patch shows that when jit.gl.texture receives a jit_matrix, the matrix is copied to the memory of the object. This probably costs a bit of CPU/GPU. If the incoming frame does not change, there’s no need to do it again. The patch also illustrates that if jit.gl.texture receives a matrix, it outputs the new gl_texture. In a more complex patch this might force other jit.gl.objects to update as well. I’m not sure about this, but I fear that they might end up doing it more than once per displayed frame.
A lot of these optimisations would not be possible in the previous version of Jitter (1.2).
Working in Jitter generally implies a lot of work of a very technical nature; it is not necessarily fast and intuitive. On the other hand computers are still quite slow for dealing with real-time video processing if you care about resolution and quality. Let’s face it: 8 bit color resolution, 25 or so frames a second and 2048 pixels wide is still a very low resolution. I’m convinced that we will at some point be able to work at float32 color resolution, 50 frames per second or more, and ten times as detailed images. At the moment one has to find smart and creative solutions to get the maximum out of the hardware at hand. Out of necessity this requires work on a fairly technical level, dealing with and exploiting hardware and software implementation. I don’t mind. Ever since I started using MSP I have been thinking of sound as a stream of samples and dealt with the maths implied in digital sound processing. It is the same way when dealing with video: I find it more interesting to deal with images as a bunch of pixels rather than working on abstract image processing simulating analogue photo techniques.
One last thing: I’ve made a custom ramp object that differs from the Max objects line and bline. When ramping to a new value, the ramp takes place in a fixed amount of time, the same way as for line. But it only outputs the current value in response to a received bang, sort of the same way as bline. It uses the CPU clock to get the current time, thus avoiding problems if the Max clock drifts out of sync. The benefit of the object is that I can ramp images to new settings in a fixed amount of time. When using bline, transitions typically get slower if the frame rate drops. Compared to using line, I will always get the most updated value of the parameter when the frame is calculated. I have also added several ramp functions in addition to the straight line: cosine, tanh, sinh and exponential. I haven’t had time to include it in tl.objects yet, but I will do so as soon as I get the time, probably not until the beginning of December.
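The idea behind the object can be modelled roughly like this (a hypothetical Python sketch of the behaviour, not the actual Max external; class and method names are mine, and only two of the curve shapes are shown):

```python
import math
import time

class Ramp:
    """Clock-based ramp: set a target and a duration, then query ("bang")
    for the current value whenever a frame is computed. Progress is derived
    from the system clock, so the ramp always completes in the requested
    time even if bangs arrive irregularly or the frame rate drops."""

    def __init__(self, value=0.0, clock=time.monotonic):
        self.clock = clock              # injectable, for testing
        self.start_value = value
        self.target = value
        self.start_time = clock()
        self.duration = 0.0
        self.curve = "linear"

    def ramp_to(self, target, duration, curve="linear"):
        self.start_value = self.bang()  # start from wherever we are now
        self.target = target
        self.start_time = self.clock()
        self.duration = duration
        self.curve = curve

    def bang(self):
        if self.duration <= 0.0:
            return self.target
        t = (self.clock() - self.start_time) / self.duration
        t = min(max(t, 0.0), 1.0)
        if self.curve == "cosine":      # smooth ease in and out
            t = 0.5 - 0.5 * math.cos(math.pi * t)
        elif self.curve == "tanh":      # steep middle, flat ends
            t = 0.5 + 0.5 * math.tanh(4.0 * (t - 0.5)) / math.tanh(2.0)
        return self.start_value + (self.target - self.start_value) * t
```

If frames arrive late, the next bang simply lands further along the ramp, so the transition still finishes on schedule rather than stretching out.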
All video processing will be organised in a number of custom Jamoma modules to simplify high-level control of parameters.
This must be the techiest blog entry so far…
#P user gswitch2 447 302 39 32 1 0
#P user gswitch2 447 179 39 32 0 0
#P button 476 113 15 0
#P button 458 263 15 0
#P button 457 153 15 0
#P window setfont "Sans Serif" 9.000
#P number 83 91 35 9 0 0 0 3 0 0 0 221 221 221 222 222 222 0 0 0
#P window linecount 1
#P newex 387 338 52 196617 r dothedo
#P newex 303 93 52 196617 r dothedo
#P newex 105 158 52 196617 s dothedo
#P user ubumenu 275 68 141 196617 0 1 1 0
#X add 0 - only update videoplane
#X add 1 - update texture
#X add 2 - update movie
#X prefix_set 0 0 0
#P newex 275 117 38 196617 gate 2
#P message 497 326 34 196617 reset
#P newex 494 348 84 196617 jit.gl.handle test
#P user jit.fpsgui 24 223 60 196617 0
#P user jit.pwindow 493 175 82 62 0 1 0 0 1 0
#P toggle 36 91 15 0
#P newex 36 136 149 196617 t b b erase
#P newex 36 113 57 196617 qmetro 33
#P newex 36 199 171 196617 jit.gl.render test @ortho 2 @sync 0
#P newex 36 262 150 196617 jit.window test @size 400 300
#P newex 476 374 255 196617 jit.gl.videoplane test @scale 1.333 1.000 1.000 @automatic 0
#P newex 476 263 154 196617 jit.gl.texture test @automatic 0
#P message 538 124 30 196617 read
#P newex 476 151 105 196617 jit.qt.movie 320 240
#P fasten 17 0 3 0 392 367 481 367
#P connect 2 0 20 0
#P connect 2 0 23 1
#P connect 23 1 3 0
#P connect 22 1 2 0
#P connect 0 0 19 0
#P connect 0 0 22 1
#P connect 0 0 9 0
#P connect 21 0 0 0
#P connect 18 0 6 1
#P fasten 7 0 10 0 41 158 29 158
#P connect 7 0 5 0
#P fasten 7 2 5 0 179 183 41 183
#P connect 11 0 3 0
#P fasten 13 0 2 0 280 249 481 249
#P fasten 13 1 0 0 308 142 481 142
#P connect 16 0 13 1
#P connect 7 1 15 0
#P connect 14 0 13 0
#P connect 12 0 11 0
#P fasten 1 0 0 0 543 145 481 145
#P connect 6 0 7 0
#P connect 8 0 6 0
#P window clipboard copycount 24
The interactive installation “The art of Walking on Water” by Jana Winderen, myself and Jørgen Træen will be exhibited at Huddersfield Art Gallery as part of Huddersfield Contemporary Music Festival. This year there is a strong presence of Norwegian composers and musicians at the festival. The installation will be up for three days starting this coming Saturday, and Jana Winderen will be present while the installation is running.
Unfortunately the artistic crediting is incomplete on the web pages of the festival; only Jana is mentioned. Neither did Ny Musikk get it right in their press release today. I don’t understand why it is so difficult for producers to do the artistic crediting properly.
The Musical Gestures Project at the University of Oslo has established a collaboration with the Input Devices and Music Interaction Laboratory, McGill University. December 5-7 they are holding the first in a series of UiO-McGill workshops, focusing on basic issues of gesture definitions and gesture capture/representation. More info here.
For those who cannot attend the whole workshop but are interested in getting an overview of the field, there will be two open guest lectures:
Monday 5 December 14:00-16:00
Musical Gesture Research at McGill
Marcelo M. Wanderley, Head of Music Technology Area
Schulich School of Music, McGill University, Montreal, Canada
Wednesday 7 December 14:00-16:00
Laban Movement Analysis
André Austvoll, dancer, choreographer and movement analyst
I’m a slow adopter of new technology, in the sense that I want to spend time with it in order to get to know it thoroughly, how it works and what kind of results it can produce, before I can really get going with it.
A little more than a month ago Electrotap released Hipno, a set of audio plug-ins based on the Pluggo architecture. So far I have only had the time to investigate two of them, Amogwai and Brita. They both seem able to produce interesting results. Brita is basically a spectral noise gate. This patch gives a fairly straightforward illustration of how it works technically. The gate can also be inverted, so that only spectral bands with little energy are let through. Brita uses a smoothing spectral amplitude follower that can prevent bands from jumping in and out all the time. Noise reduction is one obvious use of the plug-in, but it can also create some interesting and more abstract effects. If the response is set to a high value, the filter takes long to adjust to new values and often creates some interesting results along the way. If Brita is used to process audio in Peak, this might give some surprises, as the response memory seems to be cleared each time audio processing starts. With high response values I’ve instead had to use radiaL if I want to record the effect after Brita has reached a state of equilibrium, as radiaL can keep the source file looping while Brita adjusts.
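The principle of a spectral gate with a smoothed amplitude follower can be sketched like this (a rough Python model of the behaviour as described above, not Brita's actual implementation; in a real plug-in the band magnitudes would come from an FFT, here they are passed in directly, and all names are mine):

```python
class SpectralGate:
    """Per-band noise gate with a one-pole smoothed amplitude follower.

    Each band passes only while its smoothed level exceeds the threshold;
    with `invert` set, only quiet bands are let through. A higher
    `response` smooths more, so bands take longer to open or close and do
    not chatter in and out on every frame.
    """

    def __init__(self, bands, threshold, response=0.9, invert=False):
        self.levels = [0.0] * bands     # smoothed level per band
        self.threshold = threshold
        self.response = response        # 0 = instant, closer to 1 = slower
        self.invert = invert

    def process(self, magnitudes):
        out = []
        for i, mag in enumerate(magnitudes):
            # one-pole lowpass on the band magnitude
            self.levels[i] = (self.response * self.levels[i]
                              + (1.0 - self.response) * mag)
            is_open = self.levels[i] >= self.threshold
            if self.invert:
                is_open = not is_open
            out.append(mag if is_open else 0.0)
        return out
```

With a high `response` the smoothed levels lag far behind the input, which is exactly why the filter takes so long to settle, and why a cleared response memory on restart changes the result.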
Brian Eno apparently has three different installations running here at the moment, according to a mail on the nervenet mailing list.
Tomorrow a non-stop program of muzak, art muzak, ambient, easy listening etc. will be presented at Galleriet, one of the major shopping centres in Bergen. Press release below.
L Y D G A L L E R I E T   P R E S E N T S

Kunstmuzak Inc. – elevator music with soul

Place: The Galleriet shopping centre in Bergen

Time: 25 November 2005, 10-19.

Talk at 19:00 by Erlend Hammer and Steinar Sekkingstad on the 1st floor of Galleriet.

Contact: Erlend Hammer – 91746702, Jørgen Larsson – 48237888.

Kunstmuzak for shopping Bergeners

On Friday 25 November the Galleriet shopping centre will be turned into a sound gallery. The newly founded artist group Lydgalleriet and the shopping centre of the same name give the city’s shopping citizens a unique sound experience in the middle of the Christmas rush.

Kunstmuzak Inc. is Lydgalleriet’s very first joint project, and the shopping centre’s newly installed sound system will get a truly unique maiden voyage through the wonderful world of art muzak.

In recent months there has been much talk about visual pollution in public space, and Bergen has just made a conscious decision to minimise such pollution. In the shadow of the advertising billboards there is also another public sphere, namely the one concerning which sounds surround us at any given time. Lydgalleriet now wants to draw attention to our sonic surroundings through Galleriet’s newly installed sound system. Is it possible to influence behaviour by means of sound, and how can one create sound environments that make everyday life more pleasant? Is it possible to use a shopping centre as a venue for sound art?

These are the kinds of questions Kunstmuzak Inc. – elevator music with soul will try to find out.

The project is built around various kinds of sound that in one way or another address the phenomenon of ambient sound, a topic that has been treated frequently throughout the music theory of the last century. From Erik Satie via the elevator music of the American Muzak corporation to the ambient electronica of the nineties, music as a sonic backdrop is something many have taken an interest in. It is often dismissively called background music, and although much of this music stands up to listening in more concentrated surroundings, we will investigate how it actually works if we deliberately try to turn it into precisely that: background music.

We present examples of slickly produced easy listening side by side with contemporary music and recent electronic music. We will try to probe the limits of what fits into this kind of setting. What happens, for instance, if a piece of emotionally charged film music is played several times in a row? Will Galleriet’s visitors experience emotional insights and hurry off to mend broken relationships? The hope is that it will be possible to find a balance that works as a source of aesthetic experiences without interfering with the actual errands the listeners are out on. In collaboration with Galleriet we welcome you to a different kind of shopping day, and a different way of hearing boundary-breaking, genre-crossing sound art.

Lydgalleriet is a newly founded artist group consisting of people affiliated with Ny Musikk Bergen, Norske Billedkunstneres Forening Hordaland, Bergen senter for elektronisk kunst, KHiB Avd. Kunstakademiet and the University of Bergen. The group is working to get a gallery dedicated to sound art built. In the meantime the group carries out various presentation projects.
Tomorrow the exhibition Tracing Spaces opens at Bergen kunsthall, featuring works by Doris Frohnapfel, Karen Kipphoff & Trond Lossius, and Dan Mihaltianu & Jeremy Welsh.
…might be the time for building installations from really big amounts of data.
Tomorrow the artists involved in the Tracing Spaces exhibition, currently running at Bergen kunsthall, will give a presentation to students at Kunsthøgskolen i Bergen.
New York-based artist and composer Fast Forward is currently teaching as a guest lecturer at the Dept of Fine Art. On Thursday evening, Dec 1 at 8pm, he will perform ‘Panasonic’, a collection of works for the Trinidadian steel pan. Far from the style of island music, Forward’s compositions are minimal, sequential and sometimes severe. The concert will be performed acoustically and will be his only concert in Scandinavia this year!
Today I’ve been in Oslo giving a presentation at Nasjonal forskerskole – tekst bilde lyd og rom, a forum for PhD students in the humanities. Among the other lecturers were Siri Meyer, chairman of the board of the Programme for Research Fellowships in the Arts, and Tone Hansen, who entered the fellowship program at the same time as me. I presented some thoughts on art-based research and development, based on my own project for the fellowship.
Later on Jon Bing gave a really interesting talk focusing on the relationship between art, culture and technology. E.g. the “Golden Age” of writers in Norway (Ibsen, Bjørnson etc.) happened at the same time that the steam engine came into widespread use, as well as the invention of paper based on wood. Both were necessary to be able to produce and transport books at a price that made them affordable to a wider public.
Tomorrow evening at 19:00 Justin Bennett will give a presentation of his own work at Landmark.