The video processing I’m working on for the upcoming installation at Hordaland kunstsenter turned out to be too computationally expensive, resulting in a poor frame rate. The main bottleneck seems to be upsampling images from low resolution (e.g. 80×60 pixels) to higher resolution with interpolation using Jitter’s jit.matrix. In addition I had problems finding good ways of doing transitions from one video texture to another using the approach initially planned.
Testing various alternative approaches, it turned out to be a lot better to do this by mapping matrices onto OpenGL video planes. Even on my almost four-year-old 1 GHz TiBook with a fairly poor graphics card it seems to take advantage of hardware acceleration, making it a lot faster, and it is also a lot easier to crossfade between images of different resolutions in a convincing way. Much of today has been spent developing a Jamoma module for mapping incoming video as a texture onto an OpenGL video plane using jit.gl.videoplane.
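The same idea can be sketched inside Max’s [js] object using the JitterObject API. This is only a rough illustration, not the actual Jamoma module: the context name `myctx` and the variable names are placeholders, and the crossfade here is just two planes blended via their color alpha.

```javascript
// Sketch for Max's [js] object — assumes a jit.gl.render context named "myctx".
// Incoming matrices are uploaded as textures; the GPU handles the upsampling.
var planeA = new JitterObject("jit.gl.videoplane", "myctx");
var planeB = new JitterObject("jit.gl.videoplane", "myctx");

planeA.blend_enable = 1;  // enable alpha blending so the planes can crossfade
planeB.blend_enable = 1;
planeA.interp = 1;        // hardware interpolation when scaling low-res input up

// Crossfade by fading the alpha of each plane's color in opposite directions.
function xfade(t) {       // t in 0..1
    planeA.color = [1, 1, 1, 1 - t];
    planeB.color = [1, 1, 1, t];
}

// Route an incoming matrix (e.g. from jit.qt.movie) onto the first plane.
function jit_matrix(name) {
    planeA.jit_matrix(name);
}
```

Because the interpolation and blending happen on the graphics card, the low-resolution matrices stay small on the CPU side, which is where the speedup comes from.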
Many of the Jitter OpenGL objects share the same attributes and messages, so I have started implementing all of the shared ones as an inspector, making them easily available to all OpenGL objects.
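For reference, these are some of the attributes common to the jit.gl.* drawing objects that such an inspector would expose — the exact set is my assumption, but they work the same whether sent to jit.gl.videoplane, jit.gl.gridshape or another OpenGL object:

```
position 0. 0. 0.      ; location in the GL scene
rotate 45. 0. 0. 1.    ; angle plus rotation axis
scale 1.333 1. 1.      ; e.g. stretch to a 4:3 aspect ratio
blend_enable 1         ; alpha blending on/off
depth_enable 1         ; depth buffering on/off
color 1. 1. 1. 0.5     ; RGBA, alpha usable for fades
```

Implementing these once in a shared inspector means each new OpenGL module only has to deal with its object-specific attributes.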
I end up doing the same kind of tweaking of material and technical solutions as for the Staged Bodies installation, in order to get the image quality as close as possible to what I want. Life on the bleeding edge…