
Tests with very wide screen videos

June 10, 2010

I have previously collaborated with Karen Kipphoff on two works: the installation Staged Bodies (Bergen Kunsthall, 2005, and Høstutstillingen, Kunstnernes Hus, 2006) and the stage work Floating Characters (BiT Teatergarasjen, 2007).

Both of these projects used high-resolution wide-screen video projections. For Staged Bodies the image was created using two projections side by side, with a total resolution of 2048×407 pixels. A limitation of this setup was a visible line at the seam between the two projectors. For Floating Characters we developed the system further, adding edge-blending to create a seamless transition in the overlap region between the two projectors.
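
Not the actual implementation we used, but as a minimal sketch of the idea: an edge blend attenuates each projector's output across the overlap so that the summed light stays roughly constant. A raised-cosine ramp with gamma correction is a common choice; the overlap width and gamma value below are illustrative assumptions.

```python
import numpy as np

def edge_blend_ramp(overlap_px: int, gamma: float = 2.2) -> np.ndarray:
    """Alpha ramp for the right edge of the left projector.

    The right projector uses the mirrored ramp, so the two
    gamma-corrected contributions sum to roughly constant brightness.
    """
    t = np.linspace(0.0, 1.0, overlap_px)
    # Raised cosine falling from 1 to 0 across the overlap region.
    alpha = 0.5 * (1.0 + np.cos(np.pi * t))
    # Compensate for projector gamma so perceived intensities add up.
    return alpha ** (1.0 / gamma)

left_edge = edge_blend_ramp(128)      # left projector, right edge
right_edge = left_edge[::-1]          # right projector, left edge
```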

At the time we did not find any standard video editing tools (such as Final Cut) that would enable us to work with the desired format. Instead the video playback system was developed using Max/Jitter with OpenGL GPU acceleration, wrapped up as a number of Jamoma modules.

The Igloo 360-degree digital projection environment.

This fall we will be doing two new projects as part of the ICCI 360 Festival in Plymouth. The festival will present an innovative programme of screen-based multimedia and performance work from leading international 360° panoramic film makers, photographers and designers, as well as many local and national artists and organisations.

The festival arena, consisting of a huge 20 m diameter dome incorporating a 62 m × 6 m projection screen (the size of 7 double-decker buses!) and a performance space, will use cutting-edge digital technology with surround sound and high-resolution projection facilities to provide an immersive 360° panoramic digital experience for audiences of all ages.

Inside the Igloo. From a previous screening.

The total format of the required video is quite impressive: five video projectors with a combined resolution of either 9600×1080 or 5120×768 pixels. Following from our previous work, we would like to treat all five projections as one canvas rather than authoring five parallel and independent channels. This makes it much easier to work with images that stretch across several projectors, or that move and rotate across the canvas.
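
To make the arithmetic explicit: each projector shows one fifth of the canvas, i.e. slices of 1920×1080 or 1024×768 pixels. A trivial sketch of the canvas-to-projector mapping (the helper function is hypothetical, purely for illustration):

```python
# Split one wide canvas into per-projector slices (illustration only).
CANVAS_W, CANVAS_H = 9600, 1080       # alternatively 5120 x 768
NUM_PROJECTORS = 5
SLICE_W = CANVAS_W // NUM_PROJECTORS  # 1920 px (or 1024 px) per projector

def projector_viewport(index: int) -> tuple:
    """Return (x, y, width, height) of projector `index` on the canvas."""
    return (index * SLICE_W, 0, SLICE_W, CANVAS_H)

for i in range(NUM_PROJECTORS):
    print(i, projector_viewport(i))   # (0, ...), (1920, ...), (3840, ...) etc.
```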

Over the last two days I have been checking out possible solutions for developing the video material. The most obvious option is Final Cut. Although its HD support has improved a lot in recent years, it still does not seem to cut it: from the tests I have done so far, the highest sequence resolution appears to be 4096×4096 pixels, matching the 4K RAW format of the RED ONE camera.

I have also looked into Avid Media Composer. More work remains to be done here, but so far I have found no indication of support for resolutions beyond standard HD (1920×1080). The upcoming v5 seems to support RED as well.

Prototype for rendering to 9600×1080 in Max.

The solution I have a working prototype for is Max and Jitter. Although computationally demanding, Jitter supports float32 matrices at 9600×1080, and I am able to record them using the jit.qt.record object.
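
To get a sense of why this is computationally demanding, a back-of-the-envelope calculation (assuming a four-plane ARGB float32 matrix, the usual Jitter layout):

```python
# Approximate cost of a float32 ARGB matrix at 9600x1080.
width, height = 9600, 1080
planes = 4            # ARGB
bytes_per_cell = 4    # float32

frame_bytes = width * height * planes * bytes_per_cell
print(f"per frame: {frame_bytes / 2**20:.1f} MiB")          # ~158.2 MiB
print(f"at 25 fps: {frame_bytes * 25 / 2**30:.2f} GiB/s")   # ~3.86 GiB/s
```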

Initially it seemed as if movies would not play back if the width exceeded 8192 pixels. In QuickTime Player the videos simply showed up as black, and for a while I wondered if I was banging my head against a hard-coded resolution limit in either QuickTime or the codecs. However, using Max for playback as well, I am able to play the videos back, as illustrated below.

Prototype for video playback in Max.

This implies that it might be possible to generate the video material in Max, record it to one large movie, and later cut it into five movies, one for each projector. The format is of course way too demanding to deal with in real time, but by scripting cues it might be possible to set up a system that can be used to author the video sequence, which is then rendered afterwards.
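
The final split into per-projector movies could be done outside of Max. As a sketch of the idea using ffmpeg's crop filter (ffmpeg is not part of the workflow described above, and the filenames and codec choice are placeholders):

```python
import subprocess

SRC = "master_9600x1080.mov"   # placeholder filename
SLICE_W, HEIGHT = 1920, 1080
NUM_PROJECTORS = 5

for i in range(NUM_PROJECTORS):
    # Crop a 1920 px wide slice starting at x = i * 1920, full height.
    subprocess.run([
        "ffmpeg", "-i", SRC,
        "-vf", f"crop={SLICE_W}:{HEIGHT}:{i * SLICE_W}:0",
        "-c:v", "prores",      # placeholder codec choice
        f"projector_{i}.mov",
    ], check=True)
```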

It seems like a very good idea to try something similar to “offline editing” in Final Cut:

Sometimes called the offline or story edit, this is the stage when a program’s creative content is explored, shaped, and eventually refined to the point where the cut can be locked in preparation for finishing. This process is referred to as an “offline” edit because, for disk space and performance reasons, media is often ingested at a significantly lower quality than the final master will be finished at. Although many editors still follow this workflow (especially for formats at high resolutions such as 2K and 4K that are processor-intensive and require a lot of disk space), it’s becoming increasingly common for programs to be ingested and edited at the final level of quality from the very start.

From Final Cut Studio Workflow
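
Following that logic, one could edit against down-scaled proxies and only conform to full resolution at the end. A hypothetical sketch for generating a quarter-size proxy, again using ffmpeg (filenames are placeholders):

```python
import subprocess

SRC = "master_9600x1080.mov"   # placeholder filename
PROXY = "proxy_2400x270.mov"   # quarter size, easy to scrub and edit

# Scale to a quarter of the original dimensions for the offline edit;
# editing decisions are later conformed against the full-size master.
subprocess.run(
    ["ffmpeg", "-i", SRC, "-vf", "scale=iw/4:ih/4", PROXY],
    check=True,
)
```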

There are some major challenges that need to be resolved in order to go down this route. Possibly the biggest is that I will need to control all timing processes in Jamoma (in particular ramps and the triggering of cues) using a non-real-time clock, so that time progresses according to the progression of frames rather than in real time. This has been on the Jamoma TODO list for several years (I have needed it in past projects as well), but I have not yet looked into the SDK for timing objects in Max 5, where everything to do with tempo was heavily revamped.
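
The idea, sketched outside of Max and Jamoma (this is not the Jamoma API, just an illustration of frame-driven time): instead of reading a wall clock, ramps are evaluated against an explicit frame counter, so that rendering one frame at a time yields deterministic output no matter how long each frame takes to compute.

```python
class FrameClock:
    """Non-real-time clock: time advances one frame step per tick."""
    def __init__(self, fps: float = 25.0):
        self.dt = 1.0 / fps
        self.now = 0.0

    def tick(self) -> float:
        self.now += self.dt
        return self.now

class Ramp:
    """Linear parameter ramp evaluated against clock time, not wall time."""
    def __init__(self, start, end, t0, duration):
        self.start, self.end, self.t0, self.duration = start, end, t0, duration

    def value(self, t: float) -> float:
        u = min(max((t - self.t0) / self.duration, 0.0), 1.0)
        return self.start + u * (self.end - self.start)

clock = FrameClock(fps=25.0)
fade = Ramp(0.0, 1.0, t0=1.0, duration=2.0)   # fade in from t=1s to t=3s
for frame in range(100):                      # four seconds of material
    t = clock.tick()
    brightness = fade.value(t)
    # ...render the frame with this brightness and write it to disk...
```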

 

Update (2010-07-14):

Karol Kwiatek points out that Adobe After Effects is capable of dealing with videos of this resolution.