One of the things I love about the Christmas holiday is that I am often able to set a few days aside for larger tasks that I don’t manage to find time for during the rest of the year. These often serve as long-term investments: efforts that do not address an imminent deadline, and that do not necessarily pay off immediately. It might be reading up on topics, developing or learning new software, or something else.
This year I have been revamping my studio. The sound absorbers that have been lying around for years are finally up on the walls, and I have set up a 12-speaker 3D ambisonics rig. I still have some calibration to do before I am all set to go, but this definitely feels good.
One of the nice things about developing open-source software is seeing where it ends up being used. When ambisonics gets used in advertisements for wild gorilla preservation lodges in Uganda, it’s clearly becoming accessible beyond the electro-acoustic music camp.
The Amsterdam-based music and audio production company Spook.fm has made SpookSync3D, a utility for syncing view direction and playback information between the 360 video player Kolor Eyes and Reaper ambisonic projects using Ambisonic Toolkit for Reaper. The above video gives a clear and thorough presentation of the workflow when designing ambisonic sound for 360 virtual reality using Reaper and Ambisonic Toolkit.
Spook.fm has collaborated on several projects with VR Gorilla, another Amsterdam-based studio specialising in virtual reality production. One of the projects they have worked on, the Virtual Gorilla app for Android, is available in the Google Play Store.
I’ve been experimenting lately with what might be the best way to do noise reduction (denoising) of ambisonic field recordings. In some of the recordings from very quiet locations, preamp noise is noticeable. I’m not interested in reducing any of the background noise that is part of the recorded ambience itself; I just want to reduce the amount of hiss introduced in the process of recording.
The recordings are done using a SoundField SPS200 mic and a Sound Devices 788T recorder. The resulting recordings are A-format, and I use the SoundField SPS200 plug-in to convert them to B-format. Denoising is done using iZotope RX. RX4 is not able to deal with multichannel files, so I have to split the 4-channel recording into two stereo tracks, do noise reduction on them, and then merge back into four channels again. However, there are at least three different approaches to this:
Denoise the original A-format recordings, then convert the processed files to B-format using the SPS200 plug-in.
Denoise the converted B-format recordings.
Decode the B-format files to A-format using the BtoA decoder from Ambisonic Toolkit for Reaper, remove noise, merge, and then re-encode to B-format using the ATK AtoB encoder.
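For reference, the matrixing at the heart of these A-format/B-format conversions can be sketched as below. This is only an illustration: the capsule naming, signs and scaling are assumptions on my part, and the actual SPS200 plug-in and ATK transforms also apply frequency-dependent capsule corrections that a plain matrix omits.

```python
# Illustrative first-order A-format -> B-format matrixing for a
# tetrahedral mic. Capsule order assumed: left-front-up (LFU),
# right-front-down (RFD), left-back-down (LBD), right-back-up (RBU).

def a_to_b(lfu, rfd, lbd, rbu):
    """Convert one frame of A-format samples to B-format (W, X, Y, Z)."""
    w = 0.5 * (lfu + rfd + lbd + rbu)  # omnidirectional pressure
    x = 0.5 * (lfu + rfd - lbd - rbu)  # front-back figure-of-eight
    y = 0.5 * (lfu - rfd + lbd - rbu)  # left-right figure-of-eight
    z = 0.5 * (lfu - rfd - lbd + rbu)  # up-down figure-of-eight
    return w, x, y, z
```

Because the matrix is its own inverse (up to scaling), the same structure underlies both the BtoA decode and the AtoB re-encode in option three.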
I have done recordings with the Sound Devices recorder at various input levels with no mic connected. The resulting hiss is very similar to the noise that I want to remove, and I use these recordings to train the Denoiser. The resulting curves are stored as a set of presets.
After several days of experimenting, and a lot of useful input from the sursound list, it seems to me that processing the B-format signals alters some of the spatial information. The resulting spatial image appears to me to be more unstable and less spatially articulated. If the Denoiser algorithm does alter phase, I suspect that it might also impact the original A-format recordings negatively. So I am ending up with the last option above.
One benefit of this is that I can do it all in Reaper, and I don’t have to manually split and merge sound files. It is still a laborious process: on my four-year-old laptop, rendering happens at 1.1× realtime speed, so there is a lot of waiting.
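For the approaches that do require manual splitting, the split step itself is simple channel de-interleaving. A minimal sketch using Python’s standard-library wave module (the file names and helper are hypothetical; in practice a sound editor or DAW would do this):

```python
import wave

def split_4ch(src_path, out_front, out_rear):
    """Split a 4-channel WAV into two stereo WAVs:
    channels 1-2 to out_front, channels 3-4 to out_rear."""
    with wave.open(src_path, "rb") as src:
        assert src.getnchannels() == 4, "expected a 4-channel file"
        width = src.getsampwidth()              # bytes per sample
        rate = src.getframerate()
        frames = src.readframes(src.getnframes())
    frame_size = 4 * width                      # one frame = 4 samples
    front, rear = bytearray(), bytearray()
    for i in range(0, len(frames), frame_size):
        front += frames[i : i + 2 * width]              # ch 1-2
        rear += frames[i + 2 * width : i + 4 * width]   # ch 3-4
    for out_path, data in ((out_front, front), (out_rear, rear)):
        with wave.open(out_path, "wb") as dst:
            dst.setnchannels(2)
            dst.setsampwidth(width)
            dst.setframerate(rate)
            dst.writeframes(bytes(data))
```

Merging the two denoised stereo files back to four channels is the same loop in reverse.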
This blog post was done while waiting for a rendering that took close to 50 minutes to complete. Duh!
In January 2016 the interactive vocal installation “The sound in-between” by Line Horneland can be experienced in Sola ruinkirke, a medieval church outside Stavanger. The project combines vocal soundscapes and spatiality.
Sound is central to human life, and the aspect of sound is implicit in spatiality. The project thematizes space and sound in an art context, as a place for the listener’s prereflective experiences.
The installation is part of an ongoing master project at Stord/Haugesund University College, with me serving as external supervisor.
The installation can be experienced in week 2, 2016:
In a few hours an exhibition with works by Jon Arne Mogstad, Jeremy Welsh and myself opens at Heimdal kunstforening in Trondheim. If you are around, please join us! The exhibition will be on through February 28.
On Saturday 16 January 2016 at 13:00, Heimdal Kunstforening has the pleasure of inviting you to the opening of an exhibition by the artist group LMW: Trond Lossius, Jon Arne Mogstad and Jeremy Welsh. Three different artists working in three different media: video, painting and sound.
The artist group LMW was formed in 2004 with an exhibition at Kristiansands Kunsthall. That exhibition, Dialog/Overgang, became the first in a series that lasted until 2011, with presentations at galleries, museums, theatres and festivals. As the title suggests, the whole project was about dialogue between three artists with three different forms of artistic practice, and about exploring the transitions between them. The Dialog/Overgang series consisted of installations mixing painting, photography, video, digital images, silkscreen prints, light and sound. The last project in the series was presented as part of the exhibition BGO1 at Bergen kunstmuseum KODE in 2010–11. One of the exhibitions in the series took place at Trondheim Kunstforening in 2005.
The exhibition at Heimdal Kunstforening is not a continuation of Dialog/Overgang, but it is nevertheless a new dialogue between the artists. After several years of work on large public art commissions, Jon Arne Mogstad has spent some months in the studio making a series of large paintings in which he explores textile dye as a medium. The transparency and fluid quality of the material open new possibilities for further developing the formal language of abstract painting, and build on Mogstad’s many years of practice as a painter.
For the occasion, Jeremy Welsh has made a short digital animation entitled New Rose, based on these paintings by Jon Arne Mogstad. The video is both a commentary on and an examination of the richly detailed surfaces of the paintings, and at the same time a link back to the beginning of Dialog/Overgang, when Welsh made an animation that was projected directly onto one of Mogstad’s paintings.
The Atmospherics is an ongoing project by Lossius and Welsh, based on field recordings in the landscape. The sound is recorded with an ambisonic microphone, which captures sound from all directions at once and forms the basis for creating installations with surround sound. The video is shot with a high-resolution (4K) camera that captures great detail and subtle nuance. Both the sound and the video material are processed during editing to create installations that are associative and reflective rather than documentary. Both artists are deeply engaged with contemporary music, and this joint work can be said to be a form of composition. The Atmospherics has so far been presented at several venues, including Bomuldsfabriken Kunsthall in Arendal and Galleri 3,14 in Bergen. Later in 2016, the largest installation so far will be shown at Sogn og Fjordane Kunstmuseum in Førde. The project has been supported by Norsk Kulturråd, Billedkunstnernes Vederlagsfond, Bergen Kommune and Trondheim Kommune.
From 2004 to 2009 all three artists were affiliated with Kunst og Designhøgskolen i Bergen. These days Mogstad and Welsh are professors at Kunstakademiet i Trondheim, NTNU, and Trond Lossius is head of research at Bergen Senter for Elektronisk Kunst.
Yesterday I went to Øystese to set up a new installation version of Muelheim an der Ruhr, August 2013 for Vestlandsutstillingen. This time it is presented as a living room installation. Over the next half year it will tour Kabuso in Øystese, Sogn og Fjordane kunstmuseum in Førde, Kube in Ålesund and Kunsthall Stavanger.
We’ve had the most gorgeous winter weather for the past few weeks, and the drive to Øystese was really beautiful.
And just to make the day perfect, Kabuso turned 10 years old yesterday. That was duly celebrated.
For both of my recent installations, at Heimdal kunstforening (opened last Saturday) and Vestlandsutstillingen (opens tomorrow), I use surround sound synced to videos.
Initially I was looking into possibilities for using Blu-ray for playback, but it turned out that Blu-ray authoring is difficult, expensive, and, with tight deadlines, possibly risky, in particular when going beyond 5.1 surround. Authoring 7.1 surround would require the rather expensive DTS-HD Master Audio Suite for lossless encoding of audio, as well as the discontinued Adobe Encore program, and even then it might or might not turn out to be possible.
Instead I ended up using Mac minis for playback. It turns out that QuickTime supports surround sound with up to 24 channels. I did a bit of googling to figure out how to do this, and it seems to work well. In the above screencast I show how to go about making a video with 6 audio channels (L – R – SL – SR – SBL – SBR), and how to set up OS X to play them back properly.
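QuickTime expects the six channels in one interleaved audio track, so the per-channel files exported from the DAW first have to be woven into a single multichannel file. As an illustration of that interleaving step, here is a sketch using Python’s standard-library wave module (the file names and helper function are hypothetical, and the L – R – SL – SR – SBL – SBR ordering is the one assumed above):

```python
import wave

def merge_to_6ch(mono_paths, out_path):
    """Interleave six mono WAVs (L, R, SL, SR, SBL, SBR)
    into a single 6-channel WAV."""
    srcs = [wave.open(p, "rb") for p in mono_paths]
    try:
        width = srcs[0].getsampwidth()
        rate = srcs[0].getframerate()
        n = min(s.getnframes() for s in srcs)   # truncate to shortest
        data = [s.readframes(n) for s in srcs]
    finally:
        for s in srcs:
            s.close()
    out = bytearray()
    for i in range(n):                          # one output frame per input sample
        for ch in data:
            out += ch[i * width : (i + 1) * width]
    with wave.open(out_path, "wb") as dst:
        dst.setnchannels(6)
        dst.setsampwidth(width)
        dst.setframerate(rate)
        dst.writeframes(bytes(out))
```

The resulting file can then be muxed with the picture and assigned channel labels as shown in the screencast.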
Thanks to Gisle Tveito for getting me on the right track!
by carte blanche is a performance created by the whole company. By removing the choreographer, the ensemble challenges the practice and tradition of the contemporary dance performance and opens up for new interpretations of both concept and aesthetics.
Unfortunately the Ambisonic Toolkit website has been down for several weeks. This is due to server problems that have turned out to be hard to resolve. Hopefully it will be fixed in the not too distant future.
In the meantime, direct links to the latest version (1.0.0.b6) can be found here: