As Gary Glitter would have said: It’s good to be back.
I might be backlogging on some stuff that was observed, found, or read during the vacation, or I might not.
The Pd Repertory Project aims to provide patches that can run selected pieces of the live electronic music repertory using the Pd program. Included works:
Cycling74 has announced the Jitter 1.5 upgrade. Here is a list of new features.
At the same time there’s a free upgrade to Max 4.5.5, which contains a number of new editing and productivity features.
I’ve just been setting up a Flickr account. This is one of the first two images uploaded, taken at La Defense in Paris in January this year.
My photos will be available at http://www.flickr.com/photos/lossius/
The NIME05 proceedings are now online. A quick look at the Table of Contents promises interesting reading, e.g. keynotes by Don Buchla on A History of Buchla’s Musical Instruments and Golan Levin on A Personal Chronology of Audiovisual Systems Research. There’s also a paper by Frederic Bevilacqua, Remy Muller, and Norbert Schnell on MnM: a Max/MSP mapping toolbox.
The U2 concert at Valle Hovin, Oslo, last week was my first encounter ever with the stadium rock live format. That was an interesting experience in several ways. The band was really good, and it’s fascinating to see the machinery backing them up. According to an article in VG or Dagbladet, each concert during the Vertigo tour costs approx. 13 million NOK. That would be enough to keep BEK running for 5 years…
According to the press they used about 100 monitors. I counted them, and it turned out to be 160, plus another 20 or so further down the stadium. All monitors looked the same, and as far as I could see they did not use dedicated subwoofers, but I guess a massive 8×10 wall of monitors at each side of the stage ends up functioning as one massive clustered sub. I suppose that’s somewhat similar to what I experienced with the 16-loudspeakers-in-a-row setup for Electrohype (on a much smaller scale though).
Apart from the four members of the band there were no other musicians on stage (or elsewhere, as far as I could tell). Everything seemed to be based on click tracks, and arrangements were fairly faithful to the originals from the records, with as few extra playback tracks as required. The upside of this was an extremely clean, tight, and consistent sound for all of the songs, regardless of whether they originated from the first records or the latest ones. On the other hand it was slightly awkward to see all four band members with nothing to do during the intros of some of the songs, as these were all playback of synths/sound processing. I can understand that they want to keep the stage “clean”, but it somewhat reduced the live feeling of the music. It’s kind of puzzling that some of the music stays automated this way, at the same time as at least 30 people are controlling lights and cameras for the visual part of the show.
Visually speaking the stage had a clean design: two large video screens on top of the monitor rows, and a matrix of lights in between forming the backdrop of the stage. My position wasn’t too far from the stage, but the band members still looked like small puppets, and I had to depend on the video screens to see what was going on. I guess that was the case for most of the audience. In a way it wouldn’t make much difference if they increased the audience from 40,000 to 160,000 by streaming sound and video to three more stadiums around the country.
In a way the video screens represented a visual magnification in the same way as the PA did for sound. But why is it that the visual amplification implied a much stronger alienation and detachment than the sound amplification? Is there a difference in this respect in how we perceive and interpret sound and image, or is it a result of cultural training? Am I so used to amplified sound that it becomes a transparent transmitter to me? Or does the fact that the music itself is based on electric instruments reduce the distance? At the same time we’re also used to images from TV, video, and the net always indicating “somewhere else” in time and/or space.
Apart from that, some gorgeous images were created as the cameras zoomed in on Bono in front of the light matrix. On video, the lights transformed into urban landscapes at night.
The things I enjoyed the most about the concert were the impression of the band having another fine day at the office, enjoying their work, and also how they dared to turn the focus away from themselves and towards more important issues at a point in the concert where they could have turned the crowd really ecstatic. Instead we got human rights.
The article below points to a topic that Eno has been preoccupied with for a long while. It was discussed in an interview from 1980, and in A Year in a letter to James discussing various ways of extending the human voice as a musical instrument. I’ve always thought that this is something that really became important to him after working with Bowie in the 70s. E.g. Scary Monsters (Eno didn’t contribute to that record) and Outside are tours de force in this respect.
BBC, JULY 28, 2005
ROXY MUSIC’S ENO MAKES SOLO RETURN

Brian Eno – one of the most influential producers of his generation – has told the BBC how the advent of new technology persuaded him to make his first solo vocal album for 15 years.

Eno’s career dates back to the 1970s, when he was a member of Roxy Music, and he is renowned as the creator of ambient music, popular in documentaries and advertising.

He told BBC World Service’s The Music Biz that he “gave up” writing songs for a long period after he “lost faith in the form” – a period lasting from the early 1990s until his new release, Another Day On Earth.

But recently he was inspired by new technology which enables voices to be used and manipulated in a way that had not previously been possible.

He describes these “auto-tuning devices” as “altering the gender and the pitch of the voice so that you could sound like a high-pitched female instead of a man with a cold – which is what I actually am.”

“So suddenly singing started to become interesting to me again.”

“Instead of the singer being somebody presenting their own personality, you could be like a playwright who invents a character and then has them play a part.”

Eno, who has collaborated with U2, Talking Heads, and David Bowie in a career spanning over 30 years, said that working on a vocal album has shed light on how important the instrumental role is.

“For me the recording studio is such an amazing invention,” he said.

He believes most people do not realise how a studio works, imagining that people “walk into a room with lots of microphones and sing and play – and that’s how the record is made”.

But Eno argues that this only happens with about 20% of records.

“They’re more like paintings – they’re made over a long period of time,” he said.

“They’re not single performances, and they’re made by people sticking things together, adjusting things and changing things.”

He also spoke about the “craft” of creating the record. While the first song on the album was made in less than a day, another was worked on periodically over six years.

“In order to ever release anything you have to say to yourself ‘by that date I will have a record out’ – otherwise I’d never release anything.”

“Sometimes you just happen to hit lucky and the ingredients all fall into place and you’ve got enough excitement to carry the thing through,” he said.

Eno has huge numbers of CDs containing tracks he has never released, and estimates he has only released four percent of everything he has recorded.

“I have a huge rejection rate – so when I die they’re going to have a party with all those posthumous albums,” said Eno.

The producer is now working with singer Grace Jones and Tony Allen [radiocitizen: formerly with Fela Kuti’s band], a drummer who Eno described as “the most important musician of the last 50 years”, who “invented” Afro-beat.

He also believes Arabic pop music will be the “next reggae”.

“It will be the next outside-type music that sweeps the culture,” he added. “Everyone will be listening to it, thinking it’s cool.”

For more information about this album from amazon.com go to…
I’ve spent the last week updating to Tiger as well as changing e-mail client from Entourage to Mail. My hard disc was pretty crammed and I wanted to redo the partitions, so I backed everything up and did a clean install. Boring boring boring.
So far I’m not impressed by either Mail or any of the supposedly great new features that are supposed to sell Tiger.
Spotlight might eventually become useful but it has a number of serious flaws at the moment. Most notable and annoying:
As for Mail, it might be that it takes a while getting used to. Here are some annoyances so far:
The best thing about Mail so far is that if it crashes, it remembers what mails were open when it is started again. But then again I’d rather prefer it not to crash. Also, the spell-checking works well.
I’ll give Mail a go for a while longer to see if the impression improves. If not, I hope that a simpler way of porting mails to Thunderbird will become available.
As for Dashboard/Widgets, it basically boils down to having a lot of not very important applications running in the background, taking up real and virtual memory estate. They are not supposed to do any processing when in the background, but some of them still keep running (check Activity Monitor). One of them, ToDo Tracker, also seems to have a serious memory leak. Having run for 2 days doing basically nothing, it had chewed up more than 2 GB of virtual memory. Checking the Console log, it seemed that it was in a debug state, creating huge logs about nothing. When I first started using Tiger it felt very slow and non-responsive compared to Panther. All of that improved substantially when I shut down that widget. A widget for quick notes and ToDos would be helpful, but it’s not gonna be this one.
Once I got rid of the ToDo Tracker widget and Spotlight had indexed all of my discs, the core of Tiger itself seems to be good, though. Having reduced the number of widgets to an absolute minimum (I’ve only got 512 MB of RAM installed at the time being), it also feels a bit faster than Panther. And I’m looking forward to the ability to use more than one sound card at the same time.
I’m starting to think that Mac OS X is no longer able to handle the Norwegian characters æ, ø, and å properly. They got screwed up when imported to the address book, and I see the same thing happening in other programs as well.
Just to end on a grumpy note, in keeping with the mood of the rest of the post: I’ve been reading lots of reviews online of Tiger as well as its various applications and features. The most silly thing about a lot of Mac users is that they are more concerned about what it looks like than how it functions. It’s really lame when reviewers start explaining how the scroll bar has changed colour-wise from blue to grey to a bluish grey. What do I care?
I’ve missed a good and intuitive way of monitoring sound levels in multi-speaker setups for a very long while. Stacking level meters in different ways, trying to resemble the actual setup, has sometimes worked and sometimes not. Today I’ve been creating a much better solution to the problem using OpenGL in Jitter. Each loudspeaker is represented as a sphere. The position of each loudspeaker can be set freely using spherical coordinates. The colour of each sphere indicates the sound level for that loudspeaker, roughly mapping -60 dB to 0 dB by turning from dark grey through bright green to yellow and orange, and turning red in case of clipping.
The system is easy to configure, and it’s easy to accommodate varying numbers of loudspeakers in two and three dimensions.
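For the curious, the dB-to-colour curve described above can be sketched in a few lines outside of Jitter as well. The breakpoint values below are my own assumptions, chosen to match the description (dark grey at -60 dB through bright green and yellow to orange at 0 dB, red on clipping); the actual patch does this inside Jitter:

```python
def lerp(a, b, t):
    """Linear interpolation between two RGB triplets."""
    return tuple(x + (y - x) * t for x, y in zip(a, b))

# Breakpoints (dB, RGB) are assumptions chosen to match the description:
# dark grey at -60 dB, bright green, yellow, and orange at 0 dB.
BREAKPOINTS = [
    (-60.0, (0.2, 0.2, 0.2)),
    (-30.0, (0.0, 1.0, 0.0)),
    (-12.0, (1.0, 1.0, 0.0)),
    (0.0,   (1.0, 0.5, 0.0)),
]

def db_to_rgb(db):
    """Map a level in dB to an RGB colour; red signals clipping."""
    if db > 0.0:
        return (1.0, 0.0, 0.0)          # clipping
    if db <= BREAKPOINTS[0][0]:
        return BREAKPOINTS[0][1]        # floor: dark grey
    for (d0, c0), (d1, c1) in zip(BREAKPOINTS, BREAKPOINTS[1:]):
        if db <= d1:
            t = (db - d0) / (d1 - d0)
            return lerp(c0, c1, t)
```

The same table-of-breakpoints idea transfers directly to a Jitter patch, where each sphere’s colour attribute would be set from the interpolated triplet.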
Once we get past the 0.2 release of Jamoma, we’re planning on creating modules for ambisonics spatialisation. This can easily be integrated as a Jamoma module.
I’ll be using various multi-speaker setups for two installations later this year: at Verftet USF by the end of September, in collaboration with Kurt Ralske, and at Bergen Kunsthall by the end of November, in collaboration with Karen Kipphoff.
In case you want to give it a go, you can download the patches as multispeaker_levels.txt and one_loudspeaker.txt. They were made in Max 4.5.5 and Jitter 1.5, and I’m also depending on the Lobjects by Peter Elsea. They might work with earlier versions of Max/Jitter as well, but I haven’t tested.
Update: A zipped archive can be found here.
pmpd is a collection of objects for Pd. These objects provide real-time simulations, especially of physical behaviours. pmpd can be used to create natural dynamic systems like a bouncing ball, string movement, Brownian movement, chaos, fluid dynamics, sand, gravitation, and more. It can also be used to create displacements, thus allowing a completely dynamic approach to Pd computing.
With pmpd, physical dynamics can be modelled without knowing the global equation of the movement. Only the cause of the movement and the structure involved are needed for the simulation. pmpd provides the basic objects for this kind of simulation. Assembling them allows the creation of a very large variety of dynamic systems.
These objects are designed to be used with Pd, a real-time graphical programming environment dedicated to audio signal processing. Pd allows the creation of new objects and is well suited to this kind of physical modelling. GEM is a Pd library dedicated to image processing. In the provided pmpd examples, GEM is used for visualising the movement.
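The mass/link idea at the core of this kind of simulation can be illustrated with a minimal sketch. This is not pmpd’s own code, just a toy damped mass-spring system integrated with explicit Euler steps, the kind of element pmpd lets you assemble into larger structures:

```python
# Minimal mass-spring-damper, the kind of building block pmpd assembles:
# a "mass" object integrates forces, a "link" object produces them.
def simulate(steps, dt=0.01, mass=1.0, k=10.0, damping=0.5, x0=1.0):
    """Explicit Euler integration of m*x'' = -k*x - damping*x'."""
    x, v = x0, 0.0
    trajectory = []
    for _ in range(steps):
        force = -k * x - damping * v   # spring + damper (the "link")
        v += (force / mass) * dt       # the "mass" integrates acceleration
        x += v * dt
        trajectory.append(x)
    return trajectory

# The oscillation decays towards rest:
path = simulate(5000)
```

Chaining many such masses with links between neighbours gives a string; a grid of them gives a membrane, which is roughly how the larger pmpd examples are built.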
I haven’t tested it yet, but pmpd seems to be able to work on physical modelling in ways that might resemble Genesis. Genesis is currently not publicly available and AFAIK only runs on Silicon Graphics. The Italian composer Giuseppe Gavazza is using Genesis. He had a residency at USF in Bergen a few years ago and is back for another residency right at the moment. He played me some music and sound last time he was visiting Bergen, and it was stunning.
I’ve never used artists’ sketchbooks. Back when I was studying composition, I would have a pile of loose sheets on top of the piano with random notes. At (ir)regular intervals, when I was tidying up the room, I would just throw it all away. Maybe I’d save two or three notes, but they would most likely be trashed the next time. Instead I’ve been depending on my imagination and memory.
I believe I have a strong long-term memory concerning what I’m working on and what I’ve been working on and thinking about in the past. That was trained while I was doing my master’s thesis in geophysics. That was the time I got more and more involved with music. Gradually I spent less and less time on my thesis and more and more time playing and composing. I trained myself to remember exactly where I left off the work on the thesis, so that next time I could pick up where I left off without having to spend time refreshing my memory. The master’s thesis stretched out in time, and instead of the expected 1 1/2 years I ended up using 4 years. At one point I had a 3/4 year break doing nothing but music, then walked in, sat down, and continued to debug and test the subroutine that I’d been struggling with the previous fall.
My imagination was trained studying counterpoint and orchestration. I read a book on counterpoint that emphasized the importance of being able to imagine the cantus firmus, mould it in memory, and then write it down. Similarly, when it progressed to first species counterpoint, second species, and so on, it kept challenging me to create as much of the counterpoint as possible without the aid of pen and paper. Orchestration, in a similar fashion, was useful for training the memory of what sounds are produced by various combinations of instruments, and an imagination of how new combinations would sound. For a composer this is an absolute prerequisite. You don’t have day-to-day access to musicians, and in general you have to do the whole composition without being able to properly listen to it along the way and adjust. A strong imagination hence becomes vital to avoid major disappointments when you’re finally able to test it with musicians, generally shortly before it’s going to be performed in public.
As I turned from composing for musicians to working on electronic sound, I’ve kept working along these lines. I’m quite dependent on listening to the sounds I’m producing, but at the same time those sounds tend to be very rough and not pleasing early in the development of a new work. I don’t like presenting it to others, in particular the artists I’m collaborating with, at too early a stage. While I might be able to imagine what kind of potential the sounds have for further development and refinement, they generally tend to sound dull, boring, and disappointing to others.
The early tryout phase of a project tends to produce a bunch of spaghetti Max patches while testing new techniques. But the closest I get to sketchbooks, and the only ones that I’m ever going back to, are this blog and my e-mail in- and sent-mail boxes.
I’ve just uploaded a new version of tl.objects, a bunch of Max/MSP/Jitter externals aimed at synthesis and whatever else I might be doing.
This version depends on Max 4.5, in particular for a number of features of the help files. Apart from that, the most important change is the addition of a bunch of new abstractions:
Iterating up and down, and a metro:
Keyboard as toggles:
Some utils for Jitter:
Tuning and detuning:
Vibrato and 1/f drift in MIDI, frequency, and signal domain:
Tremolo and FM:
Display of sound signal:
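As a side note, the 1/f drift mentioned above can be approximated cheaply. The sketch below is my own illustration (not the tl.objects implementation) of Voss’s algorithm: several random sources updated at octave-spaced rates and summed, giving roughly 1/f-distributed fluctuations:

```python
import random

def pink_noise(n, n_sources=8, seed=0):
    """Approximate 1/f noise by summing random sources, where
    source k is refreshed every 2**k samples (Voss's algorithm)."""
    rng = random.Random(seed)
    sources = [rng.uniform(-1, 1) for _ in range(n_sources)]
    out = []
    for i in range(n):
        for k in range(n_sources):
            if i % (2 ** k) == 0:          # octave-spaced update rates
                sources[k] = rng.uniform(-1, 1)
        out.append(sum(sources) / n_sources)
    return out

drift = pink_noise(1024)
```

At audio rate this would be done per sample; for slow parameter drift, the same generator can simply be clocked far less often.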
If you experience any problems with the objects please let me know.
Update: Wesley Smith posted a really nice solution for extending tl.jit.info so that it can display info for any matrix in the patch. I’ll see if I can roll this into the distro over the next few days.
No new features but a few fixes to issues I’ve discovered.
Robert Moog passed away yesterday.
Fans and friends can direct their sympathies or remembrances to www.caringbridge.com/visit/bobmoog.
numusic is a contemporary music festival happening in Stavanger August 24-28. The program looks really interesting, including a symposium with David Toop, and Karlheinz Stockhausen presenting a number of works including Gesang der Jünglinge, Kontakte, and Hymnen.
On September 19 there will be a one-day seminar on sound art in Oslo. The topic can roughly be translated as “sound in contemporary art – art in contemporary music”.
I’d really like to attend, but my schedule doesn’t permit it. But if you happen to be in Oslo, I believe it will be well worth visiting.
SEMINAR ON SOUND ART
Sound in contemporary art – art in contemporary music
HØSTUTSTILLINGEN, KUNSTNERNES HUS
MONDAY 19 SEPTEMBER 2005, 16.15-20.00
Sound art makes up an ever larger part of Norwegian contemporary art production. In this seminar we want to take a closer look at what sound art is and can be. This will be discussed through lectures and presentations of works by representatives from various professional communities.
REGISTRATION by e-mail to firstname.lastname@example.org by Friday 16 September. For further information, contact Nina Schjønsby, tel. 41237923.
Admission NOK 150 / students and members of NBK NOK 100 (light refreshments included).
TILMAN HARTENSTEIN has been a programme maker at NRK P2 since the late 1980s and has for several years worked on sound art projects and lectured on sound aesthetics and the history of sound art.
STEINAR SEKKINGSTAD is an art historian. He currently works as a university lecturer at the Department of Cultural Studies and Art History at the University of Bergen.
BJARNE KVINNSLAND is a composer and musician, and works as a producer at NOTAM. He has worked extensively with music for dance, film, theatre, and radio. In recent years he has worked on various commissions for public art involving sound and light.
YNGVILD FÆRØY AND SØSSA JØRGENSEN are visual artists and curators. Together they founded BallongMagasinet in 1996, a project for presenting audible art. BallongMagasinet was originally a weekly radio programme for sound art, and now exists as a sound gallery on the net.
The seminar is supported by the Norwegian Council for Cultural Affairs (Norsk kulturråd).
In 1997 my wife got a two-year contract as a fishery biologist in Namibia. I was still studying composition at the Grieg Academy. I stayed behind for one semester and arrived in Luderitz by Christmas at the end of 1997.
Up until then I’d mainly been writing for instruments, and I had no experience with computer music. Purchasing a piano in Luderitz seemed out of the question, it being a small fishing village on the Skeleton Coast, far away from everywhere. But I needed something to do and I wanted to continue working on music, so my wife spent most of her sign-on fee, meant to cover extra expenses, on a Mac 8600/200, a synth, a MIDI controller, and a bunch of software for me.
My composition teacher back then, Morten Eide Pedersen (he is also a supervisor for my current fellowship), suggested that a program named Max might be of interest to me. I had never heard of it before and didn’t have a clue what it was all about. From time to time, when I’ve received suggestions pointing in completely new directions, I’ve jumped at them just to get off balance, hoping that it will bring me somewhere new, somewhere I didn’t know of beforehand. This was one of those times.
Composing was difficult and frustrating to me at that time. I could have a lot of very short ideas and sketches, but as soon as I sat in front of a sheet of paper I blanked out. In addition to Max, I was also bringing along Finale (I knew that one from beforehand) and Opcode Vision, a sequencer. Neither Finale nor Vision appealed to me. They shared some of the same assumptions: you are about to create a linear piece of music spanning time from point A to B, most likely marching along at 4/4 and 120 bpm. My ideas didn’t really want to go from A to B; the theme at A seemed perfectly happy as is and didn’t have any desire to move anywhere else. It had no story to tell, no drama with opposing characters, no desire for transformation, confrontation, and resolution.
I had been obsessed with the music of Brian Eno for at least 10 years already. Although I started out listening to his early rock albums, in particular Taking Tiger Mountain by Strategy and Another Green World, I soon got preoccupied with his ambient records, not only listening to them and enjoying them, but also absorbing the philosophy behind them, suggesting a different aesthetic and a different way of creating, thinking of, using, and listening to music. Via Eno, the work and thinking of Erik Satie also became very important to me.
I don’t know if it was a coincidence or a sign of predisposition, but the punk band I played in a few years earlier was named Muzak. Now I was listening to ambient, or “intelligent”, muzak.
When I started studying composition I had a clear “project” defined for myself, but it was difficult to fit that in with the expectations I felt as a student. My first composition teacher, the Danish composer Mogens Christensen, never really asked why I was studying composition, what I’d been doing before, what I wanted to do, or why. I felt strong expectations towards composing complex contemporary music, and struggled to create something that I could relate to myself. On the other hand, Mogens was really good at the craftsmanship bit, so I chose to emphasize that, digging deep into renaissance vocal polyphony in the style of Palestrina, harmonization, and orchestration.
I learned a lot from that, but concerning composition itself, after two years I was at a loss. At that time Mogens moved back to Denmark and Morten became my new composition teacher. He waited patiently for a year while I struggled to sort myself out. Slowly I started writing again, completing two movements for string quartet.
In spring 1997 all of the composition students went to Paris for a one-week study trip. During that week we visited Ircam, attended a lot of concerts, and managed to get a class with the late Gérard Grisey. That turned out to be a very good thing for me. He did not like what I did, but he understood what I aimed at, probably better than I did myself. To me, that confirmation of a message coming through was important. He also asked a question that was very enlightening to me, and that I’m still asking myself. When he looked at and listened to the movements for string quartet, he said that I seemed to pull in two opposite directions at the same time: on the one hand aiming for a strong degree of simplicity, while at the same time striving for complexity. He asked me what I really wanted to do, and which of these two directions I really preferred.
I was still very confused by the time I moved to Namibia. I started out looking at all the programs I had brought with me, in addition to struggling to set up a working internet connection, dialing long distance on a bad line to Windhoek. I was new to the Mac back then, owned the only Mac in Luderitz, and ended up driving to Windhoek, a little more than 800 km one way, in order to get assistance.
Finale and Vision did not appeal to me; once I stood in front of a timeline I felt lost. Max was something different. This was like playing with Lego again. I quickly realized that in order to be able to use it, I would have to learn it thoroughly, so that my ideas wouldn’t vaporize while I was searching the manual looking for an object to do the job.
I created my first piece of music based on Max in April-May 1998. By then I had had enough time and silence to really ask myself why I started composing in the first place, and came to the conclusion that I had to go back to the ideas I’d had before I started studying and pursue them until I either was through with them, or else keep working on them for the rest of my life.
My first Max piece was a revelation to me. The technique behind it was fairly simple, all MIDI-based and basically using the “metro” object to get something to happen at irregular intervals (by randomly changing the interval). But the change in the way of thinking was tremendous to me. Instead of walking along a line from A to B, deciding step by step what’s supposed to happen, I could now approach the music from a bird’s perspective and rather decide how often something was supposed to happen. The composition was no longer a line segment, but an infinite line hypothetically stretching infinitely far in both directions past the two points A and B. This became my first music for installation and my first piece of electronic music. It was never presented in any real setting, but I thought of it as being installed at the ghost town of Kolmanskop, 10 km into the Namib desert from Luderitz. I named it “texture I”, assuming that there were more to come. I later found out that Eno had released a limited edition CD in 1989 with the same name.
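The core technique, a metro whose interval is re-randomised at every tick, is simple to sketch outside Max too. Here is a hypothetical Python rendering of the same idea (the interval bounds are picked arbitrarily for illustration):

```python
import random

def irregular_events(duration, lo=0.5, hi=3.0, seed=1):
    """Generate event times over `duration` seconds, drawing a new
    interval uniformly from [lo, hi] after every event, the same idea
    as feeding a Max metro a random interval on each bang."""
    rng = random.Random(seed)
    t, events = 0.0, []
    while True:
        t += rng.uniform(lo, hi)
        if t >= duration:
            break
        events.append(t)
    return events

times = irregular_events(60.0)
```

Because the process has no beginning or end built in, it can run for an hour or a month; the “composition” is the density and range of the intervals, not any particular sequence of events.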
As I kept working on installation music I developed some analogies that I used to explain to myself what kind of impact I wanted them to have.
In the same way as with a painting, I wanted the music to reveal what it consisted of at first glimpse. You shouldn’t have to spend 20, 5, or 2 minutes in order to hear what components make up the sound image. Instead of a horizontal listening, I aimed for a vertical listening. The act of continuing to listen should be similar to continuing to stand in front of a painting, looking deeper into it, rather than seeing a story spanning time unfold in front of you.
This did not mean that I wanted the music to be totally static. I wanted to have a certain drift, variations at various time scales. But all of these should happen within a certain range, and you shouldn’t have to spend long to realize the size of the space spanned by these drifting parameters. 2/1 from “Music for Airports” is a very effective and minimal example in this respect. I likened this slow drift to Erik Satie’s principle of composing works in groups of three, like viewing the same sculpture from different angles. Another way I explained this to myself was by thinking of the real composition as an idea in a platonic sense, with the actual sound at any one time being a shadow of that idea.
Yet another analogy seemed important to me. Earlier on I had been walking a lot in the Sandviken mountain close to where I grew up. Some places I came back to again and again, and I knew them by heart. I’d see them changing due to shifting wind and clouds while I was there, and change with the season and weather from one time to the next. I wanted to create a similar sonic environment that you could visit and revisit. I wanted those places to have a similar drift, subtle fluctuations at various time scales, a kind of artificial nature free from traces of human influence. For a long time I have wanted to use meteorological data for my installations.
As I turned from composing sheet music to working on music for installations, I felt that I had to develop a new and different craftsmanship. Instead of meticulously deciding which notes were to stand next to each other, bar by bar, as in counterpoint study or harmonization, I had to develop compositional techniques working at a meta level, composing the algorithms that would generate the actual sound. Creating a piece of music of a fixed duration and looping it was never an option, and I’ve always been bored with video installations working this way.
How to approach the generative processes is an ongoing research.
The Generator.x project is a conference exhibition and weblog examining the role of software and generative strategies in current digital art and design.
When my Mac titanium laptop gets really old I’m going to dismantle the screen pick up a knife and do some serious modification so that the apple becomes a Warhol banana. Then I’ll hit the road as a laptop musician.
With an annual increase of 58 percent, Africa is the world’s fastest-growing cellphone market, according to the NY Times. Check out the slide show.
The Intergovernmental Panel on Climate Change (IPCC)
IPCC was established in 1988 by the World Meteorological Organization (WMO) and the United Nations Environment Programme (UNEP). The role of the IPCC is to assess, on a comprehensive, objective, open, and transparent basis, the scientific, technical, and socio-economic information relevant to understanding the scientific basis of the risk of human-induced climate change, its potential impacts, and options for adaptation and mitigation. The IPCC does not carry out research, nor does it monitor climate-related data or other relevant parameters. It bases its assessment mainly on peer-reviewed and published scientific/technical literature.
In 1992 IPCC released six emissions scenarios, referred to as the IS92 scenarios, providing alternative emissions trajectories spanning the years 1990 through 2100 for greenhouse-related gases. These scenarios were intended for use by atmospheric and climate scientists in the preparation of scenarios of atmospheric composition and climate change. In many ways the IS92 scenarios were pathbreaking. They were the first global scenarios to provide estimates of the full suite of greenhouse gases.
In 1995 the IPCC 1992 scenarios were evaluated. The evaluation recommended that significant changes (since 1992) in the understanding of driving forces of emissions and methodologies should be addressed. These changes in understanding relate to e.g. the carbon intensity of energy supply, the income gap between developed and developing countries, and sulfur emissions. This led to a decision by the IPCC Plenary in 1996 to develop a new set of scenarios. The new scenarios were published in 2000 and now form the basis for world-wide research on long-term climate change and its possible consequences.
Future greenhouse gas (GHG) emissions are the product of very complex dynamic systems, determined by driving forces such as demographic development, socio-economic development, and technological change. Their future evolution is highly uncertain. Scenarios are alternative images of how the future might unfold, and are an appropriate tool with which to analyze how driving forces may influence future emission outcomes and to assess the associated uncertainties. They assist in climate change analysis, including climate modeling and the assessment of impacts, adaptation, and mitigation. The possibility that any single emissions path will occur as described in scenarios is highly uncertain.
While creating the scenarios, four different narrative storylines were developed to consistently describe the relationships between emission driving forces and their evolution, and to add context for the scenario quantification. Each storyline represents different demographic, social, economic, technological, and environmental developments, which may be viewed positively by some people and negatively by others.
The four storylines are labeled A1, A2, B1, and B2:
The A1 storyline is further divided into three groups: A1FI (fossil fuel intensive), A1B (balanced), and A1T (predominantly non-fossil fuel). From this total of 3 + 3 = 6 groups, a total of 40 scenarios have been developed.
Bjerknes Center for Climate Research
The Bjerknes Centre for Climate Research (BCCR), situated in Bergen, is the largest climate research group in Norway. BCCR is a joint climate research venture between the University of Bergen (UoB), the Institute of Marine Research (IMR), and the Nansen Environmental and Remote Sensing Center (NERSC). BCCR is named in honour of Vilhelm and Jacob Bjerknes, pioneers of modern meteorology.
Research at BCCR has a particular emphasis on the climate sensitivity and variability of high-latitude regions and the role of the oceans in the climate system. The Bergen Climate Model (BCM) is an important tool in this research. BCM is a global climate model that provides simulations of the Earth’s past, present, and future climate. BCM is a fully coupled atmosphere-sea ice-ocean general circulation model whose components can be run in a stretched orthogonal grid system mode, particularly suited to describing the major atmosphere-sea ice-ocean features of a chosen region of the high latitudes. BCM is used for research in a number of areas: model development, climate variability and interactions, climate sensitivity studies, paleoclimate, polar climate, and climate change and assessment. Climate change simulations include historical simulations of the climate from 1870 to the present, studies of the present climate, and various scenarios for the future, including 21st century IPCC scenarios.
This research is still in progress. Some simulations have been run, others are still to be done. The three IPCC SRES scenarios that they have data for at the moment are SRES A1B, SRES A2, and SRES B1. Although the model iterates in steps of 30 min., the amount of data, if all of it were to be stored, would be so large that only more compressed data sets are kept for further studies: monthly-mean values for a large number of parameters, as well as daily mean values for selected periods (2046-2065, 2081-2100). For these periods, 6-hour resolution data is stored as well, but this has to be rolled off tape, scripts have to be made to retrieve it, etc., so it is not as easily accessible.
I’ve been granted access to some of their data, and for Generator.x I’ll be using climate data from 2046-2065 as input to drive the algorithms generating the sound.
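I haven’t settled on the actual mappings yet, but the basic principle of letting a climate time series drive a synthesis parameter can be sketched as follows. The data values and the target range below are placeholders, not the real BCM data or the final Generator.x setup:

```python
def map_series(values, out_lo, out_hi):
    """Normalise a data series to [0, 1] and rescale it to a
    synthesis-parameter range, e.g. a filter cutoff in Hz."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0            # guard against a constant series
    return [out_lo + (v - lo) / span * (out_hi - out_lo) for v in values]

# Hypothetical monthly-mean temperatures driving a cutoff frequency:
temps = [11.2, 12.8, 14.1, 13.5, 12.0, 10.4]
cutoffs = map_series(temps, 200.0, 2000.0)
```

The interesting compositional work is of course in choosing which parameters map to what, and at which time scale the series is read.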