Interaktiver Pavillon

The interactive pavilion is a public installation in Dresden, Germany, created by the Trans-Media-Akademie Hellerau. It plays interactive audio controlled by the visitor. It has been running different artists' audio setups, but right now it's running mine, and I've promised some people some basic documentation, so here we go!

A camera is mounted at the top of the pavilion, looking down at the floor. A computer analyses the picture using a program called EyeCon, which then passes OSC messages on to an audio program that creates the sounds: usually Max/MSP, but in my case Reaktor 5.
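To give an idea of what the receiving end of that chain looks like, here is a minimal Python sketch of an OSC listener using the python-osc library. The addresses and the port are placeholders I made up for the example, not what EyeCon actually sends in this setup.

    # Minimal OSC receiver sketch, assuming the python-osc library.
    # The addresses and port below are placeholders, not EyeCon's real output.
    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import BlockingOSCUDPServer

    def on_line(address, *values):
        # A line element reporting a crossing (%Trig) or a position (%Pos).
        print(address, values)

    def on_field(address, *values):
        # A field element reporting X, Y and size.
        print(address, values)

    dispatcher = Dispatcher()
    dispatcher.map("/eyecon/line1", on_line)
    dispatcher.map("/eyecon/field1", on_field)

    server = BlockingOSCUDPServer(("0.0.0.0", 8000), dispatcher)
    server.serve_forever()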

The sound consists of four basic elements: a crackling noise, a piano-like synth, a pad-like synth and the drums.

Looking at the EyeCon screenshot you can see that there are four lines defined; three of them trigger the piano sound when crossed (%Trig). The notes are selected by where you touch the line (%Pos), and I let Reaktor filter them to a Phrygian scale so that it always makes musical sense. The fourth line chooses the note for the pad sound, but it doesn't trigger it the way the others do.
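The scale filtering happens inside the Reaktor ensemble, but as a rough sketch of the idea, this is how a 0-1 line position could be quantized to a Phrygian scale. The root note and the two-octave range are assumptions for illustration.

    # Sketch of quantizing a 0..1 line position (%Pos) to a Phrygian scale.
    # Root note and two-octave range are assumptions, not the actual settings.
    PHRYGIAN = [0, 1, 3, 5, 7, 8, 10]   # semitone offsets of the Phrygian mode

    def pos_to_note(pos, root=52, octaves=2):
        """Map a position in 0..1 to a MIDI note on the Phrygian scale."""
        steps = len(PHRYGIAN) * octaves
        index = min(int(pos * steps), steps - 1)
        octave, degree = divmod(index, len(PHRYGIAN))
        return root + 12 * octave + PHRYGIAN[degree]

    # Example: touching the line halfway along picks a note about an octave up.
    print(pos_to_note(0.5))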

There are two field definitions. The one called ChaOsc plays the pad synth depending on how much activity goes on in its area, which covers half the floor.
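As a small sketch of that idea (the real thing is the ChaOsc ensemble inside Reaktor), the activity value can be smoothed into a pad level so the sound swells and fades rather than jumping; the smoothing coefficient here is arbitrary.

    # Sketch: smooth the field's activity value into a pad level (0..1).
    # The smoothing coefficient is arbitrary, not taken from the actual patch.
    pad_level = 0.0

    def update_pad_level(activity, smoothing=0.95):
        global pad_level
        pad_level = smoothing * pad_level + (1.0 - smoothing) * activity
        return pad_level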

The other field covers the whole floor. This one handles the crackling noise and the drums, and it sends the object's X/Y position as well as a "size" value. This is a number that tells how much of the area is covered, and it is used simply to see whether anyone is there, switching the noise sound on/off. X controls the filter's character and Y sets the level.
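A hedged sketch of that mapping; the size threshold and the filter range are made up for the example.

    # Sketch of the noise mapping: "size" gates the noise on/off, X sets the
    # filter character, Y sets the level. Threshold and cutoff range are
    # assumptions for illustration only.
    SIZE_THRESHOLD = 0.02   # below this we assume the floor is empty

    def noise_controls(x, y, size):
        gate = size > SIZE_THRESHOLD          # is anyone there at all?
        cutoff_hz = 200.0 * 2 ** (x * 5)      # X sweeps the filter over ~5 octaves
        level = y if gate else 0.0            # Y sets the level, muted when empty
        return gate, cutoff_hz, level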

The X value is also used for the drums, though it only uses half of the floor for this (so that one half is pad synth, the other is drums). I use a sample player in Reaktor called Splitter for this. The incoming number is scaled to 1-16, and every time it passes an integer it triggers a sound according to that number, while the decimals in between affect different settings like grain size or pitch. (In Reaktor terms this means it is separated by a modulo module: the Div output selects and triggers sounds, while the Mod output goes to wherever the LFO used to go.) The triggering is also time-quantized using a clock, so that it becomes a musical rhythm. I also used to run this through a nice rhythmic grain delay, but that became too much for the CPU.
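The same divide/modulo idea expressed outside Reaktor, as a sketch: only the Div/Mod split mirrors the description above; the parameter names and ranges are made up, and the clock quantization is left out.

    # Sketch of the Splitter logic: scale the incoming X value to 1..16, let the
    # integer part (the "Div" output) select and trigger a sample whenever it
    # changes, and let the fractional part (the "Mod" output) modulate a setting
    # such as grain size or pitch. Ranges are assumptions for illustration.
    last_slot = None

    def splitter(x):
        global last_slot
        scaled = 1 + x * 15                 # map 0..1 onto the range 1..16
        slot = int(scaled)                  # "Div": which of the 16 samples
        fraction = scaled - slot            # "Mod": modulation between integers
        triggered = slot != last_slot       # fire only when an integer is crossed
        last_slot = slot
        grain_size = 0.01 + fraction * 0.1  # e.g. 10..110 ms grains (arbitrary)
        return triggered, slot, grain_size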

When I set this up at home I was using a simple webcam and my P4 2.4GHz machine, which worked fine except for a minor latency. At the pavilion there is a more powerful computer, but it uses a PCI framegrabber card (with higher resolution and no latency at all), which is obviously heavier on the machine. This is also why Reaktor is running at 22 kHz: to avoid pops and clicks caused by CPU spikes. Usually they use a separate computer just for EyeCon, but not at the moment.

Dance performance
Emelie Bardon

Sound
Christian Björklund

Reaktor user library content used

3X (James Walker-Hall)
ChaOsc (Martin Brinkmann)
NoiseToy (Kristian Thom)
bubuMapper (Laurent Veliscek)

Screenshots EyeCon

Screenshot Reaktor

Site pictures (Konrad Behr)