James Alliban's Bipolar presents an audiovisual mirror that generates a soundscape from the presence and motion of its subjects. The resulting audio data (including sound from the participant) is used to transform the body into a distorted portrait that fluctuates between states of chaos and order. The video also does a tremendously effective job of capturing the subjects' fascination with the installation, which offers instant gratification and a sense of endless possibilities.
How did you come up with the idea for this video?
This piece started out as an experiment. I was speaking at a design festival and was asked (along with the other speakers) to put together a short “thanks for coming” video to play at the end of the event. I wanted to do something a little more interesting than simply speaking into my smartphone. Originally I planned to make a face-tracking application that warped and glitched up my face in response to my speech. Over time this evolved into a whole-body experience using the Kinect camera.
In the process I came across an effect so dramatic that I decided to investigate further and publish Bipolar (so named for its constant fluctuations between states of order and chaos) in the form of a short video. People started becoming interested in exhibiting the piece, so I modified it to become an installation. The project became a collaborative effort when Liam Paton of Silent Studios added an interactive sound component. It has been exhibited at several events and exhibitions since.
We love to geek out, so indulge us: what are we looking at here?
The basic effect isn’t too involved. I use openFrameworks, an open-source C++ toolkit for creative coding. Around 30 times per second, the depth data and video feed from an Xbox Kinect camera are combined to create a 3D model of the visitors. I use the sound data coming in from the microphone to extrude every second point in the 3D model. The points are extruded in the direction they are facing to create a spatial aesthetic that isn’t really possible with the data from a standard camera.
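The Bipolar source isn't published here, so the following is only a minimal sketch of the extrusion step he describes: every second vertex of the mesh is pushed along its normal by an amount driven by the microphone level. The struct and function names (`Vec3`, `extrudeAlternate`, `maxExtrusion`) are my own illustrative choices, not part of the actual project.

```cpp
#include <cstddef>
#include <vector>

// Minimal stand-in for a mesh vertex / normal.
struct Vec3 {
    float x, y, z;
};

// Extrude every second vertex along its normal, scaled by the
// current audio level (e.g. a smoothed microphone amplitude in
// the 0..1 range). Even-indexed vertices are left untouched,
// producing the spiky, fluctuating surface described above.
// Assumes verts and normals have the same length.
std::vector<Vec3> extrudeAlternate(const std::vector<Vec3>& verts,
                                   const std::vector<Vec3>& normals,
                                   float audioLevel,
                                   float maxExtrusion) {
    std::vector<Vec3> out = verts;
    for (std::size_t i = 1; i < out.size(); i += 2) {
        float d = audioLevel * maxExtrusion;
        out[i].x += normals[i].x * d;
        out[i].y += normals[i].y * d;
        out[i].z += normals[i].z * d;
    }
    return out;
}
```

In an openFrameworks app a loop like this would run each frame over the mesh built from the Kinect depth data, with the heavy lifting (as the next answer notes) moved to the GPU.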
Beyond this it gets quite involved. There is a great deal more going on behind the scenes. I utilise the power of the graphics card to smooth the 3D model and calculate a bunch of data that speeds up the application and generally improves the look of the piece. I have a user interface within the app with about 15-20 pages of buttons and sliders that allow me to tweak the piece until I'm satisfied. In terms of the sound, the participant's motion is calculated and sent to a separate piece of software built in Max/MSP by Liam. He then creates the twisted soundscape based on the amount of activity and several other factors such as location and proximity. This audio is then picked up by the microphone along with any other sounds in the environment and visualised on the body of the subject.
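The exact motion metric sent to Liam's Max/MSP patch isn't described, but one common way to derive an "activity" scalar from a depth camera is the mean absolute difference between successive depth frames: a still scene yields roughly zero, large movements yield a high value. The sketch below assumes depth frames as flat `float` arrays of equal length; the function name `activityLevel` is my own.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Mean absolute difference between two depth frames, as a rough
// per-frame "activity" measure. In an installation like this, a
// value of this kind (plus factors such as location and proximity)
// could be streamed to a separate sound application each frame.
// Assumes both frames have the same length.
float activityLevel(const std::vector<float>& prevDepth,
                    const std::vector<float>& currDepth) {
    if (currDepth.empty()) {
        return 0.0f;
    }
    float sum = 0.0f;
    for (std::size_t i = 0; i < currDepth.size(); ++i) {
        sum += std::fabs(currDepth[i] - prevDepth[i]);
    }
    return sum / static_cast<float>(currDepth.size());
}
```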
What’s been the most satisfying aspect, for you, about this particular piece?
While the learning process and the serendipitous discoveries along the way were very enjoyable, the best part of this project was the final result. Watching people discover and interact with the installation for the first time is a wonderful experience, one I have with every piece I put in front of the public. Standing anonymously at the back of the room and watching people enjoy an unexpected experience that I’ve worked hard to create is always very rewarding.
What’s the last great thing you read, saw or heard?
I recently went to see United Visual Artists' new piece "Momentum" at the Barbican’s Curve gallery. It consists of a series of 12 mechanical light pendulums that swing uniformly in the dark space. The effect was very contemplative and, for me at least, slightly eerie due to the unnatural behaviour of these slow-moving spotlights.
What’s next for you?
I’m speaking to a couple of curators about exhibiting Bipolar, and in the meantime I’m continuing to explore new ways to represent the body through interactivity. I’m working on several projects at the moment, a couple of which will drop soon, so keep an eye out.