Vantage Points

February 24, 2014 in Electronic, Music, Video, Visual

A one-hour performance for six-screen fixed-media surround projection and 10.1-channel real-time surround sound. All video and music created and performed by Cole D. Ingraham. The work is divided into eight sections:

1. Draw and Erase (0:00)
2. Line Intersections (6:35)
3. Phase Web I (16:05)
4. Moments of Symmetry (20:30)
5. Phase Web II (36:31)
6. Jitter (39:25)
7. Organic Cage (45:25)
8. Phase Web I (56:26)

Performed October 28 and 29, 2011 at the University of Colorado’s ATLAS black box theater.

Vantage Points from Cole Ingraham on Vimeo.



Duality

August 30, 2013 in Electronic, Music, Video, Visual

An exploration of parallel gradual processes in audio and video, focusing on density and black/white balance (notan).
Video created with Processing. Music created with SuperCollider.

Duality from Cole Ingraham on Vimeo.



Canvas

February 15, 2013 in Electronic, Music, Video, Visual

An hour-long audio/visual work created and performed by Cole Ingraham. The original video is four-screen 1080p, and this version preserves that aspect ratio, which is why it appears so narrow. Video created using Processing. Music is a combination of SuperCollider, Moog Guitar, and Roland VG-99. Premiered on 2/15/2013 at the University of Colorado ATLAS black box theater. This version features the Orava String Quartet performing Aether at 7’43″, whereas the live performance uses SuperCollider and iPad with my custom OSC interface Un:Limit.

Canvas from Cole Ingraham on Vimeo.


Creating Surround Projection with Blender 2.58a (first attempt)

August 11, 2011 in News, Visual

For an upcoming project I needed to make use of a six-screen surround projection system. When working with 2D orthographic animations this is fairly trivial: you can wrap a scene whose width equals all six screens around the space and let objects jump from one edge to the other without issue. However, when I started dealing with 3D perspective cameras, two problems became apparent: the frustums from each camera need to be aligned properly or objects will either disappear between screens or show up on two at once, and there will be some distortion at the edge of each camera.
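In the 2D orthographic case, the wrap-around described above amounts to taking horizontal positions modulo the combined screen width. A minimal sketch in Python (the pixel dimensions here are hypothetical, not the actual projection resolution):

```python
# Hypothetical layout: six screens, each 1920 px wide.
SCREEN_W = 1920
NUM_SCREENS = 6
TOTAL_W = SCREEN_W * NUM_SCREENS  # combined canvas width

def wrap_x(x):
    """Wrap a horizontal position onto the combined canvas, so an
    object leaving the right edge reappears on the left."""
    return x % TOTAL_W

def screen_for(x):
    """Index of the screen (0-5) a wrapped position lands on."""
    return wrap_x(x) // SCREEN_W

# An object moving past the right edge of the last screen
# wraps back onto the first screen:
assert wrap_x(TOTAL_W + 100) == 100
assert screen_for(TOTAL_W + 100) == 0
```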

For the first problem, I had to compromise between the physical layout of the projection system in the space and the constraints of the cameras. The room is rectangular, with two projectors on each long side and one on each end. Trying to represent this directly with the cameras would produce horrendous results, so in the end the cameras took precedence. All six cameras are set to a viewing angle of 60 degrees and arranged in a hexagon. This makes the edges of each camera’s viewing area coincide with those of its neighbors, solving the disappearing/double-vision issue.
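The hexagonal arrangement works because six 60-degree frusta tile the full 360 degrees exactly. A quick check of that geometry in plain Python (this is just the angle math, not a Blender script):

```python
import math

NUM_CAMS = 6
FOV_DEG = 60.0  # horizontal viewing angle of each camera

# Yaw of each camera in the hexagon: 0, 60, ..., 300 degrees.
yaws = [i * 360.0 / NUM_CAMS for i in range(NUM_CAMS)]

# Each camera covers [yaw - FOV/2, yaw + FOV/2] horizontally.
edges = [(y - FOV_DEG / 2, y + FOV_DEG / 2) for y in yaws]

# The right edge of each camera coincides with the left edge of its
# neighbor, so nothing disappears between screens or appears twice.
for i in range(NUM_CAMS):
    right = edges[i][1] % 360.0
    left = edges[(i + 1) % NUM_CAMS][0] % 360.0
    assert math.isclose(right, left)
```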

This solution would be perfect for a hexagonal screen setup, but in the rectangular orientation it emphasizes the distortion at each camera’s edge, especially between two screens on the same wall. Horizontal keystoning would help minimize this while slightly distorting the rest of the image. In the end, as this project is rather abstract to begin with, I chose to leave the distortion in.

The next hurdle is actually rendering the scene. Blender can only render one camera at a time, so a bit of tweaking is involved. Since I was going to distribute my rendering across multiple computers, I decided to let each computer handle a different camera: I put a copy of the .blend file on each machine, set the appropriate camera and output folder, and started rendering (Blender’s built-in network rendering wasn’t behaving for me, or else I would have used that instead). While this is one perfectly good solution, a better one I figured out after starting (and that I will use for the other parts of this project) is to make multiple scenes, each set to a different camera, and use the compositor to render each to a different File Output node. I am already using quite a lot of compositing, so this would simply require making a group from those nodes and running each camera through an instance of it. Sounds like a job for library linking!
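The per-machine approach can be sketched as one shared table keyed by hostname, so every copy of the .blend file can pick its own camera and output folder. The hostnames, camera names, and paths below are hypothetical, purely for illustration:

```python
# Hypothetical assignment of render machines to cameras and output
# folders; each machine looks itself up in the shared table.
ASSIGNMENTS = {
    "render1": ("Camera.000", "//render/cam0/"),
    "render2": ("Camera.001", "//render/cam1/"),
    "render3": ("Camera.002", "//render/cam2/"),
    "render4": ("Camera.003", "//render/cam3/"),
    "render5": ("Camera.004", "//render/cam4/"),
    "render6": ("Camera.005", "//render/cam5/"),
}

def configure(hostname):
    """Return (camera name, output path) for this machine.

    Inside Blender, the result would then be applied with:
        bpy.context.scene.camera = bpy.data.objects[cam]
        bpy.context.scene.render.filepath = path
    """
    return ASSIGNMENTS[hostname]

cam, path = configure("render3")
assert cam == "Camera.002"
assert path == "//render/cam2/"
```

The multi-scene compositor approach avoids the table entirely, since all six File Output nodes render in a single pass on one machine.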

More on this as I figure out new tricks =).


Organic Cage (excerpt)

August 10, 2011 in News, Video, Visual

This is a small segment from an 11-minute, six-screen piece being created for my performance in the University of Colorado’s ATLAS black box theater. Here the audience sits inside an organic form that is constantly distorting and moving around the space.

Created entirely using Blender 2.58a.
