For an upcoming project I needed to make use of a six-screen surround projection system. With 2D orthographic animations this is fairly trivial: you can wrap a scene whose width equals that of all six screens around the space and have things jump from one edge to the other without issue. When I started dealing with 3D perspective cameras, however, two problems became apparent: the frustums of the cameras need to be aligned properly or objects will either disappear between screens or show up in two at once, and there is some distortion at the edge of each camera.
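The 2D wrap-around trick boils down to taking positions modulo the combined screen width. A minimal sketch (the screen size and count here are just illustrative assumptions):

```python
# Wrap an object's x position around a six-screen canvas.
# Assumed layout: six screens of 1920 px each, side by side.
SCREEN_WIDTH = 1920
NUM_SCREENS = 6
TOTAL_WIDTH = SCREEN_WIDTH * NUM_SCREENS  # 11520 px

def wrap_x(x):
    """Map any x coordinate back onto the 0..TOTAL_WIDTH canvas."""
    return x % TOTAL_WIDTH

# An object moving past the right edge reappears on the left:
print(wrap_x(TOTAL_WIDTH + 100))  # 100
# ...and vice versa for leftward motion:
print(wrap_x(-50))  # 11470
```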
For the first problem, I had to make a compromise between the physical layout of the projection system in the space and the constraints of the cameras. The room is rectangular, with two projectors on each long side and one on each end. Trying to represent this directly with the cameras would produce horrendous results, so in the end the cameras took precedence. All six cameras have a viewing angle of 60 degrees and are arranged in a hexagon. This makes the edges of one viewing area coincide with those of its neighbors, solving the disappearing/double-vision issue.
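The reason six 60-degree cameras tile cleanly is simple arithmetic: six cameras yawed 60 degrees apart cover the full 360 degrees, and each camera's half-angle of 30 degrees lands exactly on its neighbor's. A quick sketch of that check:

```python
import math

NUM_CAMERAS = 6
FOV_DEG = 360 / NUM_CAMERAS  # 60-degree horizontal viewing angle per camera

# Yaw of each camera (degrees), spaced evenly around the hexagon.
yaws = [i * 360 / NUM_CAMERAS for i in range(NUM_CAMERAS)]

# The right edge of camera i's frustum should coincide with the
# left edge of camera i+1's frustum.
for i in range(NUM_CAMERAS):
    right_edge = (yaws[i] + FOV_DEG / 2) % 360
    left_edge_next = (yaws[(i + 1) % NUM_CAMERAS] - FOV_DEG / 2) % 360
    assert math.isclose(right_edge, left_edge_next)

print("all six frustum edges line up")
```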
This solution would be perfect for a hexagonal screen setup, but in the rectangular orientation it emphasizes the distortion at each camera's edge, especially between two screens on the same wall. Horizontal keystoning would help minimize this, at the cost of slightly distorting the rest of the image. In the end, since this project is rather abstract to begin with, I chose to leave the distortion in.
The next hurdle is actually rendering the scene. Blender can only render one camera at a time, so a bit of tweaking is involved. Since I was going to distribute the rendering across multiple computers anyway, I decided to let each computer handle a different camera: I put a copy of the .blend file on each machine, set the appropriate camera and output folder, and started rendering (Blender's built-in network rendering wasn't behaving for me, or else I would have used that instead). While that works perfectly well, a better solution I figured out after starting (and that I will use for the other parts of this project) is to make multiple scenes, each set to a different camera, and use the compositor to render each one to a different File Output node. I am already doing quite a lot of compositing, so this would simply mean making a group from those nodes and running each camera through an instance of it. Sounds like a job for library linking!
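A rough sketch of how the multi-scene idea might be wired up in Blender's Python API — this is an assumption about the setup, not tested code from the project: it presumes you've already made one linked-copy scene per camera, the camera names are made up, and `import bpy` only works inside Blender's bundled Python, so it's guarded here:

```python
# Point each scene at its own camera and send its render to a
# per-camera File Output node in that scene's compositor.

# Pure helper: build the per-camera output folder (works outside Blender too).
def output_dir(base, cam_name):
    return "%s/%s" % (base, cam_name)

try:
    import bpy  # only available inside Blender

    for i, scene in enumerate(bpy.data.scenes):
        cam_name = "Camera.%03d" % i  # hypothetical camera naming scheme
        scene.camera = bpy.data.objects[cam_name]
        scene.use_nodes = True  # enable the compositor for this scene

        tree = scene.node_tree
        layers = tree.nodes.new("CompositorNodeRLayers")
        out = tree.nodes.new("CompositorNodeOutputFile")
        out.base_path = output_dir("//render", cam_name)
        tree.links.new(layers.outputs["Image"], out.inputs[0])
except ImportError:
    pass  # running outside Blender; only the helper above is usable
```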
More on this as I figure out new tricks =).