This post was originally planned to cover the setup and use of Multi Stereo Camera Rigs, but as I began to type I realised I first needed to fully explain Stereo Depth Compression: the very reason we resort to MultiRigs in the first place! The fundamental nature of human stereo vision only allows us to determine stereoscopic depth up to roughly 6 meters away. Most optometrists test your long-distance vision by projecting patterns on a wall roughly 6 meters in front of you. Beyond the 6-meter mark begins a process I like to call ‘depth compression’: an exponential fall-off in perceivable depth. It ends at a point where we cannot determine depth from stereo vision at all; instead we switch to cues such as distance haze, the scale of objects, and occlusion (whether one object sits behind another). Our brains are pretty impressive at tricking us into thinking we can perceive depth when we actually cannot.
Now, because I work in a digital environment where everyone talks in pixels (1920, 2048, etc.), I also measure my stereoscopic depth in pixels. Many stereographers talk about a percentage deviation; I work in pixel deviation, or separation. This separation is the measured difference in position between two matching pixels in the left and right frames. See my post ‘Mechanics & Mathematics of Stereoscopy‘ for the calculation. So does 1px represent 1 meter in the real world? Not quite. As you saw in my original post, we have a limited number of pixels (14px) to represent the entire depth of the shot. Many stereographers, Phil McNally of DreamWorks and myself included, go well beyond a 14px deviation; in fact we often reach values as high as 32px, and possibly beyond for some unique shots! In the two examples in this post I kept it simple and only went up to 6px, just to keep the graphics simpler.
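The pixel-separation idea above can be sketched in a few lines of code. This is my own minimal sketch assuming a parallel camera rig with a horizontal image shift setting the zero-parallax (screen) plane; all the rig numbers (65mm interaxial, 30mm lens, 36mm sensor, 2048px frame) are illustrative assumptions, not values from the post:

```python
# Illustrative sketch: pixel separation of a point at a given distance,
# for a parallel stereo rig converged (via horizontal image shift) at a
# zero-parallax distance. All default values are hypothetical.

def separation_px(distance_m, interaxial_m=0.065, focal_mm=30.0,
                  sensor_width_mm=36.0, image_width_px=2048,
                  zero_parallax_m=6.0):
    """Pixel separation of a point `distance_m` from the rig.

    Zero at the zero-parallax plane; negative in front of it,
    positive behind it.
    """
    focal_m = focal_mm / 1000.0
    # Sensor-space disparity after the shift that places zero
    # parallax at `zero_parallax_m`:
    d_sensor = interaxial_m * focal_m * (1.0 / zero_parallax_m - 1.0 / distance_m)
    # Convert metres on the sensor into image pixels:
    return d_sensor / (sensor_width_mm / 1000.0) * image_width_px

# Sweeping distances shows depth compression directly: equal steps in
# distance produce ever-smaller changes in pixel separation.
for z in (2, 4, 6, 10, 50, 100, 500):
    print(f"{z:>4} m -> {separation_px(z):+.2f} px")
```

Note how the separation change from 50m to 100m is a tiny fraction of the change from 6m to 10m; that shrinking difference is the depth compression curve in numbers.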
Depth compression increases visibly as you move your zero parallax closer and closer to the camera, ultimately reaching a point where your entire scene flattens out and you may as well be filming in 2D. For example, a regular head shot often has the subject placed close to camera, filling most of the frame, which means the zero parallax has more than likely been pulled forward too. Examine the background of such a shot and it will more than likely appear incredibly flat.
So the closer I move the zero parallax to camera, the more the background flattens out. This happens because all the usable depth pixels are slowly pulled forward, possibly leaving just 1px to capture a few hundred meters of ‘virtual’ 3D depth. Like a low-resolution camera, that is simply not enough resolution: all that depth ends up captured in barely visible sub-pixels!
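That trade-off can be made concrete with a self-contained sketch. Here I assume (my own illustration, not the author's method) a fixed 6px budget for the nearest object: pulling the zero parallax forward forces a smaller interaxial to stay within budget, which in turn shrinks the pixel span left for the entire background. The helper names and rig numbers are hypothetical:

```python
# Illustrative sketch: solve for the interaxial that keeps the nearest
# object within a fixed pixel budget, then measure how many pixels are
# left to describe the background. All numbers are hypothetical.

FOCAL_M = 0.030               # 30 mm lens
SENSOR_W_M = 0.036            # 36 mm sensor width
IMAGE_W_PX = 2048
PX = SENSOR_W_M / IMAGE_W_PX  # metres of sensor per image pixel

def interaxial_for_budget(near_m, zero_parallax_m, budget_px):
    # Interaxial (m) that puts the nearest object exactly at the
    # pixel-separation budget (hypothetical budget-solving helper).
    return budget_px * PX / (FOCAL_M * abs(1.0 / zero_parallax_m - 1.0 / near_m))

def background_span_px(interaxial_m, far_m):
    # Pixel separation between a point at `far_m` and one at infinity,
    # i.e. the budget left over for everything beyond `far_m`.
    return interaxial_m * FOCAL_M * (1.0 / far_m) / PX

for near, zp in ((2.0, 6.0), (0.5, 1.0)):
    i = interaxial_for_budget(near, zp, budget_px=6)
    span = background_span_px(i, far_m=10.0)
    print(f"zero parallax {zp} m: interaxial {i * 100:.2f} cm, "
          f"10 m-to-infinity span {span:.2f} px")
```

With the zero parallax at 6m the background from 10m out still gets over a pixel of separation; pull it in to 1m and everything beyond 10m collapses into sub-pixel territory, exactly the flattening described above.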
Depth compression is both a good and a bad thing. It is a fact of reality, so if your shots are set up carefully they can feel very natural, just like real life… but often we need to trick the system and fake the depth, either to suit the needs of the shot or to enhance its depth. This is where the Multi Stereo Camera Rig comes into play, which I will cover in a future post.
Note: Lensing plays an important roll in stereography too, many old school cinematographers continue to shoot using long lenses, 70mm and above, these lenses effectively flatten the image in 2D, and thus have a huge impact on stereography, resulting in ‘cardboard cut out depth’. The recommended lensing for stereography range between 20-40mm with the optimum roughly 30mm. Yes! That is a very wide angle lens but its pretty close to the true FOV of the cinema goer sitting in the middle of an average cinema.