Judging by the presence of 3D camera technology at Photokina this year, 3D is gaining some serious momentum with manufacturers. For professional photographers, though, the last year has been a time of experimentation with a brand new "old" technique. But if you don't have a 3D camera, how do you get the 3D effect?
Markus Berger at Red Bull Photofiles gave this video tutorial on how to perfect the Photoshop method. Pro photographer Ian Coble took the other logical approach – he used two cameras. Read on to find out about his shoot with pro kayaker Tao Berman.
What brief did you get for the shoot?
What made this shoot so incredible wasn't just the sheer athleticism of Tao in front of the camera, but the amount of creativity I was afforded. When organizing the shoot, Red Bull essentially gave me free rein to shoot it however I wanted.
When did you start getting interested in 3D?
For the last few months I've been dying to try shooting something in 3D. Since I saw the James Cameron movie Avatar, I wanted to test 3D technology and see how it translated from video to still images. When this shoot with Tao came about, I knew this was the shoot to make it happen.
3D photography is still pretty new. What research did you do for the shoot?
I'd come across plenty of other 3D photos, but none of them were action or motion based. Everything I was coming across was static – whether it was a landscape, portrait or still life. Not finding any anaglyph (red-cyan 3D) photos of sports got me really excited. This was going to be something relatively new. Also, it's always fun to be the guinea pig on new things, as you never know what you're going to encounter or how it's going to turn out.
What shooting method did you use?
With new versions of Photoshop, it's now easier to create 3D images in post-production with a single camera and manipulate the single resulting image. But that's not what I wanted to do here.
With this shoot, I wanted to achieve a true 3D image by shooting with two cameras offset from one another. The advantage of using two cameras is that the resulting 3D image has more detailed depth and texture, as it does not require Photoshop to extrapolate and create new information. Even with the two-camera method, though, you still have to do some post-production editing.
When you have your two images, what post-production work is required?
The basic approach to creating a 3D image in Photoshop is to stack the two images as layers. Once there, you have to determine the focal point of your image and align the two frames. From there, you have to remove the red channel from the right eye's image and remove the green and blue channels from the left eye's image.
You can do this, for example, in the Levels window by selecting the appropriate channel and changing the output level from 255 to 0. Once you have a right-eye image (which will look cyan) and a left-eye image (which should appear red), you need to change the top layer's blending mode from Normal to Screen. This will leave you with a 3D image that you can make any final density or color corrections to.
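For those who prefer to script it, the same channel arithmetic can be sketched with NumPy. This is a hypothetical helper, not part of Coble's Photoshop workflow: zero the unwanted channels in each eye's frame, then apply the Screen blend formula. Because the surviving channels don't overlap, the blend simply takes red from the left eye and green/blue from the right.

```python
import numpy as np

def anaglyph(left, right):
    """Combine left/right RGB frames (uint8 arrays, same shape) into a
    red-cyan anaglyph, mirroring the Photoshop steps described above."""
    left = left.astype(np.float64)
    right = right.astype(np.float64)
    left[..., 1:] = 0   # left eye keeps only the red channel (looks red)
    right[..., 0] = 0   # right eye keeps green + blue (looks cyan)
    # Photoshop "Screen" blend: 255 - (255 - a) * (255 - b) / 255
    out = 255.0 - (255.0 - left) * (255.0 - right) / 255.0
    return out.round().astype(np.uint8)

# Tiny 1x1 demo frames to show the channel math:
L = np.array([[[200, 50, 80]]], dtype=np.uint8)
R = np.array([[[90, 120, 160]]], dtype=np.uint8)
print(anaglyph(L, R))   # [[[200 120 160]]] - red from L, green/blue from R
```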
What camera settings did you use?
I shot these images with two Nikon D3s. Both cameras were set to manual exposure mode with a shutter speed of 1/500th and an aperture of f/5.6. Given the dark nature of the canyon we were shooting in, I had to bump the ISO up to 1600 in order to shoot at a fast enough shutter speed to freeze the action.
I set the focus of both cameras by pre-focusing on a rock near the lip of the drop. Once set, I locked the focus off so that it wouldn’t slip during the sequence.
How did you mount the camera?
I mounted one of the cameras on a Manfrotto tripod with a Manfrotto 3265 joystick head. The second camera was mounted on a Manfrotto 244 Magic Arm, which was clamped to one leg of the tripod. This positioned both cameras on a relatively even plane, which would not have been achievable with two tripods, given the rocky terrain of the river bank.
Did you have to experiment to get the right distance between the camera bodies?
Determining the distance between the camera bodies was quite tough. I had to do a lot of research online, and eventually discovered that the ideal separation between the cameras is determined by how far away your subject is.
An easy way to determine the distance between cameras – this isn’t 100% accurate, but it’s pretty close – is to separate the cameras by 1/30th of the distance to the focal point of your frame. The further away the subject is, the further apart the cameras must be in order to achieve a 3D effect. For this location, I worked out that a distance of about 12 inches (30 centimeters) would provide enough separation to give the resulting image enough depth.
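The 1/30th rule of thumb is easy to sketch in code. Working it backwards, the 12-inch separation used here implies a subject roughly 30 feet (360 inches) from the cameras – an inference, not a figure stated in the interview:

```python
def stereo_base(subject_distance):
    """Rule-of-thumb stereo base: separate the cameras by about 1/30th
    of the distance to the focal point. Units are whatever you pass in."""
    return subject_distance / 30.0

# A subject ~360 inches (30 ft) away calls for the ~12 in used on this shoot.
print(stereo_base(360))   # 12.0
```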
When shooting 3D, the cameras have to be perfectly level – or at least on the same angle “off” of level – or the resulting image will cause the viewer to get a headache as their eyes try to focus on two non-corresponding horizons. To achieve a level frame on each camera, I secured my iPhone to each camera and used the iHandy Level App to zero in on the horizons.
Did the cameras have the same lenses?
Yes, both cameras had the same lenses on them (a Nikon 28-70mm f/2.8 AFS lens). Both cameras have to have an identical field of view for 3D to work, so both lenses need to be the same.
How did you trigger the cameras at the same time?
I triggered my cameras with two Nikon MC-30 remote trigger releases. I sandwiched the two releases together and pressed down on the triggers at the same time. I practiced this at home prior to the shoot in order to make sure that both cameras would fire at exactly the same time.
I experimented with a few other methods, including remote triggering with PocketWizards, but the MC-30 route gave me the most reliable results. Right now, I’m in the process of re-wiring the MC-30s for future shoots so that one trigger will fork off to each camera and eliminate the need to press two triggers simultaneously.
What challenges did you have on the day?
Given the inherent danger in running waterfalls, and not wanting to subject Tao to any more danger than necessary, we only had a few attempts to make the shoot work. To be safe, we used three cameras at all times: two cameras shooting 3D and one camera shooting an alternative angle, to ensure we had maximum coverage and differing vantage points.
The other challenge was the inability to check our results in the field. Given the remote location, the limited daylight we had to work with and the amount of time it takes to process a 3D image, we weren’t able to review the 3D image on location. All our research had to be done in advance, and we had to trust that what we were doing was accurate. In this digital age, it’s tough to go back to a time when you can’t check your work in the field and make adjustments.
Were you happy with the final results? What could be enhanced or experimented with?
In the end, the shoot went great. The 3D image turned out better than I could have hoped.
For future shoots, in addition to re-wiring my MC-30 remote triggers, I’m also trying to fabricate a sliding mounting bracket that allows two cameras to be mounted on the same tripod. This will allow me to make quick adjustments to the separation distance between the cameras. The method I employed on this shoot worked, but it wasn’t really efficient for making quick changes.
Additionally, I’d love to shoot at a location that has more depth to it. The location of this waterfall didn’t have a lot of separation between the foreground and background. I’d love to experiment with a location that offers more depth, as I think the resulting 3D image would turn out even better.
I'm excited to put this technology to use again on some more shoots in the near future. Stay tuned!
www.iancoble.com
Double-vision: shooting 3D with 2 cameras