

Monday, June 17, 2013

Film vs Digital: here's the reality.


We're drilling down to the essence of the difference between analogue and digital. Here's Phil Rhodes' take on this persistent question. It's a fascinating read, and pretty definitive on the subject.
I write hoping that the subject of this article – the move from film to digital, as if that were a complete description of such a complex situation – is no longer controversial enough to provoke a lot of angry mail. Some of what's to come in this discussion of what's been lost and gained will be familiar to most people who've shot both formats, but I hope to explore one particular point that's of waning importance now but did a lot to define the debate at the beginning of the changeover, and which has left echoes that are audible to this day. What I'm talking about is the difference between shooting on film and then postproducing it in a digital environment, and shooting digitally in the first place – a distinction which was, and is, frequently glossed over in discussions of the relative merits.
The first thing to be clear about is that, despite the nostalgia, film is far from free of faults. Grain is, despite its intermittent fashionability, noise. Being a fundamentally mechanical animation system, film is subject to instability and flicker, dirt, scratches, colour variations, and other problems, all of which are inevitably present to at least some tiny degree in all film. I'm going to overlook the corner case in which these faults are artistically sought after, as they're easy enough to simulate in either medium.

The worst of all worlds

Modern camera and handling techniques reduce these problems to low levels, but the operative point is that, whatever film's faults may be, once we digitise film-originated footage – for all the capability and flexibility that digital postproduction provides – we inevitably suffer the worst of both worlds: all of the problems of film plus all of the problems of digital. It's only because both worlds control these problems so well that digital intermediate was ever acceptable. Even so, when we scan film and treat it digitally, especially at 2K, it's reasonable to fear that we are often losing a noticeable amount of information.
This happens for the simple reason that digital images are made up of a grid of squares, which have almost nothing to do with the grains of the film image. It's necessary to have at least several pixels per film grain to completely capture the information of the film image, although the result still won't look as sharp as if that same digital image had been filled with digitally-originated information at its Nyquist limit. This oversampling requirement is an intrinsic problem in converting one imaging format into another that operates on a fundamentally different technology base, and it means that, on top of the worst-of-both-worlds issue, scanning 35mm film is an inefficient game at best. This is why 2K film scans are, notoriously, less sharp than 4K scans of the same material, even when the original frame doesn't appear to contain 2K of information.
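To make the oversampling point concrete, here's a toy sketch in Python (using NumPy) of a one-dimensional "film" signal made of random grain clumps, scanned at various pixel densities. All of the numbers are invented purely for illustration – the point is only that sampling at around one pixel per grain still loses a measurable amount of the pattern, while oversampling recovers progressively more of it.

import numpy as np

rng = np.random.default_rng(0)

# Toy one-dimensional "film" signal: grain clumps of random width and random
# density, modelled on a fine grid. All figures are invented for illustration.
widths = rng.integers(8, 17, size=2000)          # grain clump widths, fine-grid units
film = np.repeat(rng.random(len(widths)), widths)[:12000]
avg_grain = widths.mean()                        # roughly 12 fine-grid units

def scan(signal, pixels):
    # Simulate a scanner: average the signal into `pixels` equal bins,
    # then stretch back to the fine grid so it can be compared to the original.
    block = len(signal) // pixels
    coarse = signal[: block * pixels].reshape(pixels, block).mean(axis=1)
    return np.repeat(coarse, block)

for pixels_per_grain in (0.5, 1, 2, 4):
    pixels = int(len(film) / avg_grain * pixels_per_grain)
    rebuilt = scan(film, pixels)
    rms_error = np.sqrt(np.mean((film[: len(rebuilt)] - rebuilt) ** 2))
    print(f"{pixels_per_grain:>3} px per grain -> RMS error {rms_error:.3f}")

On a typical run the error falls steadily as the pixel count rises, which is the argument in miniature: grain positions don't line up with the pixel grid, so you need several pixels per grain before the scan stops throwing information away.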
If you originate digitally, of course, you sidestep grain, instability, and all the other problems, and you're likely to fill your digital images with more information than a film scanner could, pixel for pixel. It's possible to buy cameras that are quite competitive with some sorts of 35mm film for the cost of a couple of rolls of stock, not even including processing, and the idea that film can in any way be price-competitive is sheer fantasy.

What do we lose?

The question, though, is of what we lose. While the random pattern of film grain may be hard work for scanners, it does provide higher apparent resolution for a given number of what we might call image particles, because the grain pattern changes every frame. The colour gamut – the range of colours which can be represented – is almost invariably larger on film, even though some recent cameras, such as the Sony F65, have begun to address this issue.

And the big deal, for most people, is highlight handling; even people who aren't technically aware of this issue tend to identify poor highlight control as "not looking like a movie". Those of us who were, often through financial necessity, early adopters of digital technology became adept at lighting and exposing to protect highlights. Lighting control devices such as flags, scrims and graduated filters are key tools in this fight, which can become fiddly and time-consuming. The most modern cameras, such as the Alexa and even Sony's cheap but impressive 13-plus-stop FS700, do better here, but it's still an issue. Digital distortion – clipping – remains less desirable than analogue distortion. Much as we under-record digital audio even in a high-precision, high-dynamic-range 24-bit environment, we underexpose digital pictures to keep highlights away from clip. The limiting factor then becomes the noise inherent in the camera, just as audio equipment designers work very, very hard to make quiet microphone amplifiers. It's really the same issue.
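The audio parallel can be put into numbers. Here's a back-of-the-envelope sketch in Python, with figures invented purely for illustration (a hypothetical camera with 13 usable stops and six stops from middle grey to clip at nominal exposure), of how each stop of underexposure buys highlight headroom at the direct cost of how far shadow detail sits above the noise floor.

# Invented figures, purely to illustrate the trade-off described above.
TOTAL_STOPS = 13.0       # hypothetical usable dynamic range of the camera
GREY_TO_CLIP = 6.0       # stops from middle grey to clipping at nominal exposure

for underexposure in (0, 1, 2):
    headroom = GREY_TO_CLIP + underexposure      # stops of highlight protection
    shadow_margin = TOTAL_STOPS - headroom       # stops left between grey and noise
    print(f"Underexpose {underexposure} stop(s): "
          f"{headroom:.0f} stops of headroom, "
          f"{shadow_margin:.0f} stops down to the noise floor")

# The same book-keeping applies to audio: each stop is roughly 6 dB, which is
# why under-recording in a quiet 24-bit system is such cheap insurance.

Which is exactly the point about camera noise: the more you protect the highlights, the more you depend on a quiet sensor to keep the shadows usable.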

Digital costs us less...

Either way, though, what we might call direct digital origination costs us less in terms of actual image quality than the somewhat cumbersome insertion of a digital step into a photochemical production workflow. With cameras offering 4K and more of real resolution, and colour gamuts improving all the time, it's becoming increasingly unreasonable to complain about the total quality of a digitally-originated picture. Nobody with any sort of aesthetic sense is completely blind to the differences, although I find it difficult to identify with semantically ambiguous phrases such as "film looks alive, digital looks dead" that I've heard used, and by BSC members at that. Real purists will also complain about a lack of discipline on non-film sets. While I've seen failures of professional communication connected to the decision not to slate shots (when timecoded audio is being used on digital recorders), I'd argue that this is not specifically due to the technology. After all, it was always possible to end up in the same no-slate situation with AatonCode, and almost nobody ever chose to do that.
In general, I think part of the reason people complain about digital origination may simply be that, with such a huge library of mainly 35mm work out there, we're used to all of the greatest visual masterpieces – Blade Runner, etc – having been originated on film. Could it not be that we simply haven't had really good digital origination for long enough to produce a truly memorable masterpiece?