[Tig] NAB: Red digital cinema
glennchan at gmail.com
Tue Apr 24 20:05:33 BST 2007
I doubt that Red spends a lot of money on its marketing (haven't seen ads
anywhere), yet they've managed to sucker a lot of people into talking about
its camera. Including me. So here goes...
--Their demo footage was a short film directed by Peter Jackson, shot over
two days with their prototype camera and screened in 4k.
--The footage was graded on a Quantel Pablo. Of course, the problem with
evaluating graded footage is that the colorist's talent can flatter it, and
tastes differ (i.e. some people don't care for the CSI look; I think CSI
looks amazing).
--Once the Red team figures out a good way of posting stills and video at
good quality they will do so. There is a sample clip up at
http://www.reduser.net/forum/showthread.php?t=1883 and some stills on their
website. Though it looks like the JPEG/H.264 compression might be adding
artifacts not in the footage, and some of the stills have been taken down
(image defects not found in the original footage?). Be aware that the
stills carry color management that most browsers ignore; if you are on a
PC, I believe you can take them into Photoshop and assign them the
Adobe RGB profile.
--I wasn't impressed by the Sony 4k projector: black levels looked fairly
high, so the image doesn't have a nice snap/contrast to it.
--The aesthetic of the image does not look like film. It's probably better
to think of it as DSLR-like. Compared to film, it doesn't have the tonality
or the color bending that goes on... i.e. highlights don't de-saturate, and
there is little change in colors as you go from light to dark. No film
grain, very little noise. It's a different aesthetic, and whether you like
it is subjective (sort of like "warm" analog sound versus clean digital
sound).
--Exposure latitude looked pretty good. They shot aerial footage of a
dogfight. In these shots, there were details in the clouds (and you can see
a little sun shining through them). On the shadow side of the plane, you
can see detail in the markings.
--The footage projected looked very clean. Very little noise.
Other Red notes (non-screening footage):
--There were some previous greenscreen tests shot by David Stump. The
footage (with compression) looks like it keys very very well (better than
some 3CCD cameras in my opinion) and it looks like you can pull secondaries
on the image easily.
--Redcode compression looks visually lossless at normal viewing distances.
It damages image quality less than 4:2:2 chroma subsampling does (this is
comparing uncompressed vs. Redcode vs. 4:2:2; TIFFs of the first two were
available a while back). Even when you evaluate the footage WITH
compression (i.e. the 4k screening at NAB was Redcode-compressed), it is
very clean, and arguably freer of noise/artifacts than other uncompressed
acquisition (i.e. fast film stocks; looks cleaner than Dalsa footage IMO,
etc.).
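To make the 4:2:2 comparison concrete, here is a toy sketch (my own illustration, nothing from Red) of what 4:2:2 chroma subsampling throws away: luma keeps full horizontal resolution while the chroma channels keep only every other sample.

```python
# Toy sketch of 4:2:2 chroma subsampling -- not Redcode, just an
# illustration of the information 4:2:2 discards.
import numpy as np

def subsample_422(chroma):
    """Keep one chroma sample per horizontal pair of pixels."""
    return chroma[:, ::2]

def upsample_422(chroma_sub):
    """Reconstruct by repeating each kept chroma sample twice."""
    return np.repeat(chroma_sub, 2, axis=1)

cb = np.array([[10., 20., 30., 40.]])      # one row of chroma samples
round_trip = upsample_422(subsample_422(cb))
# The fine horizontal chroma detail (the 20 and 40) is gone after the
# round trip; wavelet compression like Redcode degrades more gracefully.
```

Real decoders interpolate rather than repeat samples, but the lost detail cannot be recovered either way.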
--The raw images are blurry. There needs to be blurry glass (an optical
low-pass filter) over the sensor to avoid aliasing. It is probably
appropriate to add sharpening. Or, on downsampling to 2K/HD, it would be
appropriate to choose algorithms that favor sharpness over lack of aliasing
(since the blurriness already reduces aliasing artifacts on downsampling).
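A minimal sketch of that idea, assuming a single gray channel: box-downsample 4k material to 2k, then apply an unsharp mask to restore some of the snap the OLPF took away. The kernel choice and sharpening amount here are my own illustrative assumptions, not anything Red has specified.

```python
# Sketch: downsample, then sharpen -- viable because the OLPF blur means
# the downsample has little aliasing to amplify.
import numpy as np

def downsample_2x(img):
    """Simple 2x2 box-average downsample (even dimensions assumed)."""
    return 0.25 * (img[::2, ::2] + img[1::2, ::2] +
                   img[::2, 1::2] + img[1::2, 1::2])

def unsharp(img, amount=0.5):
    """Unsharp mask using a 3x3 box blur as the low-pass estimate."""
    pad = np.pad(img, 1, mode='edge')
    blur = sum(pad[y:y + img.shape[0], x:x + img.shape[1]]
               for y in range(3) for x in range(3)) / 9.0
    return img + amount * (img - blur)

img4k = np.random.rand(8, 8)               # stand-in for a 4k frame
img2k = unsharp(downsample_2x(img4k))      # sharpened 2k result
```

In practice you would use a windowed-sinc or Lanczos resampler and tune the sharpening, but the order of operations is the point.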
--A good and easy-to-understand crash course on sampling theory is
--They are very much aiming for a data-based workflow. You push data files
around, as opposed to sending material over SDI or sending tapes around.
--The workflow for Final Cut looks good as you can edit the footage without
transcoding or log&capture (goodbye tape shuttling).
--It does not look like it will slide right into existing workflows. i.e.
there is no Red VTR that spits out SDI. (Virtual VTRs and DDRs might
provide a solution to that??) You may need to adapt to a data workflow
(which may or may not be more efficient).
--As with any digital acquisition, you will need to figure out archival.
Also talk to your completion bond people.
--Redcine does not seem analogous to telecine: it does not do 3:2
pulldown, you don't transfer to tape, and it doesn't handle audio sync
(??). i.e. if you don't record any audio into the camera, I don't believe
Redcine facilitates syncing audio. If you need to offline/online in a
59.94i timeline, the lack of 3:2 pulldown may be an issue??
--More workflow details will likely emerge when people get their hands on
the software tools (Red's codecs, the Redcine format-conversion tool). A lot
of the workflow details may change as they announce new features and
whatnot. It does look like the Red team is paying attention to workflow and
listening to feedback.
--The Redcode RAW recording mode records RAW data with wavelet compression.
This is very similar in concept to shooting RAW on a DSLR. Image processing
tasks like white balance are added in Redcine (the RAW conversion app).
--Recall the magenta filtering / green cast debate with the Viper camera.
To white balance an image, you multiply the channels by some gain (above 1
for at least one channel). In one or two of the channels, you now end up
with values that exceed 100% white. A simple white balance algorithm will
clip these values, throwing away dynamic range.
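A minimal sketch of that naive clip, with made-up gain values for illustration: multiplying a channel by a gain above 1 pushes near-white values past 100%, and a hard clip discards those highlight gradations.

```python
# Sketch of a naive white balance: per-channel gain, then a hard clip.
import numpy as np

def white_balance_clip(rgb, gains):
    """Apply per-channel gains, then clip at 1.0 (100% white)."""
    return np.clip(rgb * gains, 0.0, 1.0)

# A tungsten-ish capture: blue is low, so it gets the largest gain.
pixel = np.array([0.9, 0.8, 0.6])
gains = np.array([1.0, 1.1, 2.0])          # blue gain pushes 0.6 -> 1.2
balanced = white_balance_clip(pixel, gains)
# Blue clips at 1.0: every blue value above 0.5 now maps to the same
# white, i.e. that highlight dynamic range is thrown away.
```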
--Adding optical filtering will white balance the image, so you don't need
to throw away dynamic range in that fashion.
--However, Redcine adds a new twist to this issue. Suppose you didn't add
optical filtering to white balance the image. Instead of clipping the
channels when you white balance, you can extrapolate the missing
information from the non-clipping channels. Essentially, you make an
educated guess / you make stuff up. This lets you create fake dynamic range
for ~1-2 stops. This is what raw conversion apps (i.e. in Photoshop) do
with their highlight recovery feature.
--Of course this trick won't work in all situations; in those cases, you
can dial back the highlight recovery to whatever works. If highlight
recovery isn't working at all, you can just clip the channels like before.
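To show the extrapolation idea, here is a toy version (my own simplification, not Red's or Adobe's actual algorithm): where one channel has clipped, guess its true value by assuming the channel ratios of an unclipped reference pixel of the same subject still hold.

```python
# Toy highlight recovery: extrapolate a clipped channel from the
# channels that survived, using ratios from an unclipped reference.
import numpy as np

def recover_clipped(pixel, ref, idx=2):
    """Extrapolate channel `idx` of `pixel` (clipped) by scaling the
    reference pixel's value to this pixel's brightness."""
    others = [i for i in range(3) if i != idx]
    scale = pixel[others].mean() / ref[others].mean()
    guess = ref[idx] * scale
    out = pixel.copy()
    out[idx] = max(pixel[idx], guess)      # only ever extend upward
    return out

ref   = np.array([0.50, 0.40, 0.55])   # unclipped pixel of the same cloud
pixel = np.array([0.95, 0.90, 1.00])   # blue hit the clip point
recovered = recover_clipped(pixel, ref)
# recovered[2] is now above 1.0 -- an educated guess at the detail the
# clip destroyed, which is exactly the "made up" dynamic range.
```

When the assumption fails (e.g. the highlight really is a saturated color), the guess is wrong, which is why the recovery amount needs to be dialable.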
--Redcine will also do white point adaptation (much like DSLRs do). i.e.
Recall that in consumer CRTs, the white point is set very high/blue
(9300K-ish) to make the image look brighter. This makes reds appear less
saturated, so to compensate, a lot of consumer CRTs over-saturate reds and
tweak their hue.
--So in a white point adaptation algorithm, you adjust the colors to make
them look right: you adapt from your scene's white point (3200K, 5600K,
daylight, etc.) to the display white point (i.e. D65, P3, 5400K).
--Bruce Lindbloom's website has some information on the math involved.
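For a flavor of that math, here is a sketch of von Kries-style chromatic adaptation using the Bradford matrix (the kind of transform documented on Lindbloom's site), mapping XYZ colors from a D50 white to a D65 white; the matrix and white-point values are standard published constants.

```python
# White point adaptation sketch: Bradford-style von Kries transform.
import numpy as np

BRADFORD = np.array([[ 0.8951,  0.2664, -0.1614],
                     [-0.7502,  1.7135,  0.0367],
                     [ 0.0389, -0.0685,  1.0296]])

D50 = np.array([0.96422, 1.00000, 0.82521])   # XYZ of D50 white
D65 = np.array([0.95047, 1.00000, 1.08883])   # XYZ of D65 white

def adapt(xyz, src=D50, dst=D65):
    """Scale cone-like responses by dst/src white, then return to XYZ."""
    cone_src = BRADFORD @ src
    cone_dst = BRADFORD @ dst
    M = np.linalg.inv(BRADFORD) @ np.diag(cone_dst / cone_src) @ BRADFORD
    return M @ xyz
```

By construction, the source white itself lands exactly on the destination white, and every other color shifts along with it.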
--I personally disagree with this approach. You want images to look
consistent with each other, not to what you saw on set. But I can't say
that I've tried this or seen pictures myself. This is likely not a big deal
as Redcine will likely offer you the flexibility of turning this on/off; it
shouldn't hurt to have options.
Currently pursuing software development