[Tig] Red Camera: Pro & Con

glenn chan glennchan at gmail.com
Thu Feb 19 21:33:42 GMT 2009


My two Canadian cents...

As far as evaluating cameras goes, I don't think that we should theorize
based on specs.  Oftentimes, people will cherry-pick specs (or technical
theories/models) to support a particular position.  This is often the case
when one manufacturer is disparaging another.  I lean strongly towards the
"just look at the pictures" approach (especially in blind tests, to rule
out psychological effects such as price discrimination, 'furniture effects'
on clients, etc.).

--Perceived Sharpness--
Here's a good article on how we perceive sharpness:
http://www.cambridgeincolour.com/tutorials/sharpness.htm
Resolution of the imaging system, in my opinion, has a very small effect on
perceived sharpness.  I think the trap here is that people do not realize
just how strong sharpness tricks are.  Try resizing an image in Photoshop to
half size, then back to original size using the Bicubic Sharper algorithm.
The resulting image usually looks sharper, even though you've thrown away a
lot of information.
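
As a rough illustration (Photoshop's Bicubic Sharper is proprietary, so
this numpy sketch stands in for it with a box downsample, nearest-neighbor
upsample, and a simple unsharp mask):

```python
import numpy as np

def halve(img):
    """Downsample by averaging 2x2 blocks (throws away high frequencies)."""
    h, w = img.shape
    return img[:h//2*2, :w//2*2].reshape(h//2, 2, w//2, 2).mean(axis=(1, 3))

def double(img):
    """Upsample by pixel repetition (adds no information back)."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def unsharp_mask(img, amount=1.0):
    """Sharpen by adding back the difference from a 3x3 box blur."""
    p = np.pad(img, 1, mode='edge')
    blur = sum(p[i:i+img.shape[0], j:j+img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0
    return np.clip(img + amount * (img - blur), 0.0, 1.0)

# A soft step edge standing in for picture detail.
x = np.linspace(0, 1, 64)
img = np.tile(1.0 / (1.0 + np.exp(-(x - 0.5) * 20)), (64, 1))

degraded = double(halve(img))        # half the information is gone...
sharpened = unsharp_mask(degraded)   # ...yet the edge now "pops" more

# Edge contrast measured as the maximum horizontal gradient:
print(np.abs(np.diff(img)).max(), np.abs(np.diff(sharpened)).max())
```

The degraded-then-sharpened image has a steeper edge than the original,
which is exactly the perceived-sharpness trick: local contrast up,
information down.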

As far as measuring camera systems goes, I would suggest making a test
pattern similar to an eye chart, with text of decreasing sizes.  Then point
the camera at the test pattern and shoot it, with some slow pans across.
With this measurement, you want to see how well you can resolve the finest
text, and sharpness tricks won't really work here.  This would approximate
the most taxing real-world scenarios (unlike zone plates).
Or, you can point the camera at a zone plate and figure out the resolution
that way.  From what I've seen, the 4K Bayer cameras tend to have a lot more
resolution than the "HD" cameras (i.e. ones that record to HD format).
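
For reference, a zone plate is easy to generate yourself; a minimal numpy
sketch of the standard sin(k*r^2) pattern (size and kmax are arbitrary
choices here):

```python
import numpy as np

def zone_plate(size=512, kmax=np.pi):
    """Circular zone plate sin(k*r^2): its instantaneous spatial frequency
    rises linearly from the center outward, sweeping every frequency up to
    kmax radians/pixel (= Nyquist when kmax = pi) at the edge."""
    y, x = np.mgrid[-1:1:size*1j, -1:1:size*1j]
    r2 = x**2 + y**2
    # Phase = kmax*size/4 * r^2 gives frequency kmax at radius 1.
    return np.sin(0.25 * kmax * size * r2)

chart = zone_plate(512)
# Scale [-1, 1] to 8-bit levels for display or printing.
img8 = np.round((chart + 1) * 127.5).astype(np.uint8)
```

Shooting this chart makes aliasing obvious: false low-frequency rings
appear wherever the camera can't resolve the local frequency.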

--Sensor Arrangements--
The main types of sensor arrangements are:

Bayer (RGGB); most common kind of Bayer
Bayer RGB stripe
3-chip pixel shifted (different variations)
3-chip co-sited
Foveon

As far as Bayer goes, its effective performance depends on the demosaic
algorithm employed.  Lower-quality algorithms will create mazing and
zippering artifacts, so that is sometimes something to watch out for.  (I
don't know much about RGB stripe but I would expect it to also have
problems.)
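
To make the demosaic dependence concrete, here is a minimal sketch of
bilinear demosaicing for an RGGB mosaic -- the kind of low-quality
algorithm that produces the artifacts above.  The helper names are my own:

```python
import numpy as np

def conv3(img, kernel):
    """3x3 convolution; 'reflect' padding preserves the Bayer parity."""
    p = np.pad(img, 1, mode='reflect')
    h, w = img.shape
    return sum(kernel[i, j] * p[i:i+h, j:j+w]
               for i in range(3) for j in range(3))

def demosaic_bilinear(mosaic):
    """Naive bilinear demosaic of an RGGB mosaic (H, W) -> (H, W, 3)."""
    h, w = mosaic.shape
    y, x = np.mgrid[0:h, 0:w]
    r_mask = ((y % 2 == 0) & (x % 2 == 0)).astype(float)
    b_mask = ((y % 2 == 1) & (x % 2 == 1)).astype(float)
    g_mask = 1.0 - r_mask - b_mask
    # Standard bilinear interpolation kernels for the masked channels.
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    return np.dstack([conv3(mosaic * r_mask, k_rb),
                      conv3(mosaic * g_mask, k_g),
                      conv3(mosaic * b_mask, k_rb)])

# A uniform grey scene demosaics to neutral grey (no color error)...
flat = demosaic_bilinear(np.full((8, 8), 0.5))
# ...but a sharp black/white edge picks up false color, because R, G,
# and B are not sampled at the same sites.
edge = np.zeros((8, 8)); edge[:, 4:] = 1.0
rgb = demosaic_bilinear(edge)
```

Running this on the edge image shows pixels where R, G, and B disagree
strongly -- the color fringing on black&white charts discussed below.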

All the Bayer systems have pixels that aren't co-sited or (effectively)
stacked on top of each other.  This can cause artifacts on black&white test
charts, where the resulting image shows color artifacts / misaligned color
(black and white test charts should, of course, appear black and white
without color).  3-chip systems tend to exhibit similar effects because the
optics of the entire system (sensor + prism + optics) aren't perfect.  Take
a look at the comparison images on cinematography.net and you'll see this.
So effectively, all camera systems have color artifacts... the question now
becomes how much.
Bayer systems (and 3-chip pixel shifted) can do terribly on zone plates in
terms of color artifacts, though zone plates are highly atypical of real
world scenes.  In other situations, you might find that a particular Bayer
system does better than a particular 3-chip system, so it just depends on
your taste in artifacts and what you're shooting.

Foveon suffers from the problem of silicon being a poor color filter, so you
need significant noise reduction to deal with the color (which reduces
resolution...).

All these systems suffer from sampling artifacts.  You have to pick your
poison: at least one of (A) loss of resolution (B) aliasing (C) ringing
artifacts.  Ringing artifacts generally look the least bad when it comes to
image capture.  To get ringing artifacts, you have to oversample relative to
your final output.  This is why many/most 3-chip systems employ pixel
shifting.  So if you want sharp HD images with low aliasing (and lots of
ringing), then you need something like a 4K Bayer sensor, a 4K Foveon, 3
1920x1080 chips pixel shifted, 3 4K chips co-sited, etc.
The amount of optical low pass filtering affects the tradeoff between A and
B.  If you omit the OLPF, then you'd have high resolution and very high
aliasing (this wouldn't look very good).  So you don't really want 1920x1080
cameras with very crisp (i.e. high-amplitude) 1920 lines of resolution.
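
A 1-D sketch of the A-versus-B tradeoff: decimating fine detail with no
low-pass filter (no OLPF) folds it back as a false coarse pattern, while
pre-filtering trades the aliasing for lost resolution (the numbers here
are arbitrary):

```python
import numpy as np

fs = 100.0                           # original sampling rate
t = np.arange(400) / fs
fine = np.sin(2 * np.pi * 40 * t)    # detail at 40 cycles/unit

# Decimate by 4 -> new rate 25, Nyquist 12.5: the 40-cycle detail cannot
# be represented; it must either alias or be filtered away.
naive = fine[::4]                            # no low-pass filter
filtered = fine.reshape(-1, 4).mean(axis=1)  # crude box low-pass first

freqs = np.fft.rfftfreq(naive.size, d=4 / fs)
alias_f = freqs[np.argmax(np.abs(np.fft.rfft(naive)))]
print(alias_f)   # the detail reappears as a false low frequency

# The pre-filtered version has lost the detail instead of aliasing it:
print(np.abs(naive).max(), "vs", np.abs(filtered).max())
```

The optical low-pass filter plays the role of the box average here: it
throws resolution away on purpose so the sensor doesn't manufacture
patterns that were never in the scene.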

The bottom line is that all the sensor arrangements have compromises.  There
are also compromises/tradeoffs like storage format, real-time processing
(hard to do high-quality RGGB Bayer demosaic in real-time), depth of field,
etc.

--Dynamic Range--
* Lower noise = higher dynamic range.  What you're really looking for is
lower noise in the camera.
* Higher depth of field can give the illusion of smoother rolloff in
highlights.  But this isn't actually capturing a higher dynamic range.
* Video knee and the s-curve transfer characteristic of film (or applying
s-shaped curves in color correction) can give the illusion of smoother
rolloff in highlights.
* You may be throwing away dynamic range in the image processing.
* You can throw away dynamic range in color correction by increasing the
gain and decreasing the pedestal in color correction.  Doing so will
increase contrast but also throw away dynamic range (shadows and highlights
will get clipped off), whereas applying an s-shaped curve won't clip detail
off.
You should do what looks good, but from a technical perspective (which
probably doesn't matter), this kind of color correction throws away dynamic
range.
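
A quick numpy sketch of the difference (smoothstep stands in for an
s-shaped curve; the gain and pedestal values are arbitrary):

```python
import numpy as np

ramp = np.linspace(0.0, 1.0, 101)   # a grey ramp standing in for scene tones

# Contrast via gain + pedestal: steepen, then clip to the legal range.
gained = np.clip((ramp - 0.1) * 1.5, 0.0, 1.0)

# Contrast via an s-shaped curve (smoothstep as a stand-in).
scurved = ramp * ramp * (3.0 - 2.0 * ramp)

# The gain version flattens the ends: distinct input tones collapse to
# identical outputs, i.e. shadow and highlight detail is clipped off.
print(np.unique(gained).size, "distinct tones after gain/pedestal")
print(np.unique(scurved).size, "distinct tones after s-curve")
```

The s-curve stays strictly increasing, so every input tone maps to a
distinct output tone; the gain/pedestal version does not.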

* RAW processing:  Recording the image without white balance applied can
increase dynamic range.  White balance (done properly*) multiplies the
values of 1 or 2 of R, G, and B until R=G=B for objects that should be
achromatic.  *Some color correctors, like the 3-way CC in FCP, do not do
this, and don't really work.
You have to multiply these values by some number >= 1.0.  This will cause
some values to go above white level, and normally you will clip these values
off.  If you apply WB twice (e.g. you didn't nail it in the camera), then
this is even worse.  RAW recording+processing avoids double white balance.
Instead of clipping those values off, you can apply highlight recovery
algorithms to guess the missing data.  This works in most situations since
the dominant light source is a single color temperature.  It doesn't fully
work when you have lighting of different color temperatures (but it can look
ok).  A lot of the RAW processors do this for still images.  Red's software
tools do it for their camera.  The concept (recording without most image
processing, and highlight recovery in post) could be applied to other
cameras.
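
A toy sketch of the idea (this is not Red's actual algorithm; the gains
and the trivial "recovery" step are placeholders):

```python
import numpy as np

# Toy raw pixels (R, G, B), linear light, sensor clip level = 1.0.
raw = np.array([[0.5, 0.6, 0.4],    # midtone
                [0.8, 0.9, 0.7],    # bright
                [0.9, 1.0, 0.8]])   # near clipping

wb_gains = np.array([1.3, 1.0, 1.6])   # tungsten-ish white balance, all >= 1.0

balanced = raw * wb_gains
# A camera that bakes in white balance clips here, and the excess is
# gone for good:
baked = np.clip(balanced, 0.0, 1.0)

# With raw recording, WB is applied in post at float precision, so the
# above-1.0 values still exist and can be remapped (here just rescaled,
# standing in for a real highlight-recovery algorithm).
recovered = balanced / balanced.max()

print(balanced.max() > 1.0)   # True: WB pushed some values past clip
```

In the baked version the brightest pixel collapses to flat white; in the
raw version its channel ratios survive and can be rolled off gracefully.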

Glenn Chan
Colormancer.com
Toronto, Canada


