Notes on Grain, What we See
[Tig] Fourier, depth of field, focus etc
- Peter Swinson, May 16 2007:
At the risk of going way back to where we started: out-of-focus images need more bits than in-focus images if they are clean video or highly degrained film transfers, to avoid banding showing. Film images, even out of focus, scanned at a resolution that reproduces the grain can get away with much, much lower bit depths, as the granularity dithers the smoothness. I did have examples posted in the tech area but they seem to have disappeared; I will re-post via Rob.
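Swinson's point — that grain acts as a dither, letting a coarsely quantized gradient read as smooth — can be sketched numerically. The following is a minimal illustration, not anything from the original posts: it assumes a 4-bit quantizer, a smooth ramp standing in for an out-of-focus region, and triangular (TPDF) noise of about one quantization step standing in for grain; the local averaging is a crude stand-in for the eye integrating over grain.

```python
import numpy as np

rng = np.random.default_rng(0)

# A smooth gradient, standing in for an out-of-focus image region.
gradient = np.linspace(0.0, 1.0, 4096)

def quantize(signal, bits):
    """Quantize a signal in [0, 1] to the given bit depth."""
    levels = (1 << bits) - 1
    return np.clip(np.round(signal * levels), 0, levels) / levels

BITS = 4
step = 1.0 / ((1 << BITS) - 1)  # one quantization step

# Straight 4-bit quantization: only 16 levels, i.e. visible banding.
banded = quantize(gradient, BITS)

# Grain-like dither: triangular (TPDF) noise, about one step peak-to-peak,
# added BEFORE quantization -- the role film grain plays in the post above.
dither = (rng.uniform(-step / 2, step / 2, gradient.shape)
          + rng.uniform(-step / 2, step / 2, gradient.shape))
dithered = quantize(gradient + dither, BITS)

def worst_block_error(signal, block=64):
    """Largest deviation of the local (block) mean from the true gradient,
    a crude model of the eye averaging over grain."""
    err = (signal - gradient).reshape(-1, block).mean(axis=1)
    return float(np.abs(err).max())

# The dithered version trades banding for noise: its local average
# tracks the original gradient far more closely than the banded one.
print(worst_block_error(banded), worst_block_error(dithered))
```

The design choice is the classic one from audio and image dithering: TPDF noise makes the average quantization error independent of the input, so local averaging recovers intermediate levels that the 16 quantizer steps cannot represent directly.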
While on the subject, I always maintain that the large cinema screen offers a different experience from even 70" projection or plasma/LCD TV, due to the distance at which the viewer sits. Even with a 70" screen our eyes are not focusing at infinity, whereas in a cinema they effectively are.
I am told that as babies we rely 100% on stereoscopic vision to judge distance; as we grow, however, we quickly come to rely much less on stereoscopy and much more on visual cues, such as the relative sizes of known objects AND differential focus. Could it be that offering differential focus as a depth cue in the projected image relaxes our visual system, sparing it from having to pull and push focus or converge the eyes while they are fully relaxed in a cinema? The visual system will remain somewhat confused with closer images, say screenings up to 150", as our eyes are constantly pulling focus and converging to a non-infinite distance. Ah, what about those wearing glasses or contact lenses? I can't answer that one! Our visual system evolves only slowly; I guess it has changed little in 1,000 years.
Regarding viewing angles and what we really see, I have taken the liberty of sending Rob a very interesting image that I found on a website I can no longer locate. I hope the owners of the image (a university, I seem to remember) will forgive me for placing it here. It shows what we really see at any instant vs what we think we see.
Most of what we think we are seeing at any instant is a stored memory of what we saw when our eyes subconsciously looked around the area a few seconds to minutes previously.
Try this to show that our color vision is narrow but our color memory is good. Ask someone to stare straight ahead while you slowly walk past them on one side, from ahead to behind, wearing or carrying something with a distinct color. Ask them to tell you when they can no longer see the color of the object. Now, without telling them, substitute another object of similar size, shape and luminance (just a piece of card will do) but a completely different color.
Now walk slowly from behind them to ahead, asking them to tell you when they can see the color of the object again (tell them it is the same object). If it works, you will find that as you walk to behind them they only lose the color at their very peripheral vision. However, as you walk to ahead of them with the differently colored object, they will claim to see the object with the old color until you are well ahead of them.
This indicates that our visual memory provides the "large image" while our color vision is very narrow. Even our monochrome vision covers less than we think. Look at some print, stare at just one word and, without moving your eyes at all (a difficult thing to do), force yourself to try to read the adjacent words!
I suppose I should give up all this stuff and concentrate on BRRE!
- Adrian Thomas Wed May 16 06:02:42 PDT 2007
...but any frequency-space transform is going to show blurred images as having only lower-frequency components. Bit depth would normally be constant, so the reduction in higher-frequency components and the inevitable decrease in intra-frame differences will substantially reduce data rates. Any type of Huffman coding will also likely show a much-reduced data rate, owing to the generally low contrast of blurred images. As for contouring, you should only get it if you were going to get it anyway, i.e. your dynamic resolution is too low.
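Thomas's two claims — that blur leaves only low-frequency components, and that the resulting low contrast helps entropy coders like Huffman — can both be checked with a small numerical sketch. This is my illustration, not anything from the thread: it assumes a 1-D "scanline" of uniform noise as the sharp signal, a 15-tap box filter as a crude defocus blur, and uses zeroth-order entropy as a lower bound on what a Huffman-style coder could achieve.

```python
import numpy as np

rng = np.random.default_rng(1)

# A 1-D "scanline" of uniform noise: full-bandwidth, high-contrast detail.
sharp = rng.uniform(0.0, 1.0, 1024)

# Blur with a simple 15-tap box filter, a crude stand-in for defocus.
kernel = np.ones(15) / 15
blurred = np.convolve(sharp, kernel, mode="same")

def high_freq_fraction(signal):
    """Fraction of spectral energy above one quarter of the sample rate."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
    cut = len(spectrum) // 2
    return float(spectrum[cut:].sum() / spectrum.sum())

def entropy_bits(signal, bits=8):
    """Zeroth-order entropy of the quantized samples, in bits per sample:
    a floor on the average code length of any Huffman-style coder."""
    q = np.round(signal * ((1 << bits) - 1)).astype(int)
    counts = np.bincount(q)
    p = counts[counts > 0] / q.size
    return float(-(p * np.log2(p)).sum())

# Blurring both strips the high frequencies and, by shrinking the contrast
# (the spread of sample values), lowers the per-sample entropy.
print(high_freq_fraction(sharp), high_freq_fraction(blurred))
print(entropy_bits(sharp), entropy_bits(blurred))
```

The box filter's frequency response falls off steeply past its first null, so almost no energy survives in the upper half of the spectrum; and because blurring narrows the histogram of sample values, fewer quantizer codes carry most of the probability mass, which is exactly the condition under which variable-length coding pays off.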
-- Adrian Thomas Automatic Television 35 Bedfordbury London WC2N 4DU www.autotv.co.uk --
--Rob Lingelbach 22:00, 16 June 2009 (UTC)