  #52  
Old February 24th 08, 12:59 AM posted to sci.space.shuttle,sci.space.history,sci.space.policy,sci.space.station
BradGuth
Default Great missions STS-122 & Expedition 16

My goodness, aren't we chock-full of damage-control infomercial
crapola, of such fancy mainstream words upon words and eye-candy hype,
yet oddly still managing not to address what I'm after. Once again,
you've intentionally excluded the focus or intent of what truth there
is to behold about CCD camera DR and FWC saturation that isn't getting
utilized, because it would show off too much of the cold hard truths
about items other than Earth that'll unavoidably show up in those
images (unless artificially removed by those in charge of snookering
and dumbfounding humanity for all it's worth, and then some).

Pictures from space via the ISS, somewhat old images of Earth by way
of their Kodak DCS760 camera with its 12-bit limited DR (dynamic
range), 3032 x 2008 pixels, and a sensor format area of 27.65mm x
18.43mm, i.e. 9+ micron pixels.
http://www.nasa.gov/vision/universe/...es/aurora.html
Auroras Dancing in the Night 02.12.04
Aboard the International Space Station, Expedition 6 Science Officer
Don Pettit offers a unique perspective on auroras.
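The pixel-pitch figure quoted above checks out arithmetically; a quick sketch using only the numbers given in the post:

```python
# Back-of-envelope check of the Kodak DCS760 figures quoted above.
sensor_w_mm = 27.65            # sensor width from the post
px_across = 3032               # horizontal pixel count

pitch_um = sensor_w_mm / px_across * 1000
print(f"pixel pitch ~ {pitch_um:.2f} microns")   # ~ 9.12, i.e. the "9+ micron" figure

# Tonal levels per channel at each ADC bit depth:
print(2**12, 2**16)            # 4096 levels at 12-bit vs 65536 at 16-bit
```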


Seems perfectly good enough eye-candy, whereas truly scientific CCD
cameras of the same era, with nearly that pixel size and 16-bit DR,
should have been the norm for anything on the ISS associated with
their EVC, or instead of the DCS760. Even having to use monochrome
sensors with 3 or 4 specific color-spectrum filters for creating the
composite colour renditions would be a whole lot better science,
although full-colour CCD renditions from IR to UV at 16-bit DR can't
be all that insurmountable, especially with larger-format CCDs having
starlight sensitivity and frame scans fast enough for low-noise video
capture applications. Otherwise, with commercial video equipment, if
need be you can always incorporate three or even four individual CCDs
per color video camera.
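The monochrome-plus-filters compositing described above is mechanically simple; a toy sketch (the 2x2 frames and their values are invented for illustration, and a real pipeline would also register and calibrate the frames):

```python
# Combine three monochrome exposures, taken through R, G, and B filters,
# into one color image.  Frames here are tiny 2x2 toy arrays of 0..1 values.
red   = [[0.9, 0.1], [0.2, 0.3]]
green = [[0.8, 0.1], [0.2, 0.7]]
blue  = [[0.1, 0.9], [0.2, 0.5]]

composite = [
    [(r, g, b) for r, g, b in zip(row_r, row_g, row_b)]
    for row_r, row_g, row_b in zip(red, green, blue)
]
print(composite[0][0])   # -> (0.9, 0.8, 0.1): a yellowish pixel
```

A three-CCD video camera does the same thing optically, with a beam splitter feeding one sensor per channel instead of sequential filter-wheel exposures.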

Obviously our MESSENGER mission, using CCDs and mirror optics, is yet
another prime and spendy example of what not to do, because their
scientific composite color images were absolutely pathetic, and they
could have used another 10X in their telephoto capability.

Also remember: the results are similar to over-saturating film, except
that saturated CCD pixels offer vastly superior spectrum bandwidth and
can have their FWC (full well capacity) exceeded without harm, leaving
the other, less saturated pixels available to better record whatever's
dim or at the far +/- ends of the spectrum with much greater ease than
film, because those CCDs excel in DR as well as in their scope of
IR/UV spectrum detection.
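As a sketch of how full well capacity and noise set a CCD's dynamic range (the well depth and read-noise figures below are invented illustrations, not any particular camera's specs):

```python
import math

# Invented, illustrative sensor figures (not from any datasheet):
full_well_e = 60000    # full well capacity, electrons
read_noise_e = 15      # read-noise floor, electrons RMS

# Dynamic range is conventionally the ratio of the largest recordable
# signal (the full well) to the noise floor, quoted in dB or in stops.
ratio = full_well_e / read_noise_e
print(f"DR = {20 * math.log10(ratio):.0f} dB = {math.log2(ratio):.1f} stops")
# -> "DR = 72 dB = 12.0 stops": a well/noise ratio like this is about what
# a 12-bit ADC can encode; deeper wells or lower noise need more bits.
```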

Obviously, such impressive eye-candy is why most folks are so easily
misled into thinking we're it, the one and only intelligent species
within this universe, and as such we're so often being fooled into
only detecting whatever's within the visual spectrum.

BTW, notice the 12-bit limited hue/color saturation, and that it
easily includes them pesky stars above Earth, and notice that Earth
isn't even the least bit over-saturated, is it.
http://www.nasa.gov/vision/universe/...es/aurora.html
Too bad the original 18 MB image files aren't there to look at, as
those images would be absolutely terrific.

I bet you and others of your silly infowar, eye-candy-spewing kind
don't even get the drift of what this sort of Kodak DCS760 digital
camera dynamic range represents. Now try to imagine what a 16-bit CCD
camera w/o optical spectrum limitations would accomplish, or even
their existing 12-bit camera, if it had simply allowed for greater FWC
saturation (meaning a longer exposure and/or a lower optical f-stop).
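That parenthetical point, that greater FWC saturation just means a longer exposure and/or a lower f-stop, is ordinary stop arithmetic; a sketch (the example shutter times and f-numbers are arbitrary):

```python
import math

def stops_gained(t_old, t_new, n_old, n_new):
    """Exposure change, in stops, going from (t_old, f/n_old) to (t_new, f/n_new).

    Light gathered scales linearly with shutter time and inversely with
    the square of the f-number, and each stop is a doubling.
    """
    return math.log2((t_new / t_old) * (n_old / n_new) ** 2)

# e.g. holding the shutter open ~4x longer AND opening up from f/8 to f/4:
print(f"{stops_gained(1/250, 1/60, 8, 4):+.2f} stops")   # -> "+4.06 stops"
```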
 - Brad Guth


columbiaaccidentinvestigation wrote:
On Feb 22, 8:52 pm, BradGuth wrote: "Another
interesting second- or third-hand rant, even though you are intent upon
playing word games, rather than offering us science that can be peer
replicated as to what I had previously specified."

Laughing at your quoting of numbers like "spec. candy", as if you
really understood what the numbers mean, much less that the CCD is but
one component of the digital imaging system. OK, let's try it this
way: humans are either going to view the image in print form or on a
monitor, both of which have their own dynamic ranges from black (min)
to white (max) that are independent of the CCD's dynamic range. Now,
venturing into the area of human color perception is necessary if
someone wants to reproduce what the human "sees", as the sequence goes
from humans capturing the image, to image processing, and finally to
viewing the image on paper or on a monitor. Even if you take humans
out of the image-capture part of the equation, you still have to have
the image processing for viewing, meaning the CCD's capabilities are
just part of the system, just like the silver halide crystals on film
are but one part of the sequence that reproduces what the "eye sees".

So I think it's funny that you seem to be stuck on comparing a
device's capabilities against the human visual system without
understanding the latter. So actually I did answer your question; you
just did not understand the answer, nor how it directly relates to
your constant ranting about dynamic range. That is why you go from
regurgitating numbers from a chip's specifications to questioning the
results of the image, when you really don't know what is going on in
the optics of the device or the human visual system.
The human visual system specifically analyzes and compares the colors
of the electromagnetic spectrum with photo-pigment responses (short,
medium, and long cones, with contributions from the rods) spanning a
range of about 1.3 electron volts of energy, covering from reds to
blues, and through primary and secondary comparisons our brain
perceives what Newton termed extra-spectral hues, completing a
connection of the low-energy reds to the high-energy blues.

The concept of human color perception is not just describing eye
candy, as the red/green and yellow/blue opponent responses in the
human eye are what make up the CIE Lab color space axes where ALL the
data from the CCD image is mapped. CIE Lab space is a 3D Cartesian
space based on the results of experiments that studied how the human
visual system responds to stimuli. Luminosity is represented on the z
axis, going from absolute black to white (0 to 100), with red/green
and yellow/blue opponency represented on the x and y axes, +/-100 in
either direction: the x axis is lower-case a, going from greenish (-a)
toward reddish (+a) (red vs. green); the y axis is lower-case b and
goes from bluish (-b) toward yellowish (+b) (yellow vs. blue).

Now, given that all the data captured by any CCD has to be mapped to
this coordinate system, and given the fact that that same system is
based on the human visual system, you need to learn a lot more,
because CIE Lab space itself does not contain all of the colors the
human eye can detect, i.e. it's missing some greens and extra-spectral
hues (see the Lab gamut display at brucelindbloom.com). The most
expensive professional digital cameras on the market do not allow the
image to be mapped to custom color spaces or color profiles, but
instead utilize sRGB or Adobe RGB, which do not encompass the whole
volume of CIE Lab space, resulting in an even smaller range of colors
than what the human "eye sees". (On a side note, possibly the reason
ColorMatch is the profile for the MESSENGER Mercury probe images is
simply that, even though the ColorMatch color space is small, no data
is lost in image transfers after the image has been mapped to that
particular color space.)

The capacity of a CCD can be reduced to specific wavelength ranges by
means of filter wheels, or template overlays on the CCD, but you are
still counting photons in bins, which, as I previously stated, is
different from how human color perception is achieved: through
relative comparisons of photo-pigment responses (CIE 1931 XYZ color
matching functions; see the NASA Ames color research lab), and
comparisons of those comparisons.
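The mapping into CIE Lab described above follows a fixed formula; a minimal sketch of the standard CIE XYZ-to-Lab conversion (assuming a D50 reference white):

```python
def xyz_to_lab(x, y, z, white=(0.96422, 1.0, 0.82521)):  # D50 reference white
    """Standard CIE XYZ -> L*a*b* conversion."""
    def f(t):
        # cube root above a small threshold, a linear segment below it
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = (f(v / w) for v, w in zip((x, y, z), white))
    L = 116 * fy - 16        # lightness: 0 (black) to 100 (white)
    a = 500 * (fx - fy)      # greenish (-a) to reddish (+a)
    b = 200 * (fy - fz)      # bluish (-b) to yellowish (+b)
    return L, a, b

# The reference white itself maps to L* = 100 with a* = b* = 0:
print(xyz_to_lab(0.96422, 1.0, 0.82521))   # -> (100.0, 0.0, 0.0)
```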
One of the unique aspects of the human visual system is that it
attempts to preserve an object's color even in lighting conditions
that change from approximately 10,000 kelvin (midday, bluish) to about
2700 kelvin (sunset and sunrise, reddish). The illuminant, whether
natural or artificial, will have a spectral power distribution that
can be represented by a tristimulus value, which will be the "white"
in our field of view. The tristimulus value of the illuminant can be
represented in chromaticity diagrams, showing the white point of that
illuminant (5000 kelvin = D50, 6500 kelvin = D65), indicating the
warmness or coolness of the particular white. Now, the white point of
the color space that the CCD image is mapped to is set by the color
profile itself (see brucelindbloom.com for profiles and info), and
therefore, even though digital imaging devices allow the user to set
the color temp and white balance, it is still using discrete settings,
which will then be mapped based on the color space's white point,
which is much different from how the human eye adapts to changing
light conditions.
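Those discrete white-balance settings amount to a simple per-channel scaling; a crude von-Kries-style sketch (the illuminant values are invented, and real cameras do this in a sensor-native space rather than plain RGB):

```python
def white_balance(rgb, illuminant_rgb):
    """Scale each channel so the measured illuminant maps to neutral grey."""
    return tuple(c / w for c, w in zip(rgb, illuminant_rgb))

# Invented reading of a warm (reddish) illuminant, brighter in red than blue:
illum = (1.2, 1.0, 0.7)

print(white_balance(illum, illum))             # -> (1.0, 1.0, 1.0): the "white"
print(white_balance((0.6, 0.5, 0.35), illum))  # a grey card under it: ~0.5 per channel
```

The scaling is fixed once the color-temp setting is chosen, which is the discreteness being contrasted here with the eye's continuous adaptation.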
An image captured on film or on a CCD is metered so the energy
received over a period of time does not overexpose the best detail in
the desired subject, which is accomplished by adjusting the exposure
time or the lens iris diameter (f/stop), based on the film's speed or
equivalent digital settings. No imaging system is perfect, as
optically there are trade-offs: smaller lens iris diameters yield
larger depth of field, but less resolution and longer exposure times,
while larger lens iris diameter settings yield faster exposure times
and higher resolution due to the greater amount of light being
received, but at the expense of the image's depth of field. All of
these components, whether manually set or auto-metered, determine what
objects will be the shadows, highlights, and mid-tone ranges of the
image (meaning those variables set the "blackest black" minimum
luminosity and the "whitest white" maximum luminosity in the image).
Therefore an image's range from maximum to minimum luminosity is not a
function of the CCD's or film's range alone, but of the amount of
light received from the objects being viewed, based on a number of
variables (lens, f/stop, film speed/CCD specs, exposure time, desired
zones, developing times for film, and print/image manipulation).
Please see the "zone system", and you will find that any image is a
balance of capturing the subject's details in zone V while at the same
time not blowing out the details of the shadows (zones III, II) and
highlights (zones VII, VIII), which is much different than the specs
of the film or CCD (see luminous-landscape.com below for the
simplified zone system description).

OK now, the whitest or most luminous object will be mapped as the
highest point on the CIE Lab z axis for that image (with the slight
biases introduced from the tristimulus value), and the black or least
luminous object will be the lowest point on the z axis, with some
biases, where the dynamic range is the difference of luminosities,
minimum to maximum, and the logarithmic relationship from one to the
other is the gamma (the connecting grey values in between).

Printers'
dynamic ranges are determined by the ink/media relationship, or how
much ink can be placed on the media (usually making a CMY black)
without running, bleeding, or buckling. A specific profile for
printing is not just unique to that device, but is unique to that
paper and ink set as well, and requires setting the ink limits
(described above), followed by careful balancing of the colors and
greys that make up the full color range or palette the printer can
produce, meaning the images produced by a printer have a dynamic range
that is a function of the inks and paper, and not of the CCD.

A monitor's dynamic range is determined by the quality of the
blackness of the screen, as compared to the best-balanced and whitest
white that can be achieved from the phosphor emissions, but the
problem is that a monitor's phosphors change over time, meaning that
the dynamic range is, well, dynamic, no pun intended; that's the
problem with monitors and keeping them calibrated.

An image's dynamic range on film can be determined on a densitometer
by measuring the difference in film density between the most dense
region (least luminous), setting the minimum, and the least dense
region (most luminous), setting the luminosity maximum, which will
show that the final dynamic range is much less than the film is
capable of producing, and was determined not just by the film's range,
but by the specifics that adjusted the exposure settings to the
lighting conditions when the image was taken. Which once again shows
that human perception and color constancy are pretty unique attributes
of human adaptation when we are compared to a device like a CCD or a
material like film.
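The densitometer reading described above converts to stops with simple log arithmetic (the density readings here are invented illustrations):

```python
import math

# Film density is the log10 of opacity, so one photographic stop is
# log10(2) ~ 0.30 density units.  Invented densitometer readings:
d_min, d_max = 0.25, 2.05      # least and most dense regions of the image

density_range = d_max - d_min
stops = density_range / math.log10(2)
print(f"{density_range:.2f}D = {stops:.1f} stops")   # -> "1.80D = 6.0 stops"
```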
Now, the greater the number of bits, the more information, but that
information does not increase the dynamic range; it only parses the
grey scale between the minimum and maximum into finer sections,
resulting in slight differences, in the shadows and mid-tones
especially, but the same CIE Lab space is utilized with the same
limitations. The difference between 8- and 16-bit images is that the
16-bit data is just parsed a little finer, resulting in smoother
transitions, meaning 16-bit images ain't all that you make them out to
be. So therefore analyzing colors (or what you want to imply is
missing) from an image strictly based on the CCD's or film's
specifications alone is not logical, and will yield incorrect results,
because that analysis does not completely take into account the system
involved with producing the image.
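That point, that bit depth parses the same range more finely rather than extending it, can be shown directly:

```python
# Quantize the same 0-to-1 luminosity range at several bit depths: the
# endpoints (the dynamic range) are identical, only the step between
# adjacent grey levels shrinks.
def step_size(bits):
    return 1.0 / (2 ** bits - 1)

for bits in (8, 12, 16):
    print(f"{bits}-bit: {2 ** bits} levels, step {step_size(bits):.2e}")

# 8-bit steps are 257x coarser than 16-bit steps over the very same range:
print(round(step_size(8) / step_size(16)))   # -> 257
```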
So Brad, yes, CCDs have great capabilities, but humans capturing the
image can describe the object with words in such a way that
complements what the CCD produces. As humans are part of the viewing
end of the equation, they should also be on the image-capturing end,
to better qualify the observed phenomena. Therefore, once again, it is
human nature to creatively/subjectively describe events and sights (an
observed event's colors) with words that present a feeling to the
reader that is far beyond the characters composing the text, and that
is why humans must be part of space travel...

Color Research Lab NASA Ames Research Center
http://colorusage.arc.nasa.gov/lum_and_chrom.php


Rochester Institute of Technology
Munsell Color Science Laboratory
http://www.cis.rit.edu/mcsl/

Information on color spaces, color conversions, etc.
Bruce Lindbloom's website
http://www.brucelindbloom.com/

Simplified Zone System
http://www.luminous-landscape.com/tu...e_system.shtml

Exposure value calculations
The Science of Photography
http://johnlind.tripod.com/science/scienceexposure.html