A Space & astronomy forum. SpaceBanter.com


Great missions STS-122 & Expedition 16



 
 
  #51  
Old February 24th 08, 01:41 AM posted to sci.space.shuttle,sci.space.history,sci.space.policy,sci.space.station
columbiaaccidentinvestigation

On Feb 22, 8:52 pm, BradGuth wrote:" Another
interesting second or third hand rant, even though you are intent upon
playing word games, rather than offering us science that can be peer
replicated as to what I had previously specified."

Laughing: you quote numbers like "spec candy", as if you really
understood what the numbers mean, much less that the CCD is but one
component of the digital imaging system. OK, let's try it this way:
humans are going to view the image either in print or on a monitor,
both of which have their own dynamic ranges from black (min) to white
(max) that are independent of the CCD's dynamic range. Venturing into
the area of human color perception is necessary if someone wants to
reproduce what the human "sees", as the sequence goes from capturing
the image, through image processing, to finally viewing the image on
paper or on a monitor. Even if you take humans out of the image-capture
part of the equation, you still have to have the image processing for
viewing, meaning the CCD's capabilities are just part of the system,
just as the silver halide crystals on film are but one part of the
sequence that reproduces what the "eye sees".
So I think it's funny that you seem to be stuck on comparing a device's
capabilities against the human visual system without understanding the
latter. So actually I did answer your question; you just did not
understand the answer, nor how it directly relates to your constant
ranting about dynamic range. That is why you go from regurgitating
numbers off a chip's specification sheet to questioning the results of
the image, when you really don't know what is going on in the optics of
the device or in the human visual system.
The human visual system analyzes and compares the colors of the
electromagnetic spectrum through photopigment responses (short-,
medium- and long-wavelength cones, with contributions from the rods)
spanning a range of roughly 1.3 electron volts of photon energy from
the reds to the blues, and through primary and secondary comparisons
our brain perceives what Newton termed extra-spectral hues, closing the
loop from the low-energy reds to the high-energy blues. Human color
perception is not just eye candy: the red/green and yellow/blue
opponent responses in the human eye are what define the axes of the
CIE Lab color space, where ALL the data from a CCD image is ultimately
mapped. CIELAB is a 3D Cartesian space based on the results of
experiments that studied how the human visual system responds to
stimuli. Lightness is represented on the vertical axis, going from
absolute black to white (0 to 100), with red/green and yellow/blue
opponency represented on the other two axes, running roughly +/-100 in
either direction: the a axis goes from greenish (-a) toward reddish
(+a), and the b axis goes from bluish (-b) toward yellowish (+b).
Given that the data captured by any CCD has to be mapped into this
coordinate system, and given that the system itself is based on the
human visual system, you have a lot more to learn, because CIELAB space
itself does not even contain all of the colors the human eye can
detect; it is missing some greens and extra-spectral hues (see the Lab
gamut display at brucelindbloom.com). Even the most expensive
professional digital cameras on the market do not allow the image to be
mapped to custom color spaces or profiles; they instead use sRGB or
Adobe RGB, neither of which encompasses the full volume of CIELAB
space, resulting in an even smaller range of colors than what the human
"eye sees". (On a side note, possibly the reason ColorMatch is the
profile used for the MESSENGER Mercury probe images is simply that,
even though the ColorMatch color space is small, no data is lost in
image transfers once the image has been mapped to that particular
space.) The response of a CCD can be restricted to specific wavelength
ranges by means of filter wheels or templates overlaid on the chip, but
you are still counting photons in bins, which, as I previously stated,
is different from how human color perception works: through relative
comparisons of photopigment responses (see the CIE 1931 XYZ
color-matching functions at the NASA Ames color research lab), and
comparisons of those comparisons.
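To make that mapping concrete, here is a minimal Python sketch of the standard CIE XYZ-to-Lab conversion (assuming a D65 reference white; which white point actually applies depends on the profile the image is mapped to):

```python
import numpy as np

# Reference white (XYZ, with Y normalized to 1.0) for illuminant D65.
D65_WHITE = np.array([0.95047, 1.00000, 1.08883])

def xyz_to_lab(xyz, white=D65_WHITE):
    """Map a CIE XYZ tristimulus value onto the CIELAB axes described above."""
    eps = (6.0 / 29.0) ** 3              # crossover between the cube-root and linear segments
    slope = (29.0 / 6.0) ** 2 / 3.0      # slope of the linear segment near black

    def f(t):
        return np.cbrt(t) if t > eps else slope * t + 4.0 / 29.0

    fx, fy, fz = (f(c / w) for c, w in zip(xyz, white))
    L = 116.0 * fy - 16.0    # lightness: 0 (black) to 100 (reference white)
    a = 500.0 * (fx - fy)    # -a greenish, +a reddish
    b = 200.0 * (fy - fz)    # -b bluish, +b yellowish
    return L, a, b

# The reference white itself lands at L = 100, a = b = 0, as expected.
print(xyz_to_lab(D65_WHITE))
```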
One of the unique aspects of the human visual system is that it
attempts to preserve an object's color even as lighting conditions
change from roughly 10,000 kelvin (bluish midday light) to about 2,700
kelvin (reddish light at sunrise and sunset). The illuminant, whether
natural or artificial, has a spectral power distribution that can be
represented by a tristimulus value, which becomes the "white" in our
field of view. The tristimulus value of the illuminant can be plotted
on a chromaticity diagram as that illuminant's white point (roughly
5,000 kelvin for D50 and 6,500 kelvin for D65), indicating the warmness
or coolness of that particular white. Now, the white point of the color
space the CCD image is mapped into is set by the color profile itself
(see brucelindbloom.com for profiles and info), so even though digital
imaging devices let the user set the color temperature and white
balance, they still use discrete settings, which are then mapped
relative to the color space's white point. That is much different from
how the human eye continuously adapts to changing light conditions.
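For illustration, a minimal sketch (using the Bradford matrix and the D50/D65 white-point values as published on brucelindbloom.com) of the discrete white-point re-mapping a color-managed pipeline performs, as opposed to the eye's continuous adaptation:

```python
import numpy as np

# Standard illuminant white points (XYZ, Y = 1): D65 is the cooler ~6500 K white,
# D50 the warmer ~5000 K white.
D65 = np.array([0.95047, 1.00000, 1.08883])
D50 = np.array([0.96422, 1.00000, 0.82521])

# Bradford cone-response matrix commonly used for chromatic adaptation.
BRADFORD = np.array([
    [ 0.8951,  0.2664, -0.1614],
    [-0.7502,  1.7135,  0.0367],
    [ 0.0389, -0.0685,  1.0296],
])

def adapt(xyz, src_white=D65, dst_white=D50):
    """Re-map an XYZ color measured under one illuminant to another illuminant's white point."""
    scale = np.diag(BRADFORD @ dst_white / (BRADFORD @ src_white))  # per-channel von Kries scaling
    m = np.linalg.inv(BRADFORD) @ scale @ BRADFORD
    return m @ xyz

# A sample color re-balanced from D65 viewing conditions to D50.
print(adapt(np.array([0.40, 0.35, 0.30])))
```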
An image captured on film or on a CCD is metered so that the energy
received over the exposure does not overexpose the important detail in
the desired subject, which is accomplished by adjusting the exposure
time or the lens iris diameter (f/stop) based on the film's speed or
the equivalent digital settings. No imaging system is perfect; optically
there are trade-offs. Smaller iris diameters yield larger depth of
field, but at the cost of longer exposure times and, eventually,
diffraction-limited resolution, while larger iris diameters yield
shorter exposure times because more light is received, but at the
expense of the image's depth of field. All of these settings, whether
set manually or auto-metered, determine which objects fall in the
shadows, highlights, and mid-tone ranges of the image (meaning those
variables set the "blackest black" minimum luminosity and the "whitest
white" maximum luminosity in the image). Therefore an image's range
from minimum to maximum luminosity is not a function of the CCD's or
film's range alone, but of the amount of light received from the
objects being viewed, based on a number of variables (lens, f/stop,
film speed or CCD specs, exposure time, desired zones, developing times
for film, and print/image manipulation). Look up the "zone system" and
you will find that any image is a balance of capturing the subject's
details in Zone V while not blowing out the details of the shadow zones
(III, II) and the highlight zones (VII, VIII), which is something quite
different from the specs of the film or CCD (see luminous-landscape.com
below for the simplified zone system description).
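As a toy illustration of that trade-off, the usual exposure-value relationship, where different f/stop and shutter combinations admit the same total light but differ in depth of field and motion blur:

```python
from math import log2

def exposure_value(f_number, shutter_s):
    """EV = log2(N^2 / t); one EV step doubles or halves the light reaching the film or CCD."""
    return log2(f_number ** 2 / shutter_s)

# Nominally equivalent exposures (about EV 13), traded four stops in each direction:
print(exposure_value(2.8, 1 / 1000))   # wide aperture, short exposure: shallow depth of field
print(exposure_value(11.0, 1 / 60))    # small aperture, long exposure: deep depth of field
```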
Now, the whitest (most luminous) object in the scene will be mapped to
the highest point on the CIELAB lightness axis for that image (with
slight biases introduced by the illuminant's tristimulus value), and
the blackest (least luminous) object will be mapped to the lowest
point, again with some biases. The dynamic range is the difference
between those minimum and maximum luminosities, and the logarithmic
relationship between one and the other is the gamma (the connecting
grey values in between). A printer's dynamic range is determined by the
ink/media relationship: how much ink can be laid down on the media,
usually to make a CMY black, without the ink running, bleeding, or
buckling the paper. A printing profile is not just unique to the
device; it is unique to that paper and ink set as well, and requires
setting the ink limits (described above), followed by careful balancing
of the colors and greys that make up the full range or palette the
printer can produce. So the images produced by a printer have a dynamic
range that is a function of the inks and paper, not of the CCD. A
monitor's dynamic range is determined by how black the screen can be
compared with the best-balanced, whitest white that can be achieved
from the phosphor emissions; the problem is that a monitor's phosphors
change over time, meaning the dynamic range is itself dynamic, no pun
intended, and that is the problem with keeping monitors calibrated. An
image's dynamic range on film can be determined with a densitometer, by
measuring the difference between the most dense and least dense regions
of the developed frame; that measurement will show the final dynamic
range is much less than what the film is capable of recording, because
it was set not just by the film's range but by whatever adjusted the
exposure settings to the lighting conditions when the image was taken.
Which once again shows that perception and color constancy are pretty
unique attributes of human adaptation when compared to a device like a
CCD or a material like film.
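To put rough numbers on the densitometer point, a minimal sketch (the density readings below are hypothetical, not measured values):

```python
from math import log10

def density_range_in_stops(d_min, d_max):
    """Film density is log10 of opacity, so about 0.30 of density equals one stop (a factor of two)."""
    return (d_max - d_min) / log10(2)

# Hypothetical readings: the range actually recorded in one frame versus a range
# closer to what the emulsion itself could hold.
print(density_range_in_stops(0.25, 2.05))   # roughly 6 stops captured in this exposure
print(density_range_in_stops(0.10, 3.10))   # roughly 10 stops, nearer the film's own limit
```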
Now, more bits mean more information, but that information does not
increase the dynamic range; it only divides the grey scale between the
minimum and maximum into finer steps, giving slight differences in the
shadows and especially the mid-tones. The same CIELAB space is still
used, with the same limitations. The difference between 8- and 16-bit
images is that the 16-bit data is simply parsed more finely, resulting
in smoother transitions, so 16-bit images aren't all that you make them
out to be. Therefore, analyzing the colors in an image (or the ones you
want to imply are missing) strictly from the CCD's or film's
specifications alone is not logical and will yield incorrect results,
because that analysis does not take into account the complete system
involved in producing the image.
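A quick way to see the bit-depth point: the black point and white point (and therefore the dynamic range) stay fixed, and only the size of the steps between them changes:

```python
def grey_steps(bits, l_min=0.0, l_max=100.0):
    """Number of code values and the step size between them over a fixed 0-100 lightness range."""
    levels = 2 ** bits
    return levels, (l_max - l_min) / (levels - 1)

for bits in (8, 12, 16):
    levels, step = grey_steps(bits)
    print(f"{bits}-bit: {levels} levels, {step:.4f} L* units per step, same 0-100 range")
```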
So brad, yes, CCDs have great capabilities, but the humans capturing
the image can describe the object in words in a way that complements
what the CCD produces. Humans are part of the viewing end of the
equation, so they should be on the image-capturing end as well, to
better qualify the observed phenomena. Once again, it is human nature
to creatively and subjectively describe events and sights (an observed
event's colors) with words that convey a feeling to the reader far
beyond the characters composing the text, and that is why humans must
be part of space travel...

Color Research Lab NASA Ames Research Center
http://colorusage.arc.nasa.gov/lum_and_chrom.php


Rochester Institute of Technology
Munsell Color Science Laboratory
http://www.cis.rit.edu/mcsl/

Information on color spaces, color conversions, etc.
Bruce Lindbloom's website
http://www.brucelindbloom.com/

Simplified Zone System
http://www.luminous-landscape.com/tu...e_system.shtml

Exposure value calculations
The Science of Photography
http://johnlind.tripod.com/science/scienceexposure.html

  #52  
Old February 24th 08, 01:59 AM posted to sci.space.shuttle,sci.space.history,sci.space.policy,sci.space.station
BradGuth

My goodness, aren't we chock-full of damage-control infomercial
crapola, of such fancy mainstream words upon words and eye-candy hype,
yet oddly still managing not to address what I'm after. Once again,
you've intentionally excluded the focus or intent of what truth there
is to behold about CCD camera DR and the FWC saturation that isn't
getting utilized, because it would show off too much of the cold hard
truth about items other than Earth that'll unavoidably show up in those
images (unless artificially removed by those in charge of snookering
and dumbfounding humanity for all it's worth, and then some).

Pictures from space via the ISS: somewhat old images of Earth taken
with their Kodak DCS760 camera, with its 12-bit limited DR (dynamic
range), 3032 x 2008 pixels and a sensor format area of 27.65 mm x
18.43 mm, i.e. 9+ micron pixels.
http://www.nasa.gov/vision/universe/...es/aurora.html
Auroras Dancing in the Night 02.12.04
Aboard the International Space Station, Expedition 6 Science Officer
Don Pettit offers a unique perspective on auroras.
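As a rough cross-check of that 9+ micron figure from the numbers quoted above (dividing the stated sensor dimensions by the pixel counts, and ignoring any inactive border):

```python
width_mm, height_mm = 27.65, 18.43   # stated sensor format area
cols, rows = 3032, 2008              # stated pixel counts

print(f"{width_mm / cols * 1000:.2f} um horizontal pitch")   # about 9.12 micron
print(f"{height_mm / rows * 1000:.2f} um vertical pitch")    # about 9.18 micron
```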


Seems like perfectly good enough eye-candy, whereas truly scientific
CCD cameras of the same era, with nearly that pixel size and 16-bit DR,
should have been the norm for anything on the ISS associated with their
EVC, or used instead of the DCS760. Even having to go monochrome and
use 3 or 4 specific color-spectrum filters to create the composite
colour renditions would be a whole lot better science, although full
colour CCD renditions from IR to UV at 16-bit DR can't be all that
insurmountable, especially with larger-format CCDs having starlight
sensitivity and frame scans fast enough for low-noise video capture.
Otherwise, with commercial video equipment, you can always incorporate
three or even four individual CCDs per color video camera if need be.

Obviously our MESSENGER mission, using CCDs and mirror optics, is yet
another prime and spendy example of what not to do, because its
scientific composite color images were absolutely pathetic, and it
could have used another 10X in its telephoto capability.

Also remember: it's similar to over-saturating film, except that
saturated CCD pixels offer vastly superior spectrum bandwidth and can
have their FWC (full well capacity) exceeded without harm, leaving the
other, less saturated pixels available to record whatever's dim or far
off in the +/- spectrum with much greater ease than film, because those
CCDs excel in DR as well as in their reach of IR/UV spectrum detection.
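For a sense of scale, a minimal sketch of how a sensor's usable dynamic range relates to full well capacity and read noise, and of how the ADC bit depth only encodes (rather than creates) that range; the figures below are hypothetical, not the DCS760's actual spec sheet:

```python
from math import log2

def sensor_dr_stops(full_well_e, read_noise_e):
    """Usable CCD dynamic range in stops: full well capacity over the read-noise floor."""
    return log2(full_well_e / read_noise_e)

# Hypothetical large-pixel scientific CCD: ~100,000 e- full wells, ~15 e- read noise.
print(sensor_dr_stops(100_000, 15))      # roughly 12.7 stops from the sensor itself

# A 12-bit ADC gives 4096 code values and a 16-bit ADC 65536; more codes quantize
# the same sensor range more finely, they do not add range the pixels never captured.
print(2 ** 12, 2 ** 16)
```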

Obviously, such impressive eye-candy is why most folks are so easily
misled into thinking we're it, the one and only intelligent species
within this universe, and as such we're so often fooled into detecting
only whatever's within the visual spectrum.

BTW, notice the 12-bit limited hue/color saturation, and how it still
easily included them pesky stars above Earth, and notice that Earth
isn't even the least bit over-saturated, is it.
http://www.nasa.gov/vision/universe/...es/aurora.html
Too bad the original 18 MB image files aren't there to look at, as
those images would be absolutely terrific.

I bet you and others of your silly infowar, eye-candy-spewing kind
don't even get the drift of what this sort of Kodak DCS760 digital
camera dynamic range represents. Now try to imagine what a 16-bit CCD
camera w/o optical spectrum limitations would accomplish, or even their
existing 12-bit one if it simply allowed for greater FWC saturation
(meaning a longer exposure and/or a lower optical f-stop).
. - Brad Guth



  #53  
Old February 24th 08, 02:47 AM posted to sci.space.shuttle,sci.space.history,sci.space.policy,sci.space.station
columbiaaccidentinvestigation

On Feb 23, 4:59 pm, BradGuth wrote: "My goodness, aren't we chock-full
of damage-control infomercial"

But in a previous thread brad stated the following:
"Thanks to our "no kid left behind" policy, as of prior to CCD camera
imaging perhaps all of 0.1% of Americans even understood what
photographic spectrum sensitivity and the associated DR(dynamic range)
of B&W or color film even meant. Since the advent of commercial/
consumer CCD cameras and the continued dumbing down of America, I'd
say that fewer than 0.0001% (that's one out of a million)"
http://groups.google.com/group/sci.s...4b4bd03e8c55aa

And he also stated the following
"If any of this PhotoShop or whatever digital photographic software
usage is simply too much for your eye-candy speed or naysay mindset,
then perhaps you should not even be posting anywhere within Usenet
science, or contributing into most any other public space/astronomy or
astrophysics related forums, especially since so many of you folks
seem to lack the most basic of digital image observationology skills."
http://groups.google.com/group/sci.s...9f67ca6b5be403


Well, I just walked you through dynamic range, human color perception,
and CIE color space, with citations showing the science backing my
words, and you cannot handle that fact! So I'm laughing at you, brad,
because that means you did not take the time to understand or
comprehend the information I presented, which at this point means you
don't even have any intention of expanding beyond your "spec candy"
descriptions. Therefore you have no place to criticize anybody's
observational skills or their understanding of CCD imaging, as I have
shown you have a lot to learn, and maybe you should take your own
advice and "not even be posting anywhere within Usenet science"...

  #54  
Old February 24th 08, 07:03 AM posted to sci.space.shuttle,sci.space.history,sci.space.policy,sci.space.station
BradGuth

On Feb 23, 5:47 pm, columbiaaccidentinvestigation
wrote:

read more »


Don't need to "read more", because if that's what makes you a happy
camper, then so be it, and remember that it took the sorts of brown-
nosed minions exactly like yourself to put a smile on Hitler's face,
as well as a smirk on our resident LLPOF warlord's face. Secondly,
without your kind, there simply couldn't be the dark side, whereas by
your mindset we'd only have those gray pastel colors of our moon and
Mercury, as forever limited to your skewed mindset that's all
mainstream status quo hype and of no real science other than Old
Testament approved.

Your pretending to be smart while playing along as otherwise dumb and
dumber simply doesn't cut it, but then your being a pretend atheist is
not exactly much better off.

Do you teach physics and science, perhaps 5th grade?
. - Brad Guth
  #55  
Old February 24th 08, 09:04 AM posted to sci.space.shuttle,sci.space.history,sci.space.policy,sci.space.station
columbiaaccidentinvestigation

On Feb 23, 10:03 pm, BradGuth wrote: "Don't need to "read more",
because if that's what makes you a happy camper... Do you teach physics
and science, perhaps 5th grade? - Brad Guth"

Laughing. That means I just schooled you in the concepts of dynamic
range, human color perception, etc., which exceeded your comprehension
abilities in one sitting and you had to stop reading. Even though you
have not fully understood how I answered your questions, you still felt
a need to make a stupid comeback post, just because you are in love
with your own words and needed to make yourself feel better with some
ego-boosting trash (which means you shouldn't have bothered to post at
all).


http://www.nasa.gov/mission_pages/st...4_feature.html
"Mt. Etna Eruption
ISS005-E-19024 --- "Photography in space helped bring out the artistic
side in me," said Commander Leroy Chiao of Expedition 10, who snapped
more than 24,000 photos from space. "The beauty of the Earth was very
inspiring, and I tried to find new ways to capture and express that
beauty." The three-member crew of the Expedition Five mission onboard
the International Space Station was able to observe Mt. Etna's
spectacular eruption, and photograph the details of the eruption plume
and smoke from fires triggered by the lava as it flowed down the
11,000 ft mountain. Image credit: NASA"


  #56  
Old February 24th 08, 04:27 PM posted to sci.space.shuttle,sci.space.history,sci.space.policy,sci.space.station
BradGuth

On Feb 24, 12:04 am, columbiaaccidentinvestigation
wrote:
On Feb 23, 10:03 pm, BradGuth wrote: "Don't need to "read more",
because if that's what makes you a happy camper... Do you teach physics
and science, perhaps 5th grade? - Brad Guth"



Oddly, the NASA/Apollo moon was extensively 0.65~0.75 albedo
reflective, because those moon suits were worth an albedo of 0.85, and
everything was getting xenon-lamp-spectrum illuminated to boot, because
there's nothing bluish about our NASA/Apollo unfiltered Kodak moments,
and strangely Venus is never anywhere in sight.

Why are you so unable or unwilling to deal with the truth of
whatever's off-world?

There's so much more to space than mere eye candy. There's actual
science that's easily peer replicated: photographic science telling us
about the geology and mineralogy of places and of interesting things
other than Earth.

Obviously, such impressive eye-candy is why most folks are so easily
misled into thinking we're it, the one and only intelligent species
within this universe, and as such we're so often fooled into detecting
only whatever's within the visual spectrum, when our eyes can't even
detect 1% of what a good CCD camera has to offer, especially if
outfitted with mirror optics.

BTW, notice the 12-bit limited hue/color saturation, and how it still
easily included them pesky stars above Earth, and notice that Earth
(threefold more reflective than Mercury, and otherwise half as
reflective as Venus) isn't even the least bit over-saturated, is it.
http://www.nasa.gov/vision/universe/...es/aurora.html
Too bad the original 18 MB image files (publicly owned) aren't there to
look at, as those images would be absolutely terrific examples of what
CCD dynamic range has to offer.

I bet you and others of your silly infowar, eye-candy-spewing kind
don't even get the basic drift of what this sort of Kodak DCS760
digital camera dynamic range represents. Now try to imagine what a
16-bit CCD camera w/o optical spectrum limitations would have
accomplished, or even their existing 12-bit one if it had simply
allowed for greater FWC saturation (meaning a longer exposure and/or a
lower optical f-stop). In fact, even unfiltered Kodak film, with its
optical DR of merely 9 bits or so, is more than good enough for
recording the vibrant likes of Venus along with our moon or Earth
within the exact same FOV (field of view).
. - Brad Guth
  #57  
Old February 24th 08, 05:05 PM posted to sci.space.shuttle,sci.space.history,sci.space.policy,sci.space.station
columbiaaccidentinvestigation

On Feb 24, 7:27 am, BradGuth wrote:" Oddly, NASA/
Apollo moon was extensively 0.65~0.75 albedo reflective, because
those moon suits were worth an albedo of 0.85, and everything getting
xenon lamp spectrum illuminated to boot, because there's nothing
bluish about our NASA/Apollo unfiltered Kodak moments, and strangely
Venus is never anywhere in sight. Why are you so unable or unwilling
to deal with the truth of whatever's off-world? There's so much more
to space than mere eye candy. There's actual
science that's easily peer replicated, of photographic science telling
us about the given geology and mineralogy of places and of interesting
things other than Earth."

Laughing, you just posted the same old trash in a slightly repackaged
form, but the problem is you have not shown any reduction of your
idiocy or ignorance. In order to analyze the images the way you have,
you need to know not just the film type and speed, but the lens used,
the f/stop, and the resulting exposure time, all of which determine
how white an astronaut's space suit appears compared to the
background, midtones, etc. Once again, regurgitating numbers for what
"should have been" seen does not address the image itself, so you keep
looking for things, but you clearly do not understand what you are
looking at or how the image was produced. Now, the same principles I
have stated apply to the analysis of all images, so your diatribes
about what I'm not addressing are a joke, and your demands for me to
answer your loaded questions are an even bigger joke; thanks for the
Sunday morning laugh, Brad.... And no, I'm not "unwilling to deal with
the truth of whatever's off-world" as you just stated, but I do love
to learn about Earth, including from the unique view the astronauts
have aboard the ISS.
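
That settings-versus-subject point can be made concrete with the
standard exposure-value relation, EV = log2(N^2/t). The f-numbers,
shutter speed and reflectances below are hypothetical, chosen only to
illustrate the arithmetic, and are not taken from any mission record:

import math

def exposure_value(f_number, shutter_seconds, iso=100):
    # Standard photographic exposure value, normalized to ISO 100:
    # EV = log2(N^2 / t) - log2(ISO / 100)
    return math.log2(f_number**2 / shutter_seconds) - math.log2(iso / 100)

# The same hypothetical scene metered two different ways.
print("f/5.6 @ 1/250 s: EV %.1f" % exposure_value(5.6, 1 / 250))
print("f/11  @ 1/250 s: EV %.1f  (about 2 stops less light)"
      % exposure_value(11, 1 / 250))

# A roughly 85%-reflectance white suit sits about this far above an
# 18% gray card under the same light; whether it renders as detailed
# white or clips depends on where the chosen exposure places it.
print("White suit vs 18%% gray: %+.1f stops" % math.log2(0.85 / 0.18))

Two stops of difference in the chosen aperture shifts every tone in
the frame, suit and background alike, which is exactly why quoting
what "should have been" visible means little without the exposure
data.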

http://spaceflight.nasa.gov/gallery/...16e008436.html
International Space Station Imagery
"ISS016-E-008436 (26 Oct. 2007) --- Beirut Metropolitan Area, Lebanon
is featured in this image photographed by an Expedition 16 crewmember
on the International Space Station. The capital of Lebanon, Beirut is
located along the southeastern shoreline of the Mediterranean Sea.
According to geologists, the metropolitan area is built on a small
peninsula composed mainly of sedimentary rock deposited over the past
100 million years or so. The growth of the city eastwards is bounded
by foothills of the more mountainous interior of Lebanon (sparsely
settled greenish brown region visible at upper right). While this
sedimentary platform is stable, the country of Lebanon is located
along a major transform fault zone, or region where the African and
Arabian tectonic plates are moving laterally in relation to (and
against) each other. This active tectonism creates an earthquake
hazard for the country. The Roum Fault, one of the fault strands that
is part of the transform boundary, is located directly to the south of
the Beirut metropolitan area. Other distinctive features visible in
this image include the Rafic Hariri Airport at lower right, the city
sports arena at center, and several areas of green and open space
(such as a large golf course at center). Also visible in the image are
several plumes of sediment along the coastline -- the most striking of
which are located near the airport. The general lack of vegetation in
the airport may promote higher degrees of soil transport by surface
water runoff or wind."

  #58  
Old February 24th 08, 11:08 PM posted to sci.space.shuttle,sci.space.history,sci.space.policy,sci.space.station
BradGuth
external usenet poster
 
Posts: 21,544
Default Great missions STS-122 & Expedition 16

On Feb 24, 8:05 am, columbiaaccidentinvestigation
wrote:
On Feb 24, 7:27 am, BradGuth wrote:" Oddly, NASA/
Apollo moon was extensively 0.65~0.75 albedo reflective, because
those moon suits were worth an albedo of 0.85, and everything getting
xenon lamp spectrum illuminated to boot, because there's nothing
bluish about our NASA/Apollo unfiltered Kodak moments, and strangely
Venus is never anywhere in sight. Why are you so unable or unwilling
to deal with the truth of whatever's off-world? There's so much more
to space than mere eye candy. There's actual
science that's easily peer replicated, of photographic science telling
us about the given geology and mineralogy of places and of interesting
things other than Earth."

Laughing, you just posted the same old trash in a slightly repackaged
form, but the problem is you have not shown any reduction of your
idiocy or ignorance. In order to analyze the images the way you have,
you need to know not just the film type and speed, but the lens used,
the f/stop, and the resulting exposure time, all of which determine
how white an astronaut's space suit appears compared to the
background, midtones, etc. Once again, regurgitating numbers for what
"should have been" seen does not address the image itself, so you keep
looking for things, but you clearly do not understand what you are
looking at or how the image was produced. Now, the same principles I
have stated apply to the analysis of all images, so your diatribes
about what I'm not addressing are a joke, and your demands for me to
answer your loaded questions are an even bigger joke; thanks for the
Sunday morning laugh, Brad.... And no, I'm not "unwilling to deal with
the truth of whatever's off-world" as you just stated, but I do love
to learn about Earth, including from the unique view the astronauts
have aboard the ISS.

http://spaceflight.nasa.gov/gallery/...-16/html/iss01...
International Space Station Imagery
"ISS016-E-008436 (26 Oct. 2007) --- Beirut Metropolitan Area, Lebanon
is featured in this image photographed by an Expedition 16 crewmember
on the International Space Station. The capital of Lebanon, Beirut is
located along the southeastern shoreline of the Mediterranean Sea.
According to geologists, the metropolitan area is built on a small
peninsula composed mainly of sedimentary rock deposited over the past
100 million years or so. The growth of the city eastwards is bounded
by foothills of the more mountainous interior of Lebanon (sparsely
settled greenish brown region visible at upper right). While this
sedimentary platform is stable, the country of Lebanon is located
along a major transform fault zone, or region where the African and
Arabian tectonic plates are moving laterally in relation to (and
against) each other. This active tectonism creates an earthquake
hazard for the country. The Roum Fault, one of the fault strands that
is part of the transform boundary, is located directly to the south of
the Beirut metropolitan area. Other distinctive features visible in
this image include the Rafic Hariri Airport at lower right, the city
sports arena at center, and several areas of green and open space
(such as a large golf course at center). Also visible in the image are
several plumes of sediment along the coastline -- the most striking of
which are located near the airport. The general lack of vegetation in
the airport may promote higher degrees of soil transport by surface
water runoff or wind."


If that's what makes our Earth-only mindset puppet-masters like
yourself happy campers, then so be it. No wonder we're headed for
WWIII, $10/gallon and $1/kWh just as fast as you folks and fellow
rusemasters of the Old Testament-thumping kind can manage.

Keep pretending that all off-world matters simply do not matter, while
naysaying the ongoing demise of our frail environment at the same
time. After all, it's what your God(s) would appreciate more than
anything else.

BTW, did you go to your pretend atheist Sunday school and teach those
unfortunate kids how to lie their infomercial-spewing little butts
off, by way of avoiding the truth via excluding science or banishing
related evidence that could otherwise rock your mainstream status quo
boat?
.. - Brad Guth
  #59  
Old February 24th 08, 11:30 PM posted to sci.space.shuttle,sci.space.history,sci.space.policy,sci.space.station
columbiaaccidentinvestigation
external usenet poster
 
Posts: 1,344
Default Great missions STS-122 & Expedition 16

On Feb 24, 2:08 pm, BradGuth wrote:
On Feb 24, 8:05 am, columbiaaccidentinvestigation





wrote:
On Feb 24, 7:27 am, BradGuth wrote:" Oddly, NASA/
Apollo moon was extensively 0.65~0.75 albedo reflective, because
those moon suits were worth an albedo of 0.85, and everything getting
xenon lamp spectrum illuminated to boot, because there's nothing
bluish about our NASA/Apollo unfiltered Kodak moments, and strangely
Venus is never anywhere in sight. Why are you so unable or unwilling
to deal with the truth of whatever's off-world? There's so much more
to space than mere eye candy. There's actual
science that's easily peer replicated, of photographic science telling
us about the given geology and mineralogy of places and of interesting
things other than Earth."


Laughing, you just posted the same old trash in a slightly repackaged
form, but the problem is you have not shown any reduction of your
idiocy or ignorance. In order to analyze the images the way you have,
you need to know not just the film type and speed, but the lens used,
the f/stop, and the resulting exposure time, all of which determine
how white an astronaut's space suit appears compared to the
background, midtones, etc. Once again, regurgitating numbers for what
"should have been" seen does not address the image itself, so you keep
looking for things, but you clearly do not understand what you are
looking at or how the image was produced. Now, the same principles I
have stated apply to the analysis of all images, so your diatribes
about what I'm not addressing are a joke, and your demands for me to
answer your loaded questions are an even bigger joke; thanks for the
Sunday morning laugh, Brad.... And no, I'm not "unwilling to deal with
the truth of whatever's off-world" as you just stated, but I do love
to learn about Earth, including from the unique view the astronauts
have aboard the ISS.


http://spaceflight.nasa.gov/gallery/...-16/html/iss01...
International Space Station Imagery
"ISS016-E-008436 (26 Oct. 2007) --- Beirut Metropolitan Area, Lebanon
is featured in this image photographed by an Expedition 16 crewmember
on the International Space Station. The capital of Lebanon, Beirut is
located along the southeastern shoreline of the Mediterranean Sea.
According to geologists, the metropolitan area is built on a small
peninsula composed mainly of sedimentary rock deposited over the past
100 million years or so. The growth of the city eastwards is bounded
by foothills of the more mountainous interior of Lebanon (sparsely
settled greenish brown region visible at upper right). While this
sedimentary platform is stable, the country of Lebanon is located
along a major transform fault zone, or region where the African and
Arabian tectonic plates are moving laterally in relation to (and
against) each other. This active tectonism creates an earthquake
hazard for the country. The Roum Fault, one of the fault strands that
is part of the transform boundary, is located directly to the south of
the Beirut metropolitan area. Other distinctive features visible in
this image include the Rafic Hariri Airport at lower right, the city
sports arena at center, and several areas of green and open space
(such as a large golf course at center). Also visible in the image are
several plumes of sediment along the coastline -- the most striking of
which are located near the airport. The general lack of vegetation in
the airport may promote higher degrees of soil transport by surface
water runoff or wind."


If that's what makes our Earth-only mindset puppet-masters like
yourself happy campers, then so be it. No wonder we're headed for
WWIII, $10/gallon and $1/kWh just as fast as you folks and fellow
rusemasters of the Old Testament-thumping kind can manage.

Keep pretending that all off-world matters simply do not matter, while
naysaying the ongoing demise of our frail environment at the same
time. After all, it's what your God(s) would appreciate more than
anything else.

BTW, did you go to your pretend atheist Sunday school and teach those
unfortunate kids how to lie their infomercial-spewing little butts
off, by way of avoiding the truth via excluding science or banishing
related evidence that could otherwise rock your mainstream status quo
boat?
. - Brad Guth


So what you just admitted is that your argument in this thread (and
your image analysis skills, for that matter) has been completely
reduced to you making illogical attacks on me; that's pathetic,
Brad....

"http://spaceflight.nasa.gov/gallery/images/station/crew-16/html/
iss016e021564.html
International Space Station Imagery
ISS016-E-021564 (7 Jan. 2008) --- Paris, France is featured in this
image photographed by an Expedition 16 crewmember on the International
Space Station. A crisp, clear winter day over France provided a
detailed view of the city of Paris. This image shows the recognizable
street pattern of the city - and some of the world's most notable
landmarks - along the Seine River. One of the main avenues radiating
like spokes from the Arc de Triomphe (lower right) is the Avenue des
Champs-Elysees running southeast to the Garden of Tuileries (Jardin
des Tuileries). The garden -- recognizable by its light green color
relative to the surrounding built materials -- was originally
commissioned by Catherine de Medici in 1559, and is now bounded by the
Place de la Concorde to the northeast and the Louvre museum along the
Seine River at the southeast end. Other, similarly colored parks and
greenspaces are visible throughout the image. Farther south on the
Seine is the Ile de la Cite, location of the famous Notre Dame
cathedral. Perhaps most prominent is the characteristic "A" profile of
the Eiffel Tower west of the Jardin des Tuileries, highlighted by
morning sunlight"
  #60  
Old February 25th 08, 12:38 AM posted to sci.space.shuttle,sci.space.history,sci.space.policy,sci.space.station
BradGuth
external usenet poster
 
Posts: 21,544
Default Great missions STS-122 & Expedition 16

On Feb 24, 2:30 pm, columbiaaccidentinvestigation
wrote:
On Feb 24, 2:08 pm, BradGuth wrote:



On Feb 24, 8:05 am, columbiaaccidentinvestigation


wrote:
On Feb 24, 7:27 am, BradGuth wrote:" Oddly, NASA/
Apollo moon was extensively 0.65~0.75 albedo reflective, because
those moon suits were worth an albedo of 0.85, and everything getting
xenon lamp spectrum illuminated to boot, because there's nothing
bluish about our NASA/Apollo unfiltered Kodak moments, and strangely
Venus is never anywhere in sight. Why are you so unable or unwilling
to deal with the truth of whatever's off-world? There's so much more
to space than mere eye candy. There's actual
science that's easily peer replicated, of photographic science telling
us about the given geology and mineralogy of places and of interesting
things other than Earth."


Laughing, you just posted the same old trash in a slightly repackaged
form, but the problem is you have not shown any reduction of your
idiocy or ignorance. In order to analyze the images the way you have,
you need to know not just the film type and speed, but the lens used,
the f/stop, and the resulting exposure time, all of which determine
how white an astronaut's space suit appears compared to the
background, midtones, etc. Once again, regurgitating numbers for what
"should have been" seen does not address the image itself, so you keep
looking for things, but you clearly do not understand what you are
looking at or how the image was produced. Now, the same principles I
have stated apply to the analysis of all images, so your diatribes
about what I'm not addressing are a joke, and your demands for me to
answer your loaded questions are an even bigger joke; thanks for the
Sunday morning laugh, Brad.... And no, I'm not "unwilling to deal with
the truth of whatever's off-world" as you just stated, but I do love
to learn about Earth, including from the unique view the astronauts
have aboard the ISS.


http://spaceflight.nasa.gov/gallery/...-16/html/iss01...
International Space Station Imagery
"ISS016-E-008436 (26 Oct. 2007) --- Beirut Metropolitan Area, Lebanon
is featured in this image photographed by an Expedition 16 crewmember
on the International Space Station. The capital of Lebanon, Beirut is
located along the southeastern shoreline of the Mediterranean Sea.
According to geologists, the metropolitan area is built on a small
peninsula composed mainly of sedimentary rock deposited over the past
100 million years or so. The growth of the city eastwards is bounded
by foothills of the more mountainous interior of Lebanon (sparsely
settled greenish brown region visible at upper right). While this
sedimentary platform is stable, the country of Lebanon is located
along a major transform fault zone, or region where the African and
Arabian tectonic plates are moving laterally in relation to (and
against) each other. This active tectonism creates an earthquake
hazard for the country. The Roum Fault, one of the fault strands that
is part of the transform boundary, is located directly to the south of
the Beirut metropolitan area. Other distinctive features visible in
this image include the Rafic Hariri Airport at lower right, the city
sports arena at center, and several areas of green and open space
(such as a large golf course at center). Also visible in the image are
several plumes of sediment along the coastline -- the most striking of
which are located near the airport. The general lack of vegetation in
the airport may promote higher degrees of soil transport by surface
water runoff or wind."


If that's what makes our Earth-only mindset puppet-masters like
yourself happy campers, then so be it. No wonder we're headed for
WWIII, $10/gallon and $1/kWh just as fast as you folks and fellow
rusemasters of the Old Testament-thumping kind can manage.

Keep pretending that all off-world matters simply do not matter, while
naysaying the ongoing demise of our frail environment at the same
time. After all, it's what your God(s) would appreciate more than
anything else.

BTW, did you go to your pretend atheist Sunday school and teach those
unfortunate kids how to lie their infomercial-spewing little butts
off, by way of avoiding the truth via excluding science or banishing
related evidence that could otherwise rock your mainstream status quo
boat?
. - Brad Guth


So what you just admitted is that your argument in this thread (and
your image analysis skills, for that matter) has been completely
reduced to you making illogical attacks on me; that's pathetic,
Brad....

"http://spaceflight.nasa.gov/gallery/images/station/crew-16/html/
iss016e021564.html
International Space Station Imagery
ISS016-E-021564 (7 Jan. 2008) --- Paris, France is featured in this
image photographed by an Expedition 16 crewmember on the International
Space Station. A crisp, clear winter day over France provided a
detailed view of the city of Paris. This image shows the recognizable
street pattern of the city - and some of the world's most notable
landmarks - along the Seine River. One of the main avenues radiating
like spokes from the Arc de Triomphe (lower right) is the Avenue des
Champs-Elysees running southeast to the Garden of Tuileries (Jardin
des Tuileries). The garden -- recognizable by its light green color
relative to the surrounding built materials -- was originally
commissioned by Catherine de Medici in 1559, and is now bounded by the
Place de la Concorde to the northeast and the Louvre museum along the
Seine River at the southeast end. Other, similarly colored parks and
greenspaces are visible throughout the image. Farther south on the
Seine is the Ile de la Cite, location of the famous Notre Dame
cathedral. Perhaps most prominent is the characteristic "A" profile of
the Eiffel Tower west of the Jardin des Tuileries, highlighted by
morning sunlight"


There's nothing pathetic about sharing the whole truth and nothing
but the truth. Apparently you've got a big problem with that, much as
you can't tolerate honestly deductive thinking unless it's Old
Testament-certified.

BTW, if Venus along with its unlimited local energy cache to burn (so
to speak) isn't ET doable (including on behalf of us), then perhaps no
other planet in the universe is worthy as a viable habitat or as a
mineral resource. With all the MRSA, staph and numerous hybrid forms
of humanly lethal pestilence running amok, not to mention animal/plant
extinctions and even hybrid plant rot taking place, and mother nature
going GW postal as we prepare ourselves for WWIII, Earth is not
exactly ET worthy, especially after humanity has so terribly pillaged,
raped and mostly burned off its fossil fuels with no apparent regard
for a future of having far less dry land for 1e10 souls to survive
upon.

Just for those of you who either can't or wouldn't dare think
independently within the box, much less think deductively outside it,
here's a little something that's quite interesting, as it gets the
peer-reviewed benefit of the doubt.

Alex Collier / By Michael Salla, PhD
http://www.exopolitics.org
http://www.rense.com/general54/zlecx.htm
http://utenti.lycos.it/paolaharris/acollier_eng.htm
http://www.exopolitics.org/Exo-Comment-66.htm
plus many other links worth getting our undivided attention.

For those of you hell-bent upon sticking with your terrestrially
limited God(s), never mind, because no matter what the evidence or
physics backing up the best available science, there's simply no hope
for those in charge of snookering humanity for all it's worth, or for
those otherwise simply self-dumbfounded past the point of no return.
In other words, there's not much sense in beating a dead horse.
.. - Brad Guth
 



