Instantaneous vs. Time-Averaged Seeing
On Fri, 3 Dec 2004 19:02:23 +0000 (UTC), Pierre Vandevenne
wrote:
I have a very basic understanding of digital signal processing at the
mathematical level and what is achieved in terms of resolution is
apparently beyond what can be achieved in the framework of my limited
understanding.
I would appreciate immensely if someone specialized in signal processing
could explain how this works.
I can't explain the math, Pierre, but it makes sense to me in terms of the
amount of information collected. The mistake is in thinking that the
CCD's five available pixels will always record the same thing. If
multiple images record the same information over and over then nothing
will be gained by taking more images. If the camera and subject never
move so that the pixels always record the same information there is
nothing to be gained by combining the images. Average combinging them
will give the same result as a single image -- a single value with no
variance. But in reality there is motion of the image (as well as real
variance) so you aren't just recording the same information over and over
on the few pixels you have available to record the light from Titan.
You're taking a new sample of light coming from Titan each time, and each
sample is slightly different. The amount of information is increased not
by increasing the resolution of the instrument but by sampling multiple
times with the same instrument. If you're just recording the exact same
part of Titan on the same pixel each time (i.e., if there is zero movement
in the image), you're still getting a distribution of values from repeated
samples rather than a single value, and that increases the precision of the
measurement even though it doesn't increase spatial resolution. But you're
not always going to get the same part of the image recorded on the same
pixel each time (especially in mediocre seeing) so you're getting a
spatial distribution as well. In a sense, in one image you're sort of
seeing between the pixels of a previous image.
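If it helps to see it with numbers, here's a toy sketch in Python of what I
mean (every number in it -- the brightness profile, the noise level, the
frame count -- is invented for illustration, and it's not meant to be any
particular stacking program's actual algorithm). A five-pixel detector looks
at a finely detailed "true" profile that lands in a slightly different spot
each frame; because the sketch knows each frame's shift, it can put every
coarse reading back on a fine grid at the spot that pixel actually sampled,
and the stack traces out far more of the profile's shape than any single
five-number frame does.

import numpy as np

rng = np.random.default_rng(0)

# A "true" brightness profile across the disk, on a grid 8x finer than
# the camera's pixel spacing (all numbers invented for illustration).
fine_per_pixel = 8
n_pixels = 5
n_fine = n_pixels * fine_per_pixel
x = np.linspace(0.0, 1.0, n_fine)
truth = np.exp(-((x - 0.35) / 0.10) ** 2) + 0.6 * np.exp(-((x - 0.70) / 0.07) ** 2)

def observe(shift, noise=0.02):
    """One exposure: the image lands on the detector displaced by `shift`
    fine cells; each of the 5 pixels averages the fine cells it covers,
    and a little noise is added.  Returns just 5 numbers."""
    shifted = np.roll(truth, shift)
    frame = shifted.reshape(n_pixels, fine_per_pixel).mean(axis=1)
    return frame + rng.normal(0.0, noise, n_pixels)

# Stack many frames: for each one, put its 5 values back on the fine grid
# at the positions the pixel centres actually sampled.  (Here we "know"
# the shift; real stacking software estimates it by registering frames.)
sum_grid = np.zeros(n_fine)
hits = np.zeros(n_fine)
centres = np.arange(n_pixels) * fine_per_pixel + fine_per_pixel // 2

for _ in range(5000):
    shift = rng.integers(0, fine_per_pixel)
    frame = observe(shift)
    positions = (centres - shift) % n_fine  # where each pixel centre fell
    sum_grid[positions] += frame
    hits[positions] += 1

stacked = sum_grid / hits  # finely sampled profile built from coarse frames

# One frame alone samples only 5 points; the stack samples all 40, so the
# shape of both bumps shows up at a sampling no single frame provides.
print("fine cells sampled by one frame :", n_pixels)
print("fine cells sampled by the stack :", int((hits > 0).sum()))
print("stacked profile:", np.round(stacked, 2))

The same averaging also beats down the noise in each fine cell, which is the
precision gain I mentioned above.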
I used to do something similar with a manual 35mm SLR camera that used an
averaging behind-the-lens exposure meter. The meter was a single-element
detector, giving a reading averaged over the entire scene, i.e., zero
spatial resolution across the scene. But by moving the camera around and
watching the reading change I could get an idea of the variations of light
in two dimensions -- based on multiple readings with a zero-resolution
instrument -- and estimate what exposure was necessary for a particular
image element within the scene. Thus I appeared to be exceeding the
theoretical resolving limit (zero) of the meter. But that limit applies
to a single reading.
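Here's a rough sketch of that meter trick, again with made-up numbers (the
scene, the field of view, and the step size are all invented): the "meter"
function only ever returns a single averaged value, yet sweeping it across
the scene and keeping every reading maps out the bright-sky/dark-foreground
structure in two dimensions.

import numpy as np

rng = np.random.default_rng(1)

# An invented 64x64 "scene": bright sky over a darker foreground, with a
# little random texture.
scene = np.vstack([np.full((32, 64), 0.9), np.full((32, 64), 0.2)])
scene += rng.normal(0.0, 0.02, scene.shape)

def meter_reading(row, col, fov=16):
    """One zero-resolution reading: a single number, the average over the
    meter's whole field of view.  On its own it says nothing about where
    the light is within that field."""
    return scene[row:row + fov, col:col + fov].mean()

# Sweep the meter across the scene in steps and keep every reading.  The
# grid of readings, taken together, maps out the bright/dark structure
# that no individual reading can show.
step = 8
rows = range(0, scene.shape[0] - 16 + 1, step)
cols = range(0, scene.shape[1] - 16 + 1, step)
readings = np.array([[meter_reading(r, c) for c in cols] for r in rows])

print(np.round(readings, 2))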
All of this has similarities to interferometry -- basically the same idea
of collecting more information and combining it -- but while it's been
explained to me by someone at an interferometry facility, my ability to
regurgitate it on demand is severely limited. The information was
collected (it made sense at the time) but scrambled in the poor "seeing"
of my brain, I guess.
And if this actually works, my next question will be, "why did those NASA
guys put their scope in space?"
There are larger telescopes on Earth and with adaptive optics and other
techniques they can record finer resolution than the telescopes in space.
But the ones in space don't have weather or daytime, aren't limited to a
small area of the field for the best resolution, don't require good seeing
for diffraction-limited observing like AO systems do, don't have skyglow
from the atmosphere (I suppose imaging through the gegenschein is
limiting), and many other factors that make it harder to do these things
from Earth. They can take longer exposures and don't have to take multiple
exposures and stack and combine the images (though maybe something similar
is still done?). The information arriving at the telescope in space hasn't
been spread around to where you have to go chasing it and putting it back
in order.
Mike Simmons