#1
On Fri, 3 Dec 2004 19:02:23 +0000 (UTC), Pierre Vandevenne wrote:

> I have a very basic understanding of digital signal processing at the mathematical level, and what is achieved in terms of resolution is apparently beyond what can be achieved in the framework of my limited understanding. I would appreciate it immensely if someone specialized in signal processing could explain how this works.

I can't explain the math, Pierre, but it makes sense to me in terms of the amount of information collected. The mistake is in thinking that the CCD's five available pixels will always record the same thing. If multiple images record the same information over and over, then nothing is gained by taking more images. If the camera and subject never move, so that the pixels always record the same information, there is nothing to be gained by combining the images: average-combining them will give the same result as a single image -- a single value with no variance.

But in reality there is motion of the image (as well as real variance), so you aren't just recording the same information over and over on the few pixels you have available to record the light from Titan. You're taking a new sample of the light coming from Titan each time, and each sample is slightly different. The amount of information is increased not by increasing the resolution of the instrument but by sampling multiple times with the same instrument.

If you're recording the exact same part of Titan on the same pixel each time (i.e., if there is zero movement in the image), you're still getting a distribution of values from a sample rather than a single value, and that increases the precision of the measurement, although it doesn't increase spatial resolution. But you're not always going to get the same part of the image recorded on the same pixel each time (especially in mediocre seeing), so you're getting a spatial distribution as well. In a sense, in one image you're seeing between the pixels of a previous image.
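Mike's point about repeated sampling increasing precision can be sketched numerically. This is my own illustration, not from the thread; the brightness and noise figures are made up:

```python
# Sketch: averaging repeated noisy samples of the same pixel shrinks the
# uncertainty of the estimate (roughly as 1/sqrt(N)), even though a single
# frame's spatial resolution is unchanged.
import random
import statistics

random.seed(42)

TRUE_BRIGHTNESS = 100.0   # hypothetical "true" pixel value
NOISE_SIGMA = 10.0        # assumed per-frame noise

def take_frame():
    """One noisy measurement of the same pixel."""
    return TRUE_BRIGHTNESS + random.gauss(0.0, NOISE_SIGMA)

single = take_frame()
stacked = statistics.mean(take_frame() for _ in range(400))

print(abs(single - TRUE_BRIGHTNESS))   # error of one frame: typically ~NOISE_SIGMA
print(abs(stacked - TRUE_BRIGHTNESS))  # error of the stack: typically far smaller
```

The stack of 400 frames measures the same single value as one frame, just much more precisely -- which matches Mike's remark that repeated sampling improves precision without, by itself, adding spatial resolution.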
I used to do something similar with a manual 35mm SLR camera that used an averaging behind-the-lens exposure meter. The meter was a single-element detector, giving a reading averaged over the entire scene, i.e., zero spatial resolution across the scene. But by moving the camera around and watching the reading change, I could get an idea of the variations of light in two dimensions -- based on multiple readings with a zero-resolution instrument -- and estimate what exposure was necessary for a particular image element within the scene. Thus I appeared to be exceeding the theoretical resolving limit (zero) of the meter. But that limit applies to a single reading.

All of this has similarities to interferometry -- basically the same idea of collecting more information and combining it -- but while it's been explained to me by someone at an interferometry facility, my ability to regurgitate it on demand is severely limited. The information was collected (it made sense at the time) but scrambled in the poor "seeing" of my brain, I guess.

> And if this actually works, my next question will be, "why did those NASA guys put their scope in space?"

There are larger telescopes on Earth, and with adaptive optics and other techniques they can record finer resolution than the telescopes in space. But the ones in space don't have weather or daytime, aren't limited to a small area of the field for the best resolution, don't require good seeing for diffraction-limited observing like AO systems do, don't have skyglow from the atmosphere (I suppose imaging through the gegenschein is limiting), and many other factors make these things harder to do from Earth. They can take longer exposures and don't have to take multiple exposures and stack and combine the images (maybe something similar is still done?). The information arriving at the telescope in space hasn't been spread around to where you have to go chasing it and putting it back in order.

Mike Simmons
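The light-meter trick above can be sketched the same way. The 1-D scene and meter window below are invented purely for illustration:

```python
# Sketch: a "zero-resolution" meter that only reports the average over its
# whole field of view can still map brightness variations if you move it
# around and compare readings at different pointings.
scene = [1, 1, 2, 8, 9, 3, 1, 1, 1, 2]  # hypothetical 1-D brightness profile

WINDOW = 4  # the meter averages over 4 scene elements at once

def meter_reading(offset):
    """Average brightness over the meter's field of view at this pointing."""
    patch = scene[offset:offset + WINDOW]
    return sum(patch) / len(patch)

# Readings at successive pointings reveal where the bright region lies,
# even though no single reading resolves anything within the window.
readings = [meter_reading(i) for i in range(len(scene) - WINDOW + 1)]
brightest_pointing = max(range(len(readings)), key=readings.__getitem__)
print(brightest_pointing)  # -> 2: the window covering the bright 8-9 region
```

No single reading carries any spatial information, yet the set of readings locates the bright patch -- multiple samples from a zero-resolution instrument, exactly as in the camera-meter anecdote.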
#2
Mike Simmons wrote: (snip of a good post)

Mike, I always look forward to your posts on s.a.a. Knowledgeable and sensible.

Ciao,
Bill Meyers
#3
Mike Simmons wrote:

> I can't explain the math, Pierre, but it makes sense to me in terms of the amount of information collected. The mistake is in thinking that ...

Yes, intuitively I tend to 'feel' just as you feel. It is nice to think that one photon coming from here and one photon coming from there could be spatially reorganized to augment the effective resolution. But it may also simply be impossible. That's why I am looking for mathematically verifiable stuff... Just as wheels appearing to rotate backwards are totally counter-intuitive, but once you see the math, it is crystal clear.

---
Pierre Vandevenne - DataRescue sa/nv - www.datarescue.com
The IDA Pro Disassembler & Debugger - world leader in hostile code analysis
PhotoRescue - advanced data recovery for digital photographic media
latest review: http://www.pcmag.com/article2/0,1759,1590497,00.asp
#4
Mike Simmons wrote:
> On Fri, 3 Dec 2004 19:02:23 +0000 (UTC), Pierre Vandevenne wrote:
>
>> I have a very basic understanding of digital signal processing at the mathematical level, and what is achieved in terms of resolution is apparently beyond what can be achieved in the framework of my limited understanding. I would appreciate it immensely if someone specialized in signal processing could explain how this works.
>
> ...snip...
>
>> And if this actually works, my next question will be, "why did those NASA guys put their scope in space?"
>
> There are larger telescopes on Earth, and with adaptive optics and other techniques they can record finer resolution than the telescopes in space. But the ones in space don't have weather or daytime, aren't limited to a small area of the field for the best resolution, don't require good seeing for diffraction-limited observing like AO systems do, don't have skyglow from the atmosphere (I suppose imaging through the gegenschein is limiting), and many other factors make these things harder to do from Earth. They can take longer exposures and don't have to take multiple exposures and stack and combine the images (maybe something similar is still done?). The information arriving at the telescope in space hasn't been spread around to where you have to go chasing it and putting it back in order.

There is still interesting processing to do on space images. One certainly does want to combine multiple images -- if nothing else, to reject false structures from cosmic-ray impacts in the detectors. This is why individual Hubble exposures almost never go above half an orbit even when something is visible longer -- the amount of the image compromised starts to become too large. Standard procedure is to combine two exposures with rejection of wild pixels that are high in one, and for critical applications more.
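A toy version of that rejection step might look like this. This is my own sketch of the idea, not actual HST pipeline code; the frames, the ratio threshold, and the combine rule are all invented for illustration:

```python
# Sketch: combine two exposures, rejecting "wild" pixels that are high in
# only one frame -- the signature of a cosmic-ray hit in the detector.
frame_a = [10, 11,  9, 250, 10]  # 250: cosmic-ray hit in frame A only
frame_b = [11, 10, 10,  10, 95]  #  95: hit in frame B only

REJECT_RATIO = 3.0  # a pixel this many times brighter than its partner is rejected

def combine(a, b):
    out = []
    for x, y in zip(a, b):
        lo, hi = min(x, y), max(x, y)
        if hi > REJECT_RATIO * lo:
            out.append(lo)           # wild pixel: keep the clean value
        else:
            out.append((x + y) / 2)  # otherwise average the two frames
    return out

print(combine(frame_a, frame_b))  # -> [10.5, 10.5, 9.5, 10, 10]
```

Both hits vanish because each appears in only one exposure -- which is why a single long exposure, with no partner frame to vote against, is so much more vulnerable to cosmic rays.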
As was done on the Deep Fields, there is also a gain to be had from multiple observations slightly offset from one another, since the cameras had to incorporate compromises between pixel sampling and field of view. This technique, "drizzling", regains a bit of the resolution lost by undersampling the PSF core. And, incidentally, it gives images which look just beautiful when magnified to the level of the original pixels.

So some things are still the same all over...

Bill Keel
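A heavily simplified 1-D sketch of the drizzle idea follows. This is my own illustration; the real algorithm maps each input pixel onto a finer output grid with shrunken sub-pixel "drops" and proper weighting, which this simple interleave only approximates:

```python
# Sketch: each coarse exposure undersamples the scene, but exposures dithered
# by half a pixel can be dropped onto a finer output grid, regaining some of
# the resolution lost to the big pixels.
fine_scene = [0, 0, 1, 5, 9, 5, 1, 0, 0, 0]  # hypothetical true profile

def coarse_exposure(shift):
    """Camera pixels are 2 fine units wide; 'shift' dithers the pointing."""
    shifted = fine_scene[shift:] + [0] * shift
    return [(shifted[i] + shifted[i + 1]) / 2 for i in range(0, len(shifted), 2)]

# Two exposures dithered by half a coarse pixel (= 1 fine unit).
exp0 = coarse_exposure(0)  # [0.0, 3.0, 7.0, 0.5, 0.0]
exp1 = coarse_exposure(1)  # [0.5, 7.0, 3.0, 0.0, 0.0]

# Interleave the two dithered exposures onto the fine output grid.
drizzled = [v for pair in zip(exp0, exp1) for v in pair]
print(drizzled)  # -> [0.0, 0.5, 3.0, 7.0, 7.0, 3.0, 0.5, 0.0, 0.0, 0.0]
```

Either exposure alone has five samples and smears the peak; the two half-pixel-offset exposures together yield ten samples that trace its shape far better -- the same information-from-dithering gain Bill describes for the Deep Fields.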
#5
On 6 Dec 2004 21:26:10 -0600, William C. Keel wrote:

> There is still interesting processing to do on space images... So some things are still the same all over...

Thanks for the out-of-this-world reality for those of us stuck on Earth, Bill.

Mike Simmons