Titan image from the USA- CHANCES ARE YOU HAVE HIS SEEING



 
 
  #1  
Old December 3rd 04, 04:10 PM
Howard Lester


"Chris1011" wrote

Just for comparison, here's a link to another guy who apparently imaged
Titan with just an 8" scope and similarly good results:
http://www.paraimservices.com/nbc/as...e/protitan.htm

I'm dumbfounded!


By what - that you didn't come up with the "Smith-Cassegrain" design?

HL


  #2  
Old December 3rd 04, 06:56 PM
Pierre Vandevenne

Martin Brown wrote:

code should be able to get up to a factor of 3x better resolution on a
suitable target. The difficulty would be in obtaining an accurate


Any signal processing / mathematical background on how to increase the
_resolution_ (not the signal to noise ratio) of a measuring device by
random/semi-random/deterministic (whatever it actually is) perturbations?

Even a 1 dimensional example would suit me.

If that works, great, but I want to understand the math behind it.

Given a perfect system, I believe we can agree that the sampling gives 2.5
data points at best, and that sampling 5 data points in the same domain
could get an accurate value for those 2.5 data points.

How does one get more than that?

If it's possible to get more than the maximal resolution, what is the
process behind it?

---
Pierre Vandevenne - DataRescue sa/nv - www.datarescue.com
The IDA Pro Disassembler & Debugger - world leader in hostile code analysis
PhotoRescue - advanced data recovery for digital photographic media
latest review: http://www.pcmag.com/article2/0,1759,1590497,00.asp
  #3  
Old December 3rd 04, 08:37 PM
Martin Brown

Pierre Vandevenne wrote:

Martin Brown wrote:

code should be able to get up to a factor of 3x better resolution on a
suitable target. The difficulty would be in obtaining an accurate


Any signal processing / mathematical background on how to increase the
_resolution_ (not the signal to noise ratio) of a measuring device by
random/semi-random/deterministic (whatever it actually is) perturbations?


Yes. There is a lot, and some of it is still controversial decades later,
but it works.

Burg found a maximum entropy solution for 1-D time series in 1967 that
eventually revolutionised oil exploration. Frieden found one of the
imaging solutions in 1972. Computing power and horrendous algorithmic
difficulties stymied things for imaging until the late '70s. Gull &
Daniell 1978, "Image reconstruction from incomplete and noisy data",
Nature, 272, 686-690, should be in most libraries. From about 1980,
efficient, stable computer codes have been available for image
deconvolution.

Wolfram has a bit about it too (as does Numerical Recipes):

http://mathworld.wolfram.com/MaximumEntropyMethod.html

There is a price to pay. The resolution of a maxent reconstruction
depends on the local signal-to-noise ratio. So, in hand-waving terms,
you are very much more certain about the exact position and flux of a
bright star than you are of a dim one. There are also artefacts that are
different from those in normal classical diffraction-limited images.

A bit more detail from the NRAO school (I don't subscribe to their use
of a low resolution image as a priori information) with links is at:

http://www.cv.nrao.edu/~abridle/deconvol/node20.html

Even a 1 dimensional example would suit me.

If that works, great, but I want to understand the math behind it.

Given a perfect system, I believe we can agree that the sampling gives 2.5
data points at best, and that sampling 5 data points in the same domain
could get an accurate value for those 2.5 data points.

How does one get more than that?


Knowing that the sky never has negative brightness is the key. An
algorithm that is totally impractical but would work in principle is to
generate every possible positive image and blur each with your known
point spread function. Compare the mock data with your actual data and
keep only those test images that fit your data to within the noise. You
then choose some representative average of all the test images that,
when blurred with the PSF, are consistent with your observations.
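
A minimal 1-D sketch of the same idea, using a Richardson-Lucy style
multiplicative update (one member of the positivity-constrained family,
not the maxent codes referred to above); the grid size, PSF width and
noise level are made-up test values:

import numpy as np

# Synthetic 1-D test problem; all numbers are illustrative assumptions.
rng = np.random.default_rng(0)
n = 200
truth = np.zeros(n)
truth[60] = 50.0           # bright unresolved "star"
truth[63] = 30.0           # close companion below the classical resolution
truth[140:150] = 5.0       # faint extended feature

x = np.arange(-15, 16)
psf = np.exp(-0.5 * (x / 4.0) ** 2)
psf /= psf.sum()           # normalised point spread function

def blur(f):
    # Forward model: convolve a trial image with the known PSF.
    return np.convolve(f, psf, mode="same")

def blur_t(f):
    # Adjoint of the forward model (correlation with the PSF).
    return np.convolve(f, psf[::-1], mode="same")

data = blur(truth) + rng.normal(0.0, 0.2, n)   # blurred, noisy measurement

# Richardson-Lucy: a positivity-preserving iterative deconvolution.
f = np.full(n, max(float(data.mean()), 1e-6))  # flat, positive starting image
for _ in range(200):
    ratio = np.clip(data, 0.0, None) / np.maximum(blur(f), 1e-9)
    f = f * blur_t(ratio)                      # multiplicative update keeps f >= 0

The two close peaks separate in the reconstruction while the faint
extended patch stays smooth, which is the signal-to-noise dependence
described above.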

If it's possible to get more than the maximal resolution, what is the
process behind it?


There are strong hints in the image of an unresolved point source that
it is truly unresolved over your maximum baseline. Good signal to noise
means that you can extrapolate conservatively what happens at unmeasured
spatial frequencies instead of just assuming classically that they are
all zero. It works extremely well in aperture synthesis radio astronomy
and not quite as well in conventional optics.

It can be worth a factor of 3 improvement in resolution for bright
detail under favourable conditions. There are other families of
regularised deconvolution around too. Maximum smoothness is popular
because it is much easier to compute.

Unsharp masking mimics some of these properties for a symmetric psf.
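
For comparison, a minimal numpy/scipy sketch of the unsharp mask itself
(the Gaussian width and the gain are arbitrary illustrative choices):

import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, sigma=2.0, gain=1.5):
    # Sharpen by adding back the difference between the image and a
    # blurred copy: out = image + gain * (image - blur(image)).
    image = np.asarray(image, dtype=float)
    return image + gain * (image - gaussian_filter(image, sigma))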

Regards,
Martin Brown
  #4  
Old December 4th 04, 12:11 AM
RichA

On 03 Dec 2004 15:53:17 GMT, (Chris1011) wrote:

Just for comparison, here's a link to another guy who apparently imaged
Titan with just an 8" scope and similarly good results:
http://www.paraimservices.com/nbc/as...e/protitan.htm

I'm dumbfounded!

Roland Christen


And with a Dynamax "Smith" Cassegrain to boot!
-Rich
  #5  
Old December 4th 04, 06:25 PM
Jon

Jon,

I'm not sure you followed the links and read what was said there. It's a
little beyond a simple good/bad binary logic.
You can quantify the probability of getting good frames based strictly
on your seeing values (not estimated but MEASURED) and scope size.
If you don't have the means to measure your seeing, something like a DIMM
setup, I don't see how you could make any judgment.
Most amateurs do not measure seeing and are not even aware that it
could be measured.
If you measured your seeing, and the relationship between your seeing
and your scope size says it should produce N good frames over a period
of time, and it produced none, AND you measured your telescope as an
instrument in the configuration you were using for imaging AND it was
diffraction limited, THEN please write a paper and publish the results.
You might be discovering a new mathematical model for atmospheric
turbulence. Until then, I'm not with you or Roland but with Kolmogorov.

best regards,
matt tudor


Matt, I admit I only glanced at the links. I took a closer look at the
simple webpage. I am getting into deep water here, but if I understand it
correctly, the equation shown above the graph gives the probability of
getting a diffraction-limited frame for a given seeing and telescope
aperture. How was this equation arrived at? Empirical data, theoretical
modelling, or both? I assume the idea is that over (short) time there is
variation around the mean value of seeing, i.e. multiple seeing
measurements would show a certain distribution around a mean, and one
tail of that distribution will correspond to the better-than-average
seeing.
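
One candidate for that kind of equation is Fried's (1978) lucky-imaging
probability, P = 5.6 exp[-0.1557 (D/r0)^2], valid for roughly D/r0 >= 3.5.
I am only guessing that this is the equation on the page, but assuming it
is, it can be evaluated like this:

import math

def lucky_frame_probability(diameter_m, seeing_arcsec, wavelength_m=550e-9):
    # Fried (1978) estimate of the probability that a short exposure is
    # "lucky" (residual wavefront variance <= 1 rad^2, essentially
    # diffraction limited), assuming seeing FWHM ~ 0.98*lambda/r0.
    # The approximation holds roughly for D/r0 >= 3.5.
    seeing_rad = seeing_arcsec * math.pi / (180.0 * 3600.0)
    r0 = 0.98 * wavelength_m / seeing_rad          # Fried parameter [m]
    return 5.6 * math.exp(-0.1557 * (diameter_m / r0) ** 2)

# Example: an 8" (0.20 m) aperture in 2 arcsec seeing at 550 nm
print(lucky_frame_probability(0.20, 2.0))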

In my opinion (not supported by DIMM, but by subjective seeing assessments
based on the Pickering scale), the shape of the distribution can vary
greatly between nights. Especially when there is a strong jet stream above,
there is little variation around the mean, and I think on those nights
there will not be any really good frames, however long one captures. Making
the exposures short will not overcome the problem; the "fast" seeing means
a constantly blurred image.

On a side note, I have looked a bit into the possibility of making DIMM
measurements with a web camera. The main obstacle is the software; I have
some DIMM software, but it is not compatible with any web cameras.
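
The reduction itself is a short calculation once the software delivers the
spot positions; this sketch uses the Sarazin & Roddier (1990) approximation
for the longitudinal differential motion (published coefficients differ
slightly between papers, so treat the numbers as approximate):

import math

def seeing_from_dimm(var_long_arcsec2, aperture_m, baseline_m,
                     wavelength_m=500e-9):
    # Convert the variance of longitudinal differential image motion
    # (arcsec^2) measured by a DIMM into Fried's r0 and the seeing FWHM,
    # using the Sarazin & Roddier (1990) approximation.
    rad = math.pi / (180.0 * 3600.0)               # arcsec -> radians
    var_rad2 = var_long_arcsec2 * rad ** 2
    k = 2.0 * wavelength_m ** 2 * (0.179 * aperture_m ** (-1.0 / 3.0)
                                   - 0.0968 * baseline_m ** (-1.0 / 3.0))
    r0 = (k / var_rad2) ** (3.0 / 5.0)             # Fried parameter [m]
    seeing_arcsec = 0.98 * wavelength_m / r0 / rad
    return r0, seeing_arcsec

# Example: 5 cm sub-apertures on a 20 cm baseline, 0.09 arcsec^2 variance
print(seeing_from_dimm(0.09, 0.05, 0.20))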

On my webpage http://home.no.net/jonbent/Sky.html#Anchor-Seein-40104
I have made an attempt to correlate my seeing estimates to meteorological
conditions. I freely admit I have no theoretical background in either
meteorology or astronomy, so my speculations may seem naive. If you have
any comments or corrections please contact me.
Jon Kristoffersen

  #6  
Old December 5th 04, 12:03 AM
Dan Mckenna

Hi Jon,

I just looked at your web site.

In regards to seeing:

The weather balloon reduction needs a few adjustments.
Look up "Dewan optical turbulence".

You will need to adjust the optical turbulence as a function of altitude
by the density of air to compute the refractive index.

The wind shear effect goes as the square of the wind gradient. You also
need an outer scale, the depth of the optically significant turbulence.
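
For what it's worth, a rough sketch of the density scaling (the usual
CT^2 -> Cn^2 conversion) and the standard Cn^2 profile-to-seeing integral
is below; the Dewan parameterisation of CT^2 itself (shear squared plus
outer scale) is not included, and the constants are the standard
optical-wavelength values, so treat it as a starting point rather than a
finished reduction:

import numpy as np

def ct2_to_cn2(ct2, pressure_mbar, temperature_k):
    # Scale the temperature structure constant CT^2 by air density (via
    # pressure and temperature) to get the refractive-index structure
    # constant Cn^2. 79e-6 K/mbar is the usual optical constant.
    return (79e-6 * pressure_mbar / temperature_k ** 2) ** 2 * ct2

def seeing_from_cn2_profile(cn2, heights_m, wavelength_m=500e-9, zenith_deg=0.0):
    # Integrate a Cn^2(h) profile to Fried's r0 and the seeing FWHM.
    airmass = 1.0 / np.cos(np.radians(zenith_deg))
    integral = np.trapz(cn2, heights_m)                  # units: m^(1/3)
    r0 = (0.423 * (2.0 * np.pi / wavelength_m) ** 2
          * airmass * integral) ** (-3.0 / 5.0)          # Fried parameter [m]
    seeing_arcsec = np.degrees(0.98 * wavelength_m / r0) * 3600.0
    return r0, seeing_arcsec

Convert the balloon levels with the first function and integrate with the
second, and you get the seeing that the profile implies.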

I have been looking at this for a few years as well and have recently
come back from an observing run where I used a SCIDAR to measure the
altitude profile of the seeing while also measuring the image quality
from one second exposures.

It seems as if most of the seeing came from the layers near the ground.
We had an approaching front at the time and the upper winds were over 100
mph. The upper seeing was only about 10% or less of that from the lower
layers (within 3000 feet) and the telescope combined.

The upper layers appeared to have larger time scales compared to the
lower layers. The near ground layers had the smallest time scales and
largest amplitude.

The seeing varied over the 4 day run from 2.5 arc seconds down to 0.45
arc seconds.

It looks like the near ground layer and telescope enclosure have the
greatest amplitude.

Dan

PS: The images being discussed in this thread are the results of
image processing and do not represent the actual reference image.

Even with adaptive optics, people have "over-reconstructed" images,
as the system PSF is unstable and reconstruction tends to magnify
these errors.







  #7  
Old December 5th 04, 06:54 AM
Dan Chaffee

On Sat, 04 Dec 2004 17:03:37 -0700, Dan Mckenna
wrote:


I have been looking at this for a few years as well and have recently
come back from an observing run where I used a SCIDAR to measure the
altitude profile of the seeing while also measuring the image quality
from one second exposures.


Dan,
Can you give us the location of these measurements? I am most
curious if other regions that experience different combinations of
low level buoyancy, mid level inversions, and different shear profiles
would lead you to the same generalizations.

DC
  #8  
Old December 5th 04, 07:12 AM
Dan Chaffee

On Sat, 04 Dec 2004 19:50:53 -0500, RichA wrote:


That's why I hate winter in the North part of the U.S. and Canada.
The seeing is high-frequency boiling due to cooling of the ground.
Even if an object is high up in the sky, you see that horrible,
1-5 arc second shimmering. It kills planetary images. You have to
have days of cloudy weather, so the land can keep cool and then a
clear night to avoid it, and even then it never matches the seeing
you get in summer.


From Kansas City, I have logged numerous planetary sessions during
winter months --sometimes well below freezing-- where 1 arcsec or less
seeing was present for at least 20 or 30 minutes at a stretch. These
are usually periods where the upper level jet is not directly
overhead, yet upper winds in general may be well over 50 kts. And
summer around here can have weeks of 5 arcsec seeing on end.

Dan
  #9  
Old December 5th 04, 04:54 PM
Dan Mckenna

Dan Chaffee wrote:

Dan C,

Most of my measurements are now from Mt Graham, taken at the VATT.
I have some data from the ridge at Mt Hopkins, Mt Bigelow, and a few
hours at Kitt Peak. I am going to inline mode now; see my replies below.

On Sat, 04 Dec 2004 17:03:37 -0700, Dan Mckenna
wrote:



I have been looking at this for a few years as well and have recently
come back from an observing run where I used a SCIDAR to measure the
altitude profile of the seeing while also measuring the image quality
from one second exposures.


Dan,
Can you give us the location of these measurements? I am most
curious if other regions that experience different combinations of
low level buoyancy,


Think in terms of atmospheric gravity waves. A good book is
"An Introduction to Atmospheric Gravity Waves" by C. J. Nappo.

Also, "Small Scale Processes in Geophysical Fluid Flows" by
L. H. Kantha & C. A. Clayson has good information on waves,
drainage flows, and flows over complex topography.

The nocturnal ground inversion is a place that can have intense wave
activity that can dominate the seeing.

If the area of interest is driven by a drainage flow it gets more
complicated. Drainage flows exist even for slopes less than 1 degree.


mid level inversions, and different shear profiles
would lead you to the same generalizations.


Mid-level inversions are also places of gravity wave creation; if an
inversion is associated with a jet, it becomes a place where ducted
waves in the jet propagate up or down.

I see wind shear regions in my data, and in some cases the wind shear
regions are due to propagating gravity waves.






DC

  #10  
Old December 6th 04, 05:03 AM
Dan Chaffee

On Sun, 05 Dec 2004 09:54:58 -0700, Dan Mckenna
wrote:


Think in terms of atmospheric gravity waves. A good book is
"An Introduction to Atmospheric Gravity Waves" by C. J. Nappo.

Also, "Small Scale Processes in Geophysical Fluid Flows" by
L. H. Kantha & C. A. Clayson.

Thanks, I'll be on the lookout for them.

Dan
 



