Pierre Vandevenne wrote:
> Martin Brown wrote in:
>> code should be able to get up to a factor of 3x better resolution on a
>> suitable target.
>> The difficulty would be in obtaining an accurate
> Any signal processing / mathematical background on how to increase the
> _resolution_ (not the signal to noise ratio) of a measuring device by
> random/semi-random/deterministic (whatever actually) perturbations?
Yes. There is lots, and some of it is still controversial decades later,
but it works.
Burg found a maximum entropy solution for 1-D time series in 1967 that
eventually revolutionised oil exploration. Frieden found one of the
imaging solutions in 1972. Computing power and horrendous algorithmic
difficulties stymied things for imaging until the late 1970s. Gull &
Daniell 1978, Image reconstruction from incomplete and noisy data,
Nature, 272, 686-690, should be in most libraries. From about 1980,
efficient, stable computer codes have been available for image
deconvolution.
Wolfram has a bit about it too (as does Numerical Recipes):
http://mathworld.wolfram.com/MaximumEntropyMethod.html
There is a price to pay. The resolution of a maxent reconstruction
depends on the local signal-to-noise ratio. So in hand-waving terms
you are very much more certain about the exact position and flux of a
bright star than you are of a dim one. There are also artefacts that are
different to those in normal classical diffraction-limited images.
A bit more detail from the NRAO school (I don't subscribe to their use
of a low-resolution image as a priori information), with links, is at:
http://www.cv.nrao.edu/~abridle/deconvol/node20.html
> Even a 1 dimensional example would suit me.
> If that works, great, but I want to understand the math behind it.
> Given a perfect system, I believe we can agree that the sampling gives 2.5
> data points at best, and that sampling 5 data points in the same domain
> could get an accurate value for those 2.5 data points.
> How does one get more than that?
Knowing that the sky never has negative brightness is the key. An
algorithm that is totally impractical but would work in principle is to
generate every possible positive image and blur each one with your known
point spread function. Compare the mock data with your actual data and
keep only those test images that fit your data to within the noise. You
then choose some representative average of all the test images that,
when blurred with the PSF, are consistent with your observations.
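To make that concrete, here is a toy brute-force sketch of exactly that
scheme in Python with numpy (the pixel count, PSF, noise level and
amplitude grid are all invented for illustration; at any realistic image
size the enumeration is of course hopeless):

import itertools
import numpy as np

rng = np.random.default_rng(1)

psf = np.array([0.25, 0.5, 0.25])             # known symmetric blur
truth = np.array([0.0, 1.0, 0.0, 0.6, 0.0])   # two close "point sources"
sigma = 0.05                                  # per-pixel noise r.m.s.

def blur(sky):
    return np.convolve(sky, psf, mode="same")

data = blur(truth) + rng.normal(0.0, sigma, truth.size)

levels = np.arange(0.0, 1.3, 0.1)     # coarse grid of positive amplitudes
accepted = []
for trial in itertools.product(levels, repeat=truth.size):
    trial = np.asarray(trial)
    chi2 = np.sum(((blur(trial) - data) / sigma) ** 2)
    if chi2 < 2 * truth.size:         # "fits the data to within the noise"
        accepted.append(trial)

if accepted:
    estimate = np.mean(accepted, axis=0)
    print(len(accepted), "acceptable skies; their average:",
          np.round(estimate, 2))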
> If it's possible to get more than the maximal resolution, what is the
> process behind it?
There are strong hints in the image of an unresolved point source that
it is truly unresolved over your maximum baseline. Good signal-to-noise
means that you can extrapolate conservatively what happens at unmeasured
spatial frequencies instead of just assuming classically that they are
all zero. It works extremely well in aperture synthesis radio astronomy
and not quite as well in conventional optics.
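Since a one-dimensional example was asked for, here is a toy sketch of
the frequency-extrapolation idea in Python with numpy. It is an
alternating-projection (Gerchberg/POCS style) iteration, not the maxent
algorithm itself, and the band limit, source positions and iteration
count are all invented for illustration:

import numpy as np

n = 64
truth = np.zeros(n)
truth[30] = 1.0                      # bright point source
truth[36] = 0.3                      # fainter neighbour

# Pretend only the low spatial frequencies were measured (no noise here).
k = np.fft.fftfreq(n) * n
low = np.abs(k) <= 5                 # the "measured" band
measured = np.where(low, np.fft.fft(truth), 0.0)

classical = np.real(np.fft.ifft(measured))   # the diffraction-limited image

img = classical.copy()
for _ in range(500):
    img = np.clip(img, 0.0, None)    # positivity in the image domain
    G = np.fft.fft(img)
    G[low] = measured[low]           # re-impose the measured frequencies
    img = np.real(np.fft.ifft(G))
img = np.clip(img, 0.0, None)

print("rms error, band-limited image:   ",
      round(float(np.sqrt(np.mean((classical - truth) ** 2))), 4))
print("rms error, positivity-constrained:",
      round(float(np.sqrt(np.mean((img - truth) ** 2))), 4))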
This sort of extrapolation can be worth a factor of 3 improvement in
resolution for bright
detail under favourable conditions. There are other families of
regularised deconvolution around too. Maximum smoothness is popular
because it is much easier to compute.
Unsharp masking mimics some of these properties for a symmetric PSF.
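For reference, unsharp masking in its simplest form is just the image
plus a gain times (image minus smoothed image); a minimal 1-D numpy
sketch, with a made-up kernel and gain:

import numpy as np

def unsharp_mask(img, kernel, gain=1.0):
    # sharpened = image + gain * (image - smoothed image)
    smoothed = np.convolve(img, kernel, mode="same")
    return img + gain * (img - smoothed)

kernel = np.array([0.25, 0.5, 0.25])        # symmetric 3-point smoothing kernel
img = np.array([0.0, 0.2, 1.0, 0.2, 0.0])
print(np.round(unsharp_mask(img, kernel, gain=2.0), 2))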
Regards,
Martin Brown