February 23rd 07, 12:27 PM posted to sci.image.processing,sci.astro.ccd-imaging,comp.graphics.algorithms
ImageAnalyst
Matching images from different sources.

Roberto:
I'm not sure why my lengthy reply from yesterday isn't there. I've
seen this once before from Google Groups in the past month - where it
says it posted successfully but then it never shows up. Anyway, it
was something about building up feature vectors. But I had another
thought. In some fields (medical, remote sensing, military) they have
a problem like yours. The terms you want to search for are "image
fusion" or "data fusion"; they have to do with aligning images from
different modalities - for example, how to overlay corresponding
physical slices from a CT image and an MRI image. I've never had to
do fusion myself, but I know it was (and maybe still is) a hot topic
in medical imaging in the 90's.
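In case it helps to see the idea concretely, here's a toy sketch (my own, not from any particular paper) of intensity-based registration using mutual information - the similarity measure the medical-fusion people often use, because it doesn't assume the two modalities map intensities the same way. Plain numpy, translation-only; a real system would also handle rotation, scale, and use a proper optimizer:

```python
# Toy illustration of cross-modality registration: find the integer
# (dy, dx) shift that maximizes mutual information between two images.
# Mutual information works across modalities (e.g. visible vs. infrared)
# because it only asks whether intensities in one image *predict*
# intensities in the other, not that they match.
import numpy as np

def mutual_information(a, b, bins=32):
    # Joint histogram of the two images' intensities -> joint pdf.
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1)   # marginal of image a
    py = pxy.sum(axis=0)   # marginal of image b
    nz = pxy > 0           # avoid log(0)
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz]))

def register_shift(fixed, moving, max_shift=5):
    # Brute-force search over integer translations; keep the shift
    # with the highest mutual information against the fixed image.
    best, best_mi = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            mi = mutual_information(fixed, shifted)
            if mi > best_mi:
                best_mi, best = mi, (dy, dx)
    return best

# Synthetic demo: the "infrared" image is an intensity-inverted,
# shifted copy of the visible one - no linear correlation at all,
# yet mutual information still finds the alignment.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
ir = np.roll(np.roll(1.0 - img, 3, axis=0), -2, axis=1)
print(register_shift(img, ir))  # prints (-3, 2)
```

Note the inverted intensities in the demo: cross-correlation would fail there, but mutual information doesn't care, which is exactly why it became popular for CT/MRI and visible/IR alignment.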
Try this:

http://www.google.com/search?hl=en&q=image+fusion

You just missed the image fusion conference, but maybe you can get
the proceedings, or go next year:
http://www.iqpc.com/cgi-bin/template...9&event=11435&

Hoping this posts (please Google!!!)
ImageAnalyst

On Feb 22, 6:59 pm, Roberto Waltman wrote:
Looking for information, algorithms, etc. on how to match images of
the same object obtained from different sources.

(Also on what would be the proper terminology to describe this
problem. I'm sure I am doing a poor job here. )

For example, I may take pictures of a cloud formation using three
cameras sensitive to the visible, infrared and ultraviolet spectra.
The cameras, although close to each other, may be far enough apart
to introduce parallax errors; they may have different resolutions;
the image captures may not be simultaneous, so the cloud shapes may
change slightly from one image to the next; etc.

By 'matching' I mean scaling and rotating the images so that they can
be overlaid in such a way that all the data in any area of the screen
is coming from the same 'region' in the physical world.

The matching process should be based only on the images themselves,
as I may not have enough information about the cameras' physical
locations and orientations.

I understand that in the most general case the images could be so
different that this problem is unsolvable, but I still expect to be
able to find (partial) solutions when some minimal level of
correlation exists.

Thanks,

Roberto Waltman

[ Please reply to the group,
return address is invalid ]