  #4  
January 26th 05, 03:57 AM
Greg Hennessy

In article ,
greywolf42 wrote:
I've told you that "resolution" is the incorrect word, and
sensitivity is the correct one,


And I see that you couldn't point me to any text that supports your case.


Well, you never asked for any. You also never pointed to any text that
supported your claim that resolution was the correct word. But if you
want to be pointed to text indicating that the resolution of the DMR is
in degrees, try
http://aether.lbl.gov/www/projects/c...MR_Images.html

begin quote:
Cosmic microwave background temperature data were extracted from
the released FITS files and then combined into two linear
combinations. The first is a weighted sum of the 53 and 90 GHz
channels which gives the highest signal-to-noise ratio for cosmic
temperature variations but includes the Milky Way Galaxy as well. In
the second linear combination, a multiple of the 31 GHz map is
subtracted from a weighted sum of the 53 plus 90 GHz channels to give
a "reduced map" that gives zero response to the observed Galaxy, zero
response to free-free emission, but full response to variations in the
cosmic temperature.

These maps have been smoothed with a 7 degree beam, giving an
effective angular resolution of 10 degrees. An all-sky image in
Galactic coordinates is plotted using the equal-area Mollweide
projection. The plane of the Milky Way Galaxy is horizontal across the
middle of each picture. Sagittarius is in the center of the map, Orion
is to the right, and Cygnus is to the left.
end quote.
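
A side note on how the two numbers in that quote fit together: smoothing
a map made with a roughly 7 degree FWHM beam by a further 7 degree
Gaussian adds the widths in quadrature, sqrt(7^2 + 7^2) = 9.9 degrees,
which is presumably where the quoted effective resolution of about 10
degrees comes from. That is my reading of the numbers, not something the
page spells out.

And for what the "reduced map" business means in practice, here is a toy
numerical sketch of the idea. The free-free spectral index (-2.15), the
equal weighting of the 53 and 90 GHz channels, and the simplified units
are assumptions I made for illustration; they are not the coefficients
COBE actually used:

def freefree(nu_ghz, index=-2.15):
    # Toy free-free spectrum, normalized to 1 at 31 GHz (assumed index).
    return (nu_ghz / 31.0) ** index

# Start from an assumed equal-weight sum of the 53 and 90 GHz channels.
w53, w90 = 0.5, 0.5

# Choose the 31 GHz coefficient so the free-free response cancels.
a = (w53 * freefree(53.0) + w90 * freefree(90.0)) / freefree(31.0)

# Renormalize so a frequency-independent (CMB-like) signal has unit gain.
norm = w53 + w90 - a
w53, w90, a = w53 / norm, w90 / norm, a / norm

print("weights: %+.3f (53 GHz) %+.3f (90 GHz) %+.3f (31 GHz)" % (w53, w90, -a))
print("CMB gain: %.3f" % (w53 + w90 - a))
print("free-free gain: %.2e" % (w53 * freefree(53.0) + w90 * freefree(90.0) - a * freefree(31.0)))

The only point of the exercise is that the channel weights are chosen so
a Galaxy-like spectrum cancels while a frequency-independent cosmic
signal comes through at unit gain, which is exactly what the quote
describes.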

And Roberts and Joseph disagree with your word use.


Well, if they do disagree, they can tell me. Joseph used the word in
quotes, and Roberts never mentioned me. You are using them as an
argument from authority.

But the issue is not
what word to use. Call it 'sensitivity' if you like. I'm not discussing
sigma (the statistical error). I'm discussing the physical resolution of
the "instrument" -- not the claimed error bar in a given experiment.


Well, I *am* discussing the sensitivity, and I'm discussing the ratio
of the signal strength to the statistical error in that signal. And
I'm also discussing the claimed error bar in a given experiment. Given
that I'm talking about that, I will admit to not knowing what you are
talking about. And
the sensitivity of a radiometer increases when the exposure time
increases.
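
To put a number on that last sentence, here is a minimal sketch of the
radiometer equation, delta_T = T_sys / sqrt(bandwidth * time). The system
temperature and bandwidth below are round numbers I picked purely for
illustration, not the actual DMR values:

import math

def radiometer_noise(t_sys_kelvin, bandwidth_hz, integration_s):
    # Ideal radiometer equation: noise falls as 1/sqrt(observing time).
    return t_sys_kelvin / math.sqrt(bandwidth_hz * integration_s)

# Illustrative values only: 100 K system temperature, 1 GHz bandwidth.
for seconds in (1.0, 100.0, 1.0e4, 1.0e6):
    print("%8.0f s -> %.2e K" % (seconds, radiometer_noise(100.0, 1.0e9, seconds)))

Run it and the noise drops by a factor of 10 for every factor of 100 in
observing time, which is the square-root dependence I am talking about.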

and quoted you the paper that shows
that the resolution of the instrument in question is 7 degrees, not a
number of microK.


But Roberts and Joseph and Bjoern claim a few microK.


The microK figure is the sensitivity of the instrument.

The resolution is 7 degrees. Please use correct terminology.

A cite for this terminology is given in the COBE documentation:
http://lambda.gsfc.nasa.gov/product/cobe/about_dmr.cfm

begin quote:

Each differential radiometer measures the difference in power received
from two directions in the sky separated by 60 degrees, using a pair
of horn antennas. Each antenna has a 7 degree (FWHM) beam.

end quote.

The unit of the sensitivity of the instrument is
Kelvin,


I believe that you are mistaken. The device is simply a multiple channel
intensity recorder, with 100 channels. Each channel "sensitive" to 0.1%.


You may believe anything you wish. However, to prove me wrong you need
to document a claim. You provide no documentation whatsoever about
your claim of 100 channels, or the sensitivity of each channel. First
of all, you refer to "the device" when there are at least two devices
in question, one the COBE FIRAS, the second the DMR. The instrument
used to determine the temperature was the FIRAS, which had 512 channels,
as documented in
http://adsabs.harvard.edu/cgi-bin/bi...pJ...420..457F
The instrument used to determine the temperature fluctuations was the
COBE DMR, which was actually three independent instruments that each
observed at a single
wavelength. http://lambda.gsfc.nasa.gov/product/cobe/about_dmr.cfm

and the relationship between sensitivity and observing time is
called the "radiometer equation" and can easily be found in any
standard text, including web pages such as
http://www.strw.leidenuniv.nl/~pvdwe.../awt2_13d.html
or
http://scienceworld.wolfram.com/phys...rEquation.html


And that does not match reality. Because the data is stored in binary form.
No matter how long you run the device, you can never exceed the capacity of
the storage medium.


What does this have to do with anything? Since no one has ever claimed
that the capacity of a storage medium has been exceeded, why do you
object? COBE had an onboard storage capacity, and the data was
telemetered down. If you wish to assert that the capacity of a storage
medium was exceeded, please provide documentation for it. If you
wish to claim that the quoted equation relating the sensitivity of an
instrument to the exposure time does not match reality, give the
specific reason and documentation.

An increase in sensitivity (meaning the error going down) as the
observing time increases is simple data analysis 101.


But we aren't discussing sigma -- even if you call it sensitivity, or call
it resolution.


I am EXPRESSLY discussing the sigma of the measurement. I STARTED
this discussion by disagreeing with you when you claimed COBE found
nothing but noise. I pointed out the signal strength was many times
the sigma of that signal.

The physical resolution of the instrument is the same,
whether one makes one measurement or 1 million.


I am discussing the sensitivity of the instrument. Which does change
quite a lot if you take one measurement or one million measurements.
Which is one of the reasons I object to your usage of the word
resolution in an incorrect fashion.
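
If it helps, here is a tiny sketch of why the number of measurements
matters for the sensitivity. The numbers (a 30 microK signal buried in
1000 microK of per-sample noise) are made up for illustration, not taken
from the DMR papers:

import random, statistics

random.seed(1)
signal_uK = 30.0     # hypothetical true signal, in microkelvin
noise_uK = 1000.0    # hypothetical per-sample noise, in microkelvin

for n in (1, 100, 10000, 1000000):
    samples = [signal_uK + random.gauss(0.0, noise_uK) for _ in range(n)]
    mean = statistics.mean(samples)
    sigma_of_mean = noise_uK / n ** 0.5  # standard error shrinks as 1/sqrt(n)
    print("n=%8d  mean=%8.1f uK  sigma of mean=%8.2f uK" % (n, mean, sigma_of_mean))

With one sample a 30 microK signal is invisible; with a million samples
the error on the mean is about 1 microK and the same signal stands out at
many sigma. The beam, and hence the resolution, never changed.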