#1
greywolf42 wrote in message . ..

Joseph Lazio wrote in message ...
{snip}
> This is Data Analysis 101. Let your detector be anything you want it
> to be. Let it measure temperature on the sky, volts out of a
> voltmeter, whatever. If you take a long data stream from it, you can
> easily measure well below the "resolution" of the detector.

LOL! Another proof-by-assertion. Citation, please.

No response....

And (on the 15th):
=========
> it is well known that one can make specific kinds of measurements
> below the resolution limit of an instrument,

Joseph, *why* do you keep repeating this silly statement? Many people
make such claims, but it is not valid science or statistics. You can
easily show me wrong, by directing me to a statistics treatise on how to
perform measurements below the resolution of the instrument used.
=========
No response, again.....

A week ago (in the sci.astro thread Cosmic Acceleration Rediscovered),
Joseph Lazio repeated the claim that one can get data to better
precision than the measuring instrument is physically capable of
supporting. Tom Roberts (and Bill Rowe), on the other hand, have many
times called such processes "overaveraging" (at least when it is applied
to experiments that would otherwise disprove SR). i.e.:
http://www.google.com/groups?selm=vr....supernews.com
"And results reported implying an order of magnitude improvement in
resolution over the best the instrument can achieve are very dubious."

Now it's time to see these two newsgroup stars have at it, over the
experimental and scientific principle of whether data can be "averaged"
below the physical resolution (or sensitivity) of the apparatus!

Is it overaveraging -- and invalid? Or is it simply data analysis 101 --
and valid?

May the best argument win!

--
greywolf42
ubi dubium ibi libertas
{remove planet for return e-mail}
#2
In article , greywolf42 wrote:
> Joseph, *why* do you keep repeating this silly statement? Many people
> make such claims, but it is not valid science or statistics. You can
> easily show me wrong, by directing me to a statistics treatise on how
> to perform measurements below the resolution of the instrument used.

I've told you that "resolution" is the incorrect word, and sensitivity
is the correct one, and quoted you the paper that shows that the
resolution of the instrument in question is 7 degrees, not a number of
microK. The units of the sensitivity of the instrument is Kelvin, and
the relationship between sensitivity and observing time is called the
"radiometer equation" and can easily be found in any standard text,
including web pages such as
http://www.strw.leidenuniv.nl/~pvdwe.../awt2_13d.html or
http://scienceworld.wolfram.com/phys...rEquation.html

> Or is it simply data analysis 101 -- and valid?

An increase in sensitivity (meaning the error going down) as the
observing time increases is simple data analysis 101.
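As an aside, the scaling the radiometer equation describes is easy to
sketch. This is only an illustration: the system temperature and
bandwidth below are invented numbers, not the actual COBE DMR
parameters.

```python
import math

def radiometer_sigma(t_sys_K, bandwidth_Hz, integration_s):
    # Ideal radiometer equation: the rms uncertainty of the measured
    # antenna temperature falls as 1/sqrt(bandwidth * integration time).
    return t_sys_K / math.sqrt(bandwidth_Hz * integration_s)

# Illustrative numbers only (not the real instrument values):
t_sys = 30.0    # system temperature, K
bw = 1.0e9      # pre-detection bandwidth, Hz
for t in (1.0, 3.6e3, 3.2e7):   # one second, one hour, about one year
    sigma_uK = radiometer_sigma(t_sys, bw, t) * 1.0e6
    print(f"integration {t:9.0f} s  ->  sigma = {sigma_uK:8.2f} microK")
```

Note that only the temperature uncertainty shrinks with observing time;
the angular resolution set by the antenna beam is untouched by this.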
#3
Greg Hennessy wrote in message ...
> In article , greywolf42 wrote:
>> Joseph, *why* do you keep repeating this silly statement? Many people
>> make such claims, but it is not valid science or statistics. You can
>> easily show me wrong, by directing me to a statistics treatise on how
>> to perform measurements below the resolution of the instrument used.
>
> I've told you that "resolution" is the incorrect word, and sensitivity
> is the correct one,

And I see that you couldn't point me to any text that supports your
case. And Roberts and Joseph disagree with your word use.

But the issue is not what word to use. Call it 'sensitivity' if you
like. I'm not discussing sigma (the statistical error). I'm discussing
the physical resolution of the "instrument" -- not the claimed error bar
in a given experiment.

> and quoted you the paper that shows that the resolution of the
> instrument in question is 7 degrees, not a number of microK.

But Roberts and Joseph and Bjoern claim a few microK.

> The units of the sensitivity of the instrument is Kelvin,

I believe that you are mistaken. The device is simply a multiple channel
intensity recorder, with 100 channels. Each channel "sensitive" to 0.1%.

> and the relationship between sensitivity and observing time is called
> the "radiometer equation" and can easily be found in any standard
> text, including web pages such as
> http://www.strw.leidenuniv.nl/~pvdwe.../awt2_13d.html or
> http://scienceworld.wolfram.com/phys...rEquation.html

And that does not match reality. Because the data is stored in binary
form. No matter how long you run the device, you can never exceed the
capacity of the storage medium.

>> Or is it simply data analysis 101 -- and valid?
>
> An increase in sensitivity (meaning the error going down) as the
> observing time increases is simple data analysis 101.

But we aren't discussing sigma -- even if you call it sensitivity, or
call it resolution. The physical resolution of the instrument is the
same, whether one makes one measurement or 1 million.

--
greywolf42
ubi dubium ibi libertas
{remove planet for return e-mail}
#4
In article , greywolf42 wrote:
>> I've told you that "resolution" is the incorrect word, and
>> sensitivity is the correct one,
>
> And I see that you couldn't point me to any text that supports your
> case.

Well, you never asked for any. You also never pointed to any text that
supported your claim that resolution was the correct word. But if you
want to be pointed to text indicating that the resolution of the DMR is
in degrees, try
http://aether.lbl.gov/www/projects/c...MR_Images.html

begin quote:
Cosmic microwave background temperature data were extracted from the
released FITS files and then combined into two linear combinations. The
first is a weighted sum of the 53 and 90 GHz channels which gives the
highest signal-to-noise ratio for cosmic temperature variations but
includes the Milky Way Galaxy as well. In the second linear combination,
a multiple of the 31 GHz map is subtracted from a weighted sum of the 53
plus 90 GHz channels to give a "reduced map" that gives zero response to
the observed Galaxy, zero response to free-free emission, but full
response to variations in the cosmic temperature. These maps have been
smoothed with a 7 degree beam, giving an effective angular resolution of
10 degrees. An all-sky image in Galactic coordinates is plotted using
the equal-area Mollweide projection. The plane of the Milky Way Galaxy
is horizontal across the middle of each picture. Sagittarius is in the
center of the map, Orion is to the right, and Cygnus is to the left.
end quote.

> And Roberts and Joseph disagree with your word use.

Well, if they do disagree, they can tell me. Joseph used the word in
quotes, and Roberts never mentioned me. You are using them as an
argument from authority.

> But the issue is not what word to use. Call it 'sensitivity' if you
> like. I'm not discussing sigma (the statistical error). I'm discussing
> the physical resolution of the "instrument" -- not the claimed error
> bar in a given experiment.

Well, I *am* discussing the sensitivity, and I'm discussing the ratio of
the signal strength to the statistical error in that signal. And I'm
also discussing the claimed error bar in a given experiment. Given that
I'm talking about that, I will admit to not knowing what you are talking
about. And the sensitivity of a radiometer increases when the exposure
time increases.

>> and quoted you the paper that shows that the resolution of the
>> instrument in question is 7 degrees, not a number of microK.
>
> But Roberts and Joseph and Bjoern claim a few microK.

The microK is the sensitivity of the instrument. The resolution is 7
degrees. Please use correct terminology. A cite for this terminology is
given in the COBE documentation:
http://lambda.gsfc.nasa.gov/product/cobe/about_dmr.cfm

begin quote:
Each differential radiometer measures the difference in power received
from two directions in the sky separated by 60 degrees, using a pair of
horn antennas. Each antenna has a 7 degree (FWHM) beam.
end quote.

>> The units of the sensitivity of the instrument is Kelvin,
>
> I believe that you are mistaken. The device is simply a multiple
> channel intensity recorder, with 100 channels. Each channel
> "sensitive" to 0.1%.

You may believe anything you wish. However, to prove me wrong you need
to document a claim. You provide no documentation whatsoever about your
claim of 100 channels, or the sensitivity of each channel. First of all,
you refer to "the device" when there are at least two devices in
question, one the COBE FIRAS, the second the DMR.

The instrument used to determine the temperature was the FIRAS, which
had 512 channels, as documented in
http://adsabs.harvard.edu/cgi-bin/bi...pJ...420..457F
The instrument used to determine the temperature fluctuations was the
COBE DMR, which was actually three independent instruments that each
observed at a single wavelength.
http://lambda.gsfc.nasa.gov/product/cobe/about_dmr.cfm

>> and the relationship between sensitivity and observing time is called
>> the "radiometer equation" and can easily be found in any standard
>> text, including web pages such as
>> http://www.strw.leidenuniv.nl/~pvdwe.../awt2_13d.html or
>> http://scienceworld.wolfram.com/phys...rEquation.html
>
> And that does not match reality. Because the data is stored in binary
> form. No matter how long you run the device, you can never exceed the
> capacity of the storage medium.

What does this have to do with anything? Since no one has ever claimed
that the capacity of a storage medium has been exceeded, why do you
object? COBE had an onboard storage capacity, and the data was
telemetered down. If you wish to assert that the capacity of a storage
medium was exceeded, please provide documentation about it. If you wish
to claim that the quoted equation relating the sensitivity of an
instrument to the exposure time does not match reality, give the
specific reason and documentation.

>> An increase in sensitivity (meaning the error going down) as the
>> observing time increases is simple data analysis 101.
>
> But we aren't discussing sigma -- even if you call it sensitivity, or
> call it resolution.

I am EXPRESSLY discussing the sigma of the measurement. I STARTED this
discussion by disagreeing with you when you claimed COBE found nothing
but noise. I pointed out the signal strength was many times the sigma of
the measurement.

> The physical resolution of the instrument is the same, whether one
> makes one measurement or 1 million.

I am discussing the sensitivity of the instrument. Which does change
quite a lot if you take one measurement or one million measurements.
Which is one of the reasons I object to your usage of the word
resolution in an incorrect fashion.
#5
greywolf42 wrote:
> Greg Hennessy wrote in message ...
>> In article , greywolf42 wrote:
[snip]
>> and quoted you the paper that shows that the resolution of the
>> instrument in question is 7 degrees, not a number of microK.
>
> But Roberts and Joseph and Bjoern claim a few microK.

Well, I simply used your term and did not bother to correct you. But
Greg is right - "resolution" is simply the wrong term here. The
resolution of an instrument refers to its ability to distinguish things
which are spatially close to one another. Look it up at www.m-w.org. The
relevant definition would be 1h: "the process or capability of making
distinguishable the individual parts of an object, closely adjacent
optical images, or sources of light". OTOH, the "sensitivity" of an
instrument tells us how small the signal is which it can still measure
(and distinguish from noise).

>> The units of the sensitivity of the instrument is Kelvin,
>
> I believe that you are mistaken. The device is simply a multiple
> channel intensity recorder, with 100 channels. Each channel
> "sensitive" to 0.1%.

And where did you get this from?

[snip]

Bye,
Bjoern
#6
"g" == greywolf42 writes:
g Greg Hennessy wrote in message g ... I've told you that "resolution" is the incorrect word, and sensitivity is the correct one, [...] g And Roberts and Joseph disagree with your word use. For the record, I do not. g But the issue is not what word to use. Call it 'sensitivity' if g you like. I'm not discussing sigma (the statistical error). I'm g discussing the physical resolution of the "instrument" -- not the g claimed error bar in a given experiment. and quoted you the paper that shows that the resolution of the instrument in question is 7 degrees, not a number of microK. g But Roberts and Joseph and Bjoern claim a few microK. You appear to be conflating an angular resolution with a temperature sensitivity. The *angular* resolution of one of the instruments on COBE was 7 degrees on the sky. That is entirely separate from the sensitivity with which it could measure a temperature, which was measured in microKelvin. -- Lt. Lazio, HTML police | e-mail: No means no, stop rape. | http://patriot.net/%7Ejlazio/ sci.astro FAQ at http://sciastro.astronomy.net/sci.astro.html |
#7
[Regarding the resolution vs. sensitivity of various instruments on COBE...]

"JL" == Joseph Lazio writes:
"g" == greywolf42 writes:

> and quoted you the paper that shows that the resolution of the
> instrument in question is 7 degrees, not a number of microK.

g  But Roberts and Joseph and Bjoern claim a few microK.

JL You appear to be conflating an angular resolution with a
JL temperature sensitivity. The *angular* resolution of one of the
JL instruments on COBE was 7 degrees on the sky. That is entirely
JL separate from the sensitivity with which it could measure a
JL temperature, which was measured in microKelvin.

At the risk of beating this point to death, I can perhaps make this
point more clear. The *angular* resolution of one of the instruments on
COBE was 7 degrees on the sky. For comparison, the full Moon is about
0.5 degrees across, and the unaided human eye has a typical resolution
of about 0.02 degrees.

Astronomers (particularly radio astronomers) often measure the intensity
or brightness of radiation in terms of an equivalent temperature. This
is in some sense a "shorthand" notation. To make it more clear,
1 microKelvin at the peak wavelength of the CMB (1.869 mm) is equivalent
to an intensity of 7.9E-24 W/m^2/Hz/sr. (This perhaps illustrates one of
the reasons we use the "shorthand" of an equivalent temperature. It's a
lot easier to speak in terms of microKelvin degrees than in terms of
Watts per square meter per Hertz per steradian.)

--
Lt. Lazio, HTML police  | e-mail:
No means no, stop rape. | http://patriot.net/%7Ejlazio/
sci.astro FAQ at http://sciastro.astronomy.net/sci.astro.html
#8
![]() "Joseph Lazio" wrote in message ... [Regarding the resolution vs. sensitivity of various instruments on COBE...] "JL" == Joseph Lazio writes: "g" == greywolf42 writes: and quoted you the paper that shows that the resolution of the instrument in question is 7 degrees, not a number of microK. g But Roberts and Joseph and Bjoern claim a few microK. JL You appear to be conflating an angular resolution with a JL temperature sensitivity. The *angular* resolution of one of the JL instruments on COBE was 7 degrees on the sky. That is entirely JL separate from the sensitivity with which it could measure a JL temperature, which was measured in microKelvin. At the risk of beating this point to death, I can perhaps make this point more clear. The *angular* resolution of one of the instruments on COBE was 7 degrees on the sky. For comparison, the full Moon is about 0.5 degrees across, and the unaided human eye has a typical resolution of about 0.02 degrees. Astronomers (particularly radio astronomers) often measure the intensity or brightness of radiation in terms of an equivalent temperature. This is in some sense a "shorthand" notation. To make it more clear, 1 microKelvin at the peak wavelength of the CMB (1.869 mm) is equivalent to an intensity of 7.9E-24 W/m^2/Hz/sr. (This perhaps illustrates one of the reasons we use the "shorthand" of an equivalent temperature. It's a lot easier to speak in terms of microKelvin degrees than in terms of Watts per square meter per Hertz per steradian.) To further muddy the waters, I believe from memory gw sometimes talked of "resolution per bin" or similar. I wonder whether the original figure (cited by Lerner?) might have been referring to the resolution of an ADC in the measurement system, i.e. the equivalent power of the least significant bit of the digitiser. Just a thought and without the original reference I have no way to check. George |
#9
greywolf42 wrote:
> Joseph Lazio wrote in message ...
>> This is Data Analysis 101. Let your detector be anything you want it
>> to be. Let it measure temperature on the sky, volts out of a
>> voltmeter, whatever. If you take a long data stream from it, you can
>> easily measure well below the "resolution" of the detector.
[later]
>> it is well known that one can make specific kinds of measurements
>> below the resolution limit of an instrument,
>
> Joseph, *why* do you keep repeating this silly statement? Many people
> make such claims, but it is not valid science or statistics. You can
> easily show me wrong, by directing me to a statistics treatise on how
> to perform measurements below the resolution of the instrument used.

N.C. Barford, _Experimental_Measurements:_Precision,_Error,_and_Truth_.
This is old and elementary, but it's what we used in the version of
"Data Analysis 101" I took 30-some years ago.

I do not disagree with what Joseph Lazio wrote above. But greywolf42's
lack of knowledge and inability to read have apparently caused him to
think otherwise. This is all well known, and is indeed "Data Analysis
101" -- greywolf42 explicitly displays his ignorance here.

> Tom Roberts (and Bill Rowe), on the other hand, have many times called
> such processes "overaveraging" (at least when it is applied to
> experiments that would otherwise disprove SR). i.e.:
> http://www.google.com/groups?selm=vr....supernews.com
> "And results reported implying an order of magnitude improvement in
> resolution over the best the instrument can achieve are very dubious."

Yes. A discussion:

For a basic measurement like that of the width of my desk, a given
technique has a given resolution. For example, this meter stick is
marked in millimeters, and I can read it to about 0.2 mm resolution. So
using it to make a single measurement of the desk, I obtain an answer
accurate to ~0.2 mm.

If I make a series of such measurements that are STATISTICALLY
INDEPENDENT, I can improve that accuracy to the limit of the systematic
errors involved, by averaging multiple measurements. To make them
statistically independent, in this case I must re-apply the meter stick
to the desk for each measurement (merely re-reading the scale without
repositioning the stick would not give independent measurements). As is
well known, under these conditions, the mean of the multiple
measurements approaches the actual value to within an error determined
by the systematic errors combined with the intrinsic error of the meter
stick (~0.2 mm) divided by the square root of the number of measurements
contributing to the mean.

In this case, some of the systematic errors are:
- errors in scribing the marks on the meter stick
- optical parallax
- temperature difference in the meter stick between its calibration and use

It should be clear that none of these error sources are affected by
averaging, and they are related to the meter stick's construction and
manner of use. Now the manufacturer of the meter stick knows about these
systematic errors, and does not make heroic efforts to reduce them below
a human's ability to read and use it, so they are not enormously smaller
than ~0.2 mm. That applies to essentially any instrument. That's why
averaging many readings is highly suspect when someone claims an
improvement of an order of magnitude over the intrinsic resolution of
the instrument. [For instance, wear on the end of the stick can be
comparable to that accuracy. That's why the 0 mark is not at the end.]

In the measurements greywolf42 references above, on which I commented
that they involved overaveraging, the experimenters claimed an
improvement of more than an order of magnitude by averaging. None of
them could claim their systematic errors were small enough to justify
that smaller resolution. Moreover, most of them had a clear human bias
in roundoff, which makes multiple measurements be statistically
correlated, which means that averaging does not improve the actual
resolution of the mean below the amount of roundoff. For instance, if
when reading that meter stick I always rounded up to the next
millimeter, it should be clear that the value I obtain will be larger
than the actual value, and no amount of averaging multiple measurements
will improve the accuracy of the measurement below ~0.5 mm.

Tom Roberts
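The two points above (independent, unbiased readings average down
roughly as 1/sqrt(N), while a systematic round-up bias does not average
away) are easy to demonstrate with a quick simulation. The desk width
and noise level below are invented purely for illustration.

```python
import math
import random

TRUE_WIDTH = 1234.3   # "true" desk width in mm (made up)
READ_NOISE = 0.2      # per-reading random error in mm, as in the meter-stick example
N = 10_000            # number of independent re-applications of the stick

def reading():
    # One statistically independent measurement: true value plus random reading error.
    return TRUE_WIDTH + random.gauss(0.0, READ_NOISE)

# Unbiased case: the error of the mean shrinks to roughly 0.2 mm / sqrt(N).
mean_unbiased = sum(reading() for _ in range(N)) / N
print("unbiased mean error:", abs(mean_unbiased - TRUE_WIDTH))   # typically ~0.002 mm

# Biased case: every reading is rounded UP to the next millimeter.
mean_biased = sum(math.ceil(reading()) for _ in range(N)) / N
print("round-up mean error:", abs(mean_biased - TRUE_WIDTH))     # stays large; does not shrink with N
```

The biased mean stays several tenths of a millimeter off no matter how
large N becomes, which is the overaveraging point in miniature.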
#10
![]() "Tom Roberts" wrote in message . com... greywolf42 wrote: Joseph Lazio wrote in message ... This is Data Analysis 101. Let your detector be anything you want it to be. Let it measure temperature on the sky, volts out of a voltmeter, whatever. If you take a long data stream from it, you can easily measure well below the "resolution" of the detector. [later] it is well known that one can make specific kinds of measurements below the resolution limit of an instrument, Joseph, *why* do you keep repeating this silly statement? Many people make such claims, but it is not valid science or statistics. You can easily show me wrong, by directing me to a statistics treatise on how to perform measurements below the resolution of the instrument used. N.C.Barford, _Experimental_Measurements:_Precision,_Error,_and_ Truth_. This is old and elementary, but it's what we used in the version of "Data Analysis 101" I took 30-some years ago. I do not disagree with what Joseph Lazio wrote above. But greywolf42's lack of knowledge and inability to read have apparently caused him to think otherwise. This is all well known, and is indeed "Data Analysis 101" -- greywolf42 explicitly displays his ignorance here. Tom Roberts (and Bill Rowe), on the other hand, have many times called such processes "overaveraging" (at least when it is applied to experiments that would otherwise disprove SR). i.e.: http://www.google.com/groups?selm=vr....supernews.com "And results reported implying an order of magnitude improvement in resolution over the best the instrument can achieve are very dubious." Yes. A discussion: For a basic measurement like that of the width of my desk, a given technique has a given resolution. For example this meter stick is marked in millimeters, and I can read it to about 0.2 mm resolution. So using it to make a single measurement of the desk, I obtain an answer accurate to ~0.2 mm. If I make a series of such measurements that are STATISTICALLY INDEPENDENT I can improve that accuracy to the limit of the systematic errors involved, by averaging multiple measurements. To make them statistically independent, in this case I must re-apply the meter stick to the desk for each measurement (merely re-reading the scale without repositioning the stick would not give independent measurements). As is well known, under these conditions, the mean of the multiple measurements approaches the actual value to within an error determined by the systematic errors combined with the intrinsic error of the meter stick (~0.2 mm) divided by the square root of the number of measurements contributing to the mean. In this case, some of the systematic errors a errors in scribing the marks on the meter stick optical parallax temperature difference in the meter stick between its calibration and use It should be clear that none of these error sources are affected by averaging, and they are related to the meter stick's construction and manner of use. Now the manufacturer of the meter stick knows about these systematic errors, and does not make heroic efforts to reduce them below a human's ability to read and use it, so they are not enormously smaller than ~0.2 mm. That applies to essentially any instrument. That's why averaging many readings is highly suspect when someone claims an improvement of an order of magnitude over the intrinsic resolution of the instrument. [For instance, wear on the end of the stick can be comparable to that accuracy. That's why the 0 mark is not at the end.] 
In the measurments greywolf42 references above, on which I commented that they involved overaveraging, the experimenters claimed an improvement of more than an order of magnitude by averaging. None of them could claim their systematic errors were samll enough to justify that smaller resolution. Moreover, most of them had a clear human bias in roundoff, which makes multiple measurements be statistically correlated, which means that averaging does not improve the actual resolution of the mean below the amount of roundoff. For instance, if when reading that meter stick I always rounded up to the next millimeter, it should be clear that the value I obtain will be larger than the actual value, and no amount of averaging multiple measurements will improve the accuracy of the measurement below ~0.5 mm. Tom Roberts Excellent! Harald |