
Roberts versus Lazio on "Overaveraging"



 
 
#1 - January 18th 05, 06:08 PM - greywolf42

greywolf42 wrote:
Joseph Lazio wrote:


{snip}

This is Data Analysis 101. Let your detector be anything you want it
to be. Let it measure temperature on the sky, volts out of a
voltmeter, whatever. If you take a long data stream from it, you can
easily measure well below the "resolution" of the detector.


LOL! Another proof-by-assertion. Citation, please.


No response....

And (on the 15th):
=========
it is well known that one can make specific kinds of measurements
below the resolution limit of an instrument,


Joseph, *why* do you keep repeating this silly statement? Many people make
such claims, but it is not valid science or statistics. You can easily show
me wrong, by directing me to a statistics treatise on how to perform
measurements below the resolution of the instrument used.
=========

No response, again.....


A week ago, (in the sci.astro thread Cosmic Acceleration Rediscovered),
Joseph Lazio repeated the claim that one can get data to better precision
than the measuring instrument is physically capable of supporting.

Tom Roberts (and Bill Rowe), on the other hand, have many times called such
processes "overaveraging" (at least when it is applied to experiments that
would otherwise disprove SR). i.e.:
http://www.google.com/groups?selm=vr....supernews.com

"And results reported implying an order of magnitude improvement in
resolution over the best the instrument can achieve are very dubious."



Now it's time to see these two newsgroup stars have at it over the
experimental and scientific principle of whether data can be "averaged"
below the physical resolution (or sensitivity) of the apparatus!

Is it overaveraging -- and invalid?

Or is it simply data analysis 101 -- and valid?

May the best argument win!

--
greywolf42
ubi dubium ibi libertas
{remove planet for return e-mail}



#2 - January 18th 05, 07:13 PM - Greg Hennessy

greywolf42 wrote:

Joseph, *why* do you keep repeating this silly statement? Many people make
such claims, but it is not valid science or statistics. You can easily show
me wrong, by directing me to a statistics treatise on how to perform
measurements below the resolution of the instrument used.


I've told you that "resolution" is the incorrect word and sensitivity is
the correct one, and I quoted you the paper showing that the resolution
of the instrument in question is 7 degrees, not some number of microK.
The sensitivity of the instrument is measured in kelvin, and the
relationship between sensitivity and observing time is called the
"radiometer equation"; it can easily be found in any standard text,
including web pages such as
http://www.strw.leidenuniv.nl/~pvdwe.../awt2_13d.html
or
http://scienceworld.wolfram.com/phys...rEquation.html

Or is it simply data analysis 101 -- and valid?


An increase in sensitivity (meaning the error going down) as the
observing time increases is simple data analysis 101.
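
[To make that scaling concrete, here is a minimal sketch of the
idealized radiometer equation, dT = T_sys / sqrt(bandwidth * time). The
system temperature and bandwidth below are illustrative placeholders,
not the actual COBE DMR parameters.]

import math

def radiometer_sensitivity(t_sys_K, bandwidth_Hz, integration_s):
    # Idealized radiometer equation: dT = T_sys / sqrt(bandwidth * time)
    return t_sys_K / math.sqrt(bandwidth_Hz * integration_s)

T_SYS = 50.0   # system temperature in kelvin (assumed, for illustration)
BW = 1.0e9     # receiver bandwidth in Hz (assumed, for illustration)

for tau in (1.0, 100.0, 1.0e4, 1.0e6):   # integration time in seconds
    dT = radiometer_sensitivity(T_SYS, BW, tau)
    print("tau = %9.0f s  ->  sensitivity = %8.2f microK" % (tau, dT * 1e6))

[Each factor of 100 in observing time buys a factor of 10 in
sensitivity, which is exactly the "error going down as the observing
time increases" point.]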

#3 - January 19th 05, 06:41 AM - Tom Roberts

greywolf42 wrote:
Joseph Lazio wrote:
This is Data Analysis 101. Let your detector be anything you want it
to be. Let it measure temperature on the sky, volts out of a
voltmeter, whatever. If you take a long data stream from it, you can
easily measure well below the "resolution" of the detector.
[later]
it is well known that one can make specific kinds of measurements
below the resolution limit of an instrument,


Joseph, *why* do you keep repeating this silly statement? Many people make
such claims, but it is not valid science or statistics. You can easily show
me wrong, by directing me to a statistics treatise on how to perform
measurements below the resolution of the instrument used.


N. C. Barford, _Experimental Measurements: Precision, Error, and Truth_.
This is old and elementary, but it's what we used in the version of
"Data Analysis 101" I took 30-some years ago.

I do not disagree with what Joseph Lazio wrote above. But greywolf42's
lack of knowledge and inability to read have apparently caused him to
think otherwise. This is all well known, and is indeed "Data Analysis
101" -- greywolf42 explicitly displays his ignorance here.


Tom Roberts (and Bill Rowe), on the other hand, have many times called such
processes "overaveraging" (at least when it is applied to experiments that
would otherwise disprove SR). i.e.:
http://www.google.com/groups?selm=vr....supernews.com

"And results reported implying an order of magnitude improvement in
resolution over the best the instrument can achieve are very dubious."


Yes. A discussion:

For a basic measurement like that of the width of my desk, a given
technique has a given resolution. For example this meter stick is marked
in millimeters, and I can read it to about 0.2 mm resolution. So using
it to make a single measurement of the desk, I obtain an answer accurate
to ~0.2 mm. If I make a series of such measurements that are
STATISTICALLY INDEPENDENT I can improve that accuracy to the limit of
the systematic errors involved, by averaging multiple measurements. To
make them statistically independent, in this case I must re-apply the
meter stick to the desk for each measurement (merely re-reading the
scale without repositioning the stick would not give independent
measurements). As is well known, under these conditions, the mean of the
multiple measurements approaches the actual value to within an error
determined by the systematic errors combined with the intrinsic error of
the meter stick (~0.2 mm) divided by the square root of the number of
measurements contributing to the mean. In this case, some of the
systematic errors are:
    errors in scribing the marks on the meter stick
    optical parallax
    temperature difference in the meter stick between its
    calibration and use
It should be clear that none of these error sources are affected by
averaging, and they are related to the meter stick's construction and
manner of use. Now the manufacturer of the meter stick knows about these
systematic errors, and does not make heroic efforts to reduce them below
a human's ability to read and use it, so they are not enormously smaller
than ~0.2 mm. That applies to essentially any instrument. That's why
averaging many readings is highly suspect when someone claims an
improvement of an order of magnitude over the intrinsic resolution of
the instrument.

[For instance, wear on the end of the stick can be comparable
to that accuracy. That's why the 0 mark is not at the end.]


In the measurements greywolf42 references above, on which I commented
that they involved overaveraging, the experimenters claimed an
improvement of more than an order of magnitude by averaging. None of
them could claim their systematic errors were small enough to justify
that smaller resolution. Moreover, most of them had a clear human bias
in roundoff, which makes multiple measurements statistically
correlated, which means that averaging does not improve the actual
resolution of the mean below the amount of roundoff.

For instance, if when reading that meter stick I always
rounded up to the next millimeter, it should be clear that
the value I obtain will be larger than the actual value,
and no amount of averaging multiple measurements will
improve the accuracy of the measurement below ~0.5 mm.
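
[A minimal numerical sketch of both halves of this argument, with
made-up numbers: a true width of 750.3 mm and 0.2 mm reading noise.
Independent, unbiased readings average down as 1/sqrt(n); readings
rounded to a whole millimeter with enough repositioning noise to act as
dither still average below the 1 mm step; but a consistent round-up
bias never averages away.]

import numpy as np

rng = np.random.default_rng(0)
TRUE = 750.3   # "true" desk width in mm (made up for illustration)
n = 10000

# Independent, unbiased 0.2 mm readings: mean error shrinks ~ 0.2/sqrt(n)
readings = TRUE + rng.normal(0.0, 0.2, n)
print("unbiased mean error:    %.4f mm" % abs(readings.mean() - TRUE))

# Rounded to the NEAREST mm, with ~0.5 mm of repositioning noise acting
# as dither: the mean still converges well below the 1 mm step
dithered = np.round(TRUE + rng.normal(0.0, 0.5, n))
print("dithered/rounded error: %.4f mm" % abs(dithered.mean() - TRUE))

# Always rounding UP to the next mm: a fixed bias that averaging never
# removes (roughly 0.6 mm for this fractional position; ~0.5 mm on
# average over random fractional positions, as stated above)
biased = np.ceil(TRUE + rng.normal(0.0, 0.2, n))
print("round-up bias:          %.4f mm" % abs(biased.mean() - TRUE))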


Tom Roberts
#4 - January 19th 05, 12:56 PM - Harry


"Tom Roberts" wrote in message
. com...
greywolf42 wrote:
Joseph Lazio wrote in message
...
This is Data Analysis 101. Let your detector be anything you want it
to be. Let it measure temperature on the sky, volts out of a
voltmeter, whatever. If you take a long data stream from it, you can
easily measure well below the "resolution" of the detector.
[later]
it is well known that one can make specific kinds of measurements
below the resolution limit of an instrument,


Joseph, *why* do you keep repeating this silly statement? Many people

make
such claims, but it is not valid science or statistics. You can easily

show
me wrong, by directing me to a statistics treatise on how to perform
measurements below the resolution of the instrument used.


N.C.Barford, _Experimental_Measurements:_Precision,_Error,_and_ Truth_.
This is old and elementary, but it's what we used in the version of
"Data Analysis 101" I took 30-some years ago.

I do not disagree with what Joseph Lazio wrote above. But greywolf42's
lack of knowledge and inability to read have apparently caused him to
think otherwise. This is all well known, and is indeed "Data Analysis
101" -- greywolf42 explicitly displays his ignorance here.


Tom Roberts (and Bill Rowe), on the other hand, have many times called

such
processes "overaveraging" (at least when it is applied to experiments

that
would otherwise disprove SR). i.e.:
http://www.google.com/groups?selm=vr....supernews.com

"And results reported implying an order of magnitude improvement in
resolution over the best the instrument can achieve are very dubious."


Yes. A discussion:

For a basic measurement like that of the width of my desk, a given
technique has a given resolution. For example this meter stick is marked
in millimeters, and I can read it to about 0.2 mm resolution. So using
it to make a single measurement of the desk, I obtain an answer accurate
to ~0.2 mm. If I make a series of such measurements that are
STATISTICALLY INDEPENDENT I can improve that accuracy to the limit of
the systematic errors involved, by averaging multiple measurements. To
make them statistically independent, in this case I must re-apply the
meter stick to the desk for each measurement (merely re-reading the
scale without repositioning the stick would not give independent
measurements). As is well known, under these conditions, the mean of the
multiple measurements approaches the actual value to within an error
determined by the systematic errors combined with the intrinsic error of
the meter stick (~0.2 mm) divided by the square root of the number of
measurements contributing to the mean. In this case, some of the
systematic errors a
errors in scribing the marks on the meter stick
optical parallax
temperature difference in the meter stick between its
calibration and use
It should be clear that none of these error sources are affected by
averaging, and they are related to the meter stick's construction and
manner of use. Now the manufacturer of the meter stick knows about these
systematic errors, and does not make heroic efforts to reduce them below
a human's ability to read and use it, so they are not enormously smaller
than ~0.2 mm. That applies to essentially any instrument. That's why
averaging many readings is highly suspect when someone claims an
improvement of an order of magnitude over the intrinsic resolution of
the instrument.

[For instance, wear on the end of the stick can be comparable
to that accuracy. That's why the 0 mark is not at the end.]


In the measurments greywolf42 references above, on which I commented
that they involved overaveraging, the experimenters claimed an
improvement of more than an order of magnitude by averaging. None of
them could claim their systematic errors were samll enough to justify
that smaller resolution. Moreover, most of them had a clear human bias
in roundoff, which makes multiple measurements be statistically
correlated, which means that averaging does not improve the actual
resolution of the mean below the amount of roundoff.

For instance, if when reading that meter stick I always
rounded up to the next millimeter, it should be clear that
the value I obtain will be larger than the actual value,
and no amount of averaging multiple measurements will
improve the accuracy of the measurement below ~0.5 mm.

Tom Roberts


Excellent!
Harald


#5 - January 19th 05, 06:45 PM - greywolf42

Tom Roberts wrote:
greywolf42 wrote:
Joseph Lazio wrote:


This is Data Analysis 101. Let your detector be anything you want it
to be. Let it measure temperature on the sky, volts out of a
voltmeter, whatever. If you take a long data stream from it, you can
easily measure well below the "resolution" of the detector.


[later]


it is well known that one can make specific kinds of measurements
below the resolution limit of an instrument,


Joseph, *why* do you keep repeating this silly statement? Many people
make such claims, but it is not valid science or statistics. You can
easily show me wrong, by directing me to a statistics treatise on how
to perform measurements below the resolution of the instrument used.


N. C. Barford, _Experimental Measurements: Precision, Error, and Truth_.
This is old and elementary, but it's what we used in the version of
"Data Analysis 101" I took 30-some years ago.


Great! Now please provide the reference properly. Page number (or section)
where the text explains how this ability is derived. In other words, where
Barford *explicitly* explains how such methods can be used to go below the
physical resolution of the instrument.

An excerpt would be nice.

I do not disagree with what Joseph Lazio wrote above.


I can't wait for the explanation!

But greywolf42's
lack of knowledge and inability to read have apparently caused him to
think otherwise. This is all well known, and is indeed "Data Analysis
101" -- greywolf42 explicitly displays his ignorance here.


The standard special pleading, ad hominem.


Tom Roberts (and Bill Rowe), on the other hand, have many times called
such processes "overaveraging" (at least when it is applied to
experiments that would otherwise disprove SR). i.e.:
http://www.google.com/groups?selm=vr....supernews.com

"And results reported implying an order of magnitude improvement in
resolution over the best the instrument can achieve are very dubious."


Yes.


Well, this certainly looks different than your claim, above. In the link
above, you were complaining that Miller was providing a measured value of
0.24 fringe, when you agreed that the physical resolution of the device was
0.1 fringe. You were upset about the implication of the second digit.

In the above case, the intensity resolution of the COBE is 1 part in 10,000.
Yet the "variations" are given with an absolute value that is 10 times below
the resolution of the instrument. Which is equivalent to Miller declaring
that he had found a value of 0.024 fringe.

A discussion:

For a basic measurement like that of the width of my desk, a given
technique has a given resolution.


And for a basic measurement like the width of a fringe, or the position of a
star image, a given technique has a given resolution. OK.

For example this meter stick is marked
in millimeters, and I can read it to about 0.2 mm resolution. So using
it to make a single measurement of the desk, I obtain an answer accurate
to ~0.2 mm.


For example, this interferometer is marked in fringes, and I can read it to
about 0.1 fringe resolution. For example, this astrometrical CCD is marked
in arc-seconds, and I can read it to about 3 milliarcsecond resolution.

If I make a series of such measurements that are
STATISTICALLY INDEPENDENT I can improve that accuracy to the limit of
the systematic errors involved, by averaging multiple measurements.


1) Can you support this claim, instead of simply asserting it?

Systematic errors do not affect the error bars on the statistical results.
If you know that there is a systematic error, then you redo the experiment.

To
make them statistically independent, in this case I must re-apply the
meter stick to the desk for each measurement (merely re-reading the
scale without repositioning the stick would not give independent
measurements).


Yes, one must actually perform each measurement... not simply count the same
measurement 'n' times.

As is well known, under these conditions, the mean of the
multiple measurements approaches the actual value to within an error
determined by the systematic errors combined with the intrinsic error of
the meter stick (~0.2 mm) divided by the square root of the number of
measurements contributing to the mean.


I don't care if you think that it is "well known." I'm looking for an
actual reference that this is part of physical, statistical theory.

And in Joseph's case, he would be measuring the width of paramecia to be
0.01 mm, using a meter stick. Do you think that this is valid? In the case
of the Hipparcos-light-bending crew, this would be claiming a result of
0.000013 +- .000002 mm (using the meter stick with resolution of 0.2 mm).
Is this valid, Tom?

In this case, some of the systematic errors are:
errors in scribing the marks on the meter stick


This isn't "systematic" error. This can be avoided by using a different
meter stick for each measurement, or measuring over different intervals.

optical parallax


This isn't systematic error (the observer can move his eyes around).

temperature difference in the meter stick between its calibration and use


This is not systematic error, for it can be controlled. Unless the
experimenter is not competent.

It should be clear that none of these error sources are affected by
averaging, and they are related to the meter stick's construction and
manner of use.


Yes. And real systematic errors can't be quantified within the process of
the specific experiment.

Now the manufacturer of the meter stick knows about these
systematic errors, and does not make heroic efforts to reduce them below
a human's ability to read and use it, so they are not enormously smaller
than ~0.2 mm. That applies to essentially any instrument.


Yes. So your entire digression into systematic errors was a red herring.

That's why
averaging many readings is highly suspect when someone claims an
improvement of an order of magnitude over the intrinsic resolution of
the instrument.


So, I presume you would agree that claims to 1 part in 100,000 are "highly
suspect", when the intrinsic resolution of the instrument is 1 part in
10,000?

[For instance, wear on the end of the stick can be comparable
to that accuracy. That's why the 0 mark is not at the end.]


In the measurements greywolf42 references above, on which I commented
that they involved overaveraging, the experimenters claimed an
improvement of more than an order of magnitude by averaging.


Which is fine by your method, above, so long as "systematic" errors are less
than the resolution of the instrument.

None of
them could claim their systematic errors were small enough to justify
that smaller resolution.


Why not, Tom? They didn't have "marking errors", "parallax errors", or
"temperature errors."

Moreover, most of them had a clear human bias
in roundoff, which makes multiple measurements be statistically
correlated,


Please provide a sample of the data that supports your claim. (For example,
evidence of the "sawtooth" bias.) And a measurement of the statistical
bias.

which means that averaging does not improve the actual
resolution of the mean below the amount of roundoff.


No, systematic errors will not change the resolution of the instrument. Nor
will they change the resolution (precision) of the result. Systematic
errors will change the *accuracy* of the result. But this is simply bad
experimental design, and has nothing to do with the statistical "averaging"
process.

For instance, if when reading that meter stick I always
rounded up to the next millimeter,


Then you wouldn't have a theoretical resolution of 0.2 mm -- but only of 1
mm.

it should be clear that
the value I obtain will be larger than the actual value,
and no amount of averaging multiple measurements will
improve the accuracy of the measurement below ~0.5 mm.


But that would simply be a biased experimenter, Tom. Which has nothing to
do with averaging.


And you have nicely avoided the issue. When you use a meter stick that is
(theoretically) precise to 0.2 mm, you don't select that instrument to
measure paramecia whose absolute diameter is on the order of 0.01 mm. You
use a meter stick to measure objects with characteristic dimensions on the
order of several mm to 1 meter. You want 2 or possibly 3 significant
figures. In Miller's case, you claim his results were only 1 significant
figure, but he claimed two significant figures.

Now, in Joseph's case, above, we are talking about effects similar to
measuring paramecia with a meter stick. The COBE resolution is 1 part in
10,000 at any given intensity. But the absolute value of the reported
results is 1 part in 100,000 from the background blackbody curve.

--
greywolf42
ubi dubium ibi libertas
{remove planet for return e-mail}



#6 - January 19th 05, 07:30 PM - Randy Poe


greywolf42 wrote:

If I make a series of such measurements that are
STATISTICALLY INDEPENDENT I can improve that accuracy to the limit of
the systematic errors involved, by averaging multiple measurements.


1) Can you support this claim, instead of simply asserting it?


This is just what is usually called the "standard error of
the mean". The error in the mean of n measurements goes down
as sqrt(n).

The theory is elementary. Suppose each measurement X1, X2, ...,
Xn has a variance of V (so a standard deviation of sqrt(V)).
If the measurements are independent, then the variance
of Xsum = X1 + X2 + ... + Xn is V + V + ... + V = n*V.

To find the variance of Xmean = Xsum/n, you need to know
that for any constant a and random variable X with variance
Vx, the variance of aX is a^2*Vx.

So the variance of Xmean = Xsum/n is var(Xsum)/n^2 =
n*V/n^2 = V/n.

The standard deviation of Xmean is sqrt(V)/sqrt(n). Take
100 measurements and you reduce the uncertainty in Xmean
by 10. Take 10000 measurements and you reduce it by 100.
- Randy
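
[A quick numerical check of the V/n algebra above; uniform noise on
(0, 1), which has variance 1/12, is assumed purely for illustration.]

import numpy as np

rng = np.random.default_rng(1)
V = 1.0 / 12.0   # variance of a single Uniform(0, 1) measurement
for n in (100, 10000):
    # 1000 independent experiments, each averaging n measurements
    means = rng.uniform(0.0, 1.0, size=(1000, n)).mean(axis=1)
    print("n=%5d  empirical var of mean=%.3e  V/n=%.3e"
          % (n, means.var(), V / n))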

#7 - January 19th 05, 07:41 PM - Greg Hennessy

greywolf42 wrote:
Now, in Joseph's case, above, we are talking about effects similar to
measuring paramecia with a meter stick. The COBE resolution is 1 part in
10,000 at any given intensity. But the absolute value of the reported
results is 1 part in 100,000 from the background blackbody curve.


Well, I see greywolf is back to claiming one part in 10,000 for COBE,
when his reference claimed something else. Of course the value that
Lerner quoted was for the COBE FIRAS instrument, and the value of 1 in
100,000 was from the COBE DMR instrument, as has been explained to
greywolf multiple times. But he seems to find it convenient to ignore
data that conflicts with his worldview.

#8 - January 19th 05, 10:49 PM - JP

On Wed, 19 Jan 2005 06:41:53 +0000, Tom Roberts wrote:

greywolf42 wrote:
Joseph Lazio wrote in message
...
This is Data Analysis 101. Let your detector be anything you want it
to be. Let it measure temperature on the sky, volts out of a
voltmeter, whatever. If you take a long data stream from it, you can
easily measure well below the "resolution" of the detector.
[later]
it is well known that one can make specific kinds of measurements
below the resolution limit of an instrument,


Joseph, *why* do you keep repeating this silly statement? Many people make
such claims, but it is not valid science or statistics. You can easily show
me wrong, by directing me to a statistics treatise on how to perform
measurements below the resolution of the instrument used.


N. C. Barford, _Experimental Measurements: Precision, Error, and Truth_.
This is old and elementary, but it's what we used in the version of
"Data Analysis 101" I took 30-some years ago.

Tom,

Can you tell me what level that book is written at? I'm trying to do some
learning in this area, and a good book would be useful...

Thanks

JP
#9 - January 20th 05, 04:26 AM - Tom Roberts

JP wrote:
On Wed, 19 Jan 2005 06:41:53 +0000, Tom Roberts wrote:
N. C. Barford, _Experimental Measurements: Precision, Error, and Truth_.
This is old and elementary, but it's what we used in the version of
"Data Analysis 101" I took 30-some years ago.


Can you tell me what level that book is written at? I'm trying to do some
learning in this area, and a good book would be useful...


I dug it out of the back of my bookshelf, blew the dust off, and briefly
thumbed through it before giving it as a reference. It is rather
elementary. I took a course using it as a textbook as either a freshman
or sophomore while at Purdue, majoring in physics; that would be 1971 or
1972. As I said, it's old. It's not alone in that.... It's quite
remarkable that I not only remembered its existence, but also its color,
shape, general appearance, and approximate location; but not its author
or exact title.


Tom Roberts
#10 - January 20th 05, 07:10 AM - Bill Rowe

"greywolf42" wrote:

Tom Roberts wrote:
greywolf42 wrote:


Joseph, *why* do you keep repeating this silly statement? Many
people make such claims, but it is not valid science or
statistics. You can easily show me wrong, by directing me to a
statistics treatise on how to perform measurements below the
resolution of the instrument used.


N. C. Barford, _Experimental Measurements: Precision, Error, and Truth_.
This is old and elementary, but it's what we used in the version of
"Data Analysis 101" I took 30-some years ago.


Great! Now please provide the reference properly. Page number (or section)
where the text explains how this ability is derived. In other words, where
Barford *explicitly* explains how such methods can be used to go below the
physical resolution of the instrument.


This is nothing more than a poor attempt to either get someone to do
your homework for you or shut down a discussion. As Roberts said, this
really is data analysis 101. If you are really so dense as to not
understand, then you certainly should not be employed in any position
where it is necessary to analyze data in a meaningful way.

But greywolf42's
lack of knowledge and inability to read have apparently caused him to
think otherwise. This is all well known, and is indeed "Data Analysis
101" -- greywolf42 explicitly displays his ignorance here.


The standard special pleading, ad hominem.


Stating what is clearly demonstrated by your posts is an observation, not
an ad hominem.

snip

If I make a series of such measurements that are
STATISTICALLY INDEPENDENT I can improve that accuracy to the limit of
the systematic errors involved, by averaging multiple measurements.


1) Can you support this claim, instead of simply asserting it?


Do you have any familiarity at all with basic statistics? In particular
the central limit theorem? If so, it should be immediately apparent that
Tom's claim above is a direct consequence of the central limit theorem.
And if you are not familiar with it, pick up any reasonable basic text on
statistics, go to the index or table of contents, find the central limit
theorem, and turn to the referenced page.

Systematic errors do not affect the error bars on the statistical results.
If you know that there is a systematic error, then you redo the experiment.


True, and this has nothing at all to do with the comment about averaging
independent observations.

To make them statistically independent, in this case I must
re-apply the meter stick to the desk for each measurement (merely
re-reading the scale without repositioning the stick would not give
independent measurements).


Yes, one must actually perform each measurement... not simply count the same
measurement 'n' times.


True, but meaningless as human beings are unable to achieve what is
required. They cannot help but remember what they did moments before and
repeat the measurement in essentially the same way. Hence, repeated
measurements made by humans one after another never really achieve
statistical independence.

As is well known, under these conditions, the mean of the
multiple measurements approaches the actual value to within an error
determined by the systematic errors combined with the intrinsic error of
the meter stick (~0.2 mm) divided by the square root of the number of
measurements contributing to the mean.


I don't care if you think that it is "well known." I'm looking for an
actual reference that this is part of physical, statistical theory.


What Tom states here is nothing more than translating the central limit
theorem into a written procedure. Choose any basic text on statistics.

And in Joseph's case, he would be measuring the width of paramecia to be
0.01 mm, using a meter stick. Do you think that this is valid? In the case
of the Hipparcos-light-bending crew, this would be claiming a result of
0.000013 +- .000002 mm (using the meter stick with resolution of 0.2 mm).
Is this valid, Tom?


In this case, some of the systematic errors are:
errors in scribing the marks on the meter stick


This isn't "systematic" error. This can be avoided by using a different
meter stick for each measurement, or measuring over different intervals.


A systematic error is any error that causes a consistent offset in the
mean of the measured values from the mean of the true values. Since the
normal manufacturing process would not be to inscribe each graduation on
a meter stick individually and independently, errors in inscribing
graduations will be systematic errors. And yes, this can be detected by
using a different meter stick made using an independent manufacturing
process. Note, detected not corrected. And, this is never done in
practice unless there is clear evidence of a problem with the existing
meter stick.

optical parallax


This isn't systematic error (the observer can move his eyes around).


Certainly an observer can move his viewpoint. But this isn't much of a
solution in practice. Basically, what you would do is move your
viewpoint until you got what you thought was the best reading. But since
we tend to do things the same way over and over again, you simply trade
one bias for another.

temperature difference in the meter stick between its calibration and use


This is not systematic error, for it can be controlled. Unless the
experimenter is not competent.


No matter how competent an experimenter is, there are limits to how well
any environmental factor can be controlled and measured. In the case of
temperature, it is impossible to buffer against the environment's
temperature and at the same time have zero temperature gradient (so that
the point at which you measure the temperature is at the temperature
that actually matters).

It should be clear that none of these error sources are affected by
averaging, and they are related to the meter stick's construction and
manner of use.


Yes. And real systematic errors can't be quantified within the process of
the specific experiment.


Now the manufacturer of the meter stick knows about these
systematic errors, and does not make heroic efforts to reduce them below
a human's ability to read and use it, so they are not enormously smaller
than ~0.2 mm. That applies to essentially any instrument.


Yes. So your entire digression into systematic errors was a red herring.


That's why averaging many readings is highly suspect when someone
claims an improvement of an order of magnitude over the intrinsic
resolution of the instrument.


So, I presume you would agree that claims to 1 part in 100,000 are "highly
suspect", when the intrinsic resolution of the instrument is 1 part in
10,000?


Do you not understand the difference between saying a measurement is
accurate to 10 ppm because your instrument has an accuracy of 10 ppm, and
saying you can average 100 readings to improve the resolution of the
instrument by a factor of 10 over its specified resolution? These are two
separate and distinct things. A claim of accuracy of 10 ppm using an
instrument specified to have that accuracy is not suspect. A claim that
resolution was improved by a factor of 10 over the specified resolution
of the instrument by averaging is "highly suspect". So suspect as to be
considered invalid.

Averaging only improves resolution when measurements are statistically
independent. Repeated measurements by humans don't achieve this.

And statistical independence won't always be enough even if it could be
achieved by eliminating all human bias and systematic error. For
averaging to work its magic, the central limit theorem has to apply. And
the central limit theorem does not apply to all distributions.
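
[A minimal sketch of that caveat, using the Cauchy distribution: it has
no finite variance, and the average of n Cauchy draws is itself Cauchy
with the same width, so the sample mean never tightens no matter how
much data is averaged.]

import numpy as np

rng = np.random.default_rng(2)
for n in (100, 10000, 1000000):
    gauss_mean = rng.normal(0.0, 1.0, n).mean()   # tightens as 1/sqrt(n)
    cauchy_mean = rng.standard_cauchy(n).mean()   # does not tighten at all
    print("n=%8d  gaussian mean=%+.4f  cauchy mean=%+.4f"
          % (n, gauss_mean, cauchy_mean))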

rest snipped

--
To reply via email subtract one hundred nine
 



