August 31st 16, 09:52 PM, posted to sci.astro.research
Phillip Helbig (undress to reply)
Subject: Statistics Problem/Question

In article ,
"Robert L. Oldershaw" writes:

> Say you had 30 low-mass red dwarfs (unbiased sample) whose masses are
> all measured to +/- 0.01 solar mass. Say you added all 30 masses and
> divided by 0.145 solar mass.
>
> What would the approximate (or exact if you prefer) probability be for
> finding an exact multiple?


The probability of finding an EXACT multiple is zero: mass is a
continuous quantity, so the sum takes any one exact value with
probability zero.

> What would be the expected probability for a
> deviation of +/- 0.001 solar mass,


This doesn't make any sense if, as stated above, the "masses are all
measured to +/- 0.01 solar mass".

> and finally what would be the
> expected probability for a deviation of +/- 0.01 solar mass?


Presumably, you want to test whether the null hypothesis that the masses
are "random" can be ruled out, and whether there is some evidence for
0.145 being some sort of preferred value.

The number 30 doesn't matter: the sum of any number of "random" numbers
is itself "random".
Measure the deviation as the distance from the sum to the nearest
integer multiple of 0.145. The probability that this deviation is less
than 0.0725 is 100%, and the probability that it is larger is 0,
because 0.0725 = 0.145/2 is the maximum possible distance to the
nearest multiple.
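A quick Monte Carlo sketch makes this concrete (my own illustration, not
from the post; the mass range 0.08 to 0.30 solar masses is an assumed
stand-in for "low-mass red dwarfs"):

```python
import random

random.seed(1)
SPACING = 0.145  # the candidate "preferred" mass unit, in solar masses
deviations = []
for _ in range(100_000):
    # sum of 30 "random" red-dwarf masses (assumed uniform on an
    # illustrative range; the exact distribution doesn't matter here)
    total = sum(random.uniform(0.08, 0.30) for _ in range(30))
    d = total % SPACING
    deviations.append(min(d, SPACING - d))  # distance to nearest multiple

# The deviations fill [0, 0.0725] essentially uniformly: the mean sits
# near 0.0725/2 and the maximum approaches 0.0725.
print(max(deviations), sum(deviations) / len(deviations))
```

Under the null hypothesis the deviation is uniform on [0, 0.0725], which
is what the probabilities below are read off from.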

If the masses are measured to 0.01 solar masses, then you can forget
about detecting any smaller deviations.

It doesn't make sense to talk about "a deviation of +/- 0.01 solar
mass", since the probability of any exact deviation is 0. What you
presumably mean is the probability that the deviation is less than
0.01. That would be roughly 14% (0.01/0.0725).
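The arithmetic behind that figure, assuming the deviation is uniform on
[0, 0.0725] under the null hypothesis:

```python
# P(deviation < 0.01) for a deviation uniform on [0, 0.0725]
p = 0.01 / 0.0725
print(f"{p:.1%}")  # prints 13.8%
```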

Suppose you know the masses to 0.001 instead of 0.01. Then the
probability of the deviation being less than 0.001 is about 1.4%. If
you actually found this, it would be considered marginal evidence in
favour of ruling out the null hypothesis.

A more detailed analysis would also put an error bar on the sum (add the
individual errors in quadrature).