Southworth Det Eclips Binary Catalog



 
 
  #21  
Old December 2nd 13, 07:00 PM posted to sci.astro.research
Robert L. Oldershaw
external usenet poster
 
Posts: 617
Default Southworth Det Eclips Binary Catalog

On Sunday, December 1, 2013 3:51:50 PM UTC-5, Phillip Helbig---undress to reply wrote:

Presumably 0.7 is a typo for 0.07.



Yes, typo. And presumably so are your "[t]o" and "[o].07".




Where to the numbers 0.05, and o.07 come from? Why these and not
others?


I am comparing the 16 counts in the central region [0.04 solar masses wide] with the 4 counts in the two "wings" [2 times 0.02 solar masses, total 0.04 solar masses].

I believe that this shows that the distribution is not uniform.

FOUR TO ONE
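
One way to put a number on that ratio is a minimal sketch in Python (scipy assumed; the counts are the ones quoted above, and since the central window and the combined wings have equal total width, a uniform null makes each count a fair coin flip between them):

    # Tail probability of a 16-vs-4 split under a uniform null.
    # Equal window widths (0.04 solar masses each) give p = 0.5 per count.
    from scipy.stats import binom

    n = 16 + 4                         # total counts quoted above
    p_tail = binom.sf(15, n, 0.5)      # P(X >= 16) out of 20
    print(f"P(X >= 16 | n=20, p=0.5) = {p_tail:.4f}")   # roughly 0.006

Whether that number means anything given an a posteriori choice of window is taken up later in the thread.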
  #22  
Old December 2nd 13, 07:21 PM posted to sci.astro.research
Robert L. Oldershaw
external usenet poster
 
Posts: 617
Default Southworth Det Eclips Binary Catalog

On Sunday, December 1, 2013 1:42:31 PM UTC-5, Robert L. Oldershaw wrote:
On Sunday, December 1, 2013 3:57:51 AM UTC-5, Robert L. Oldershaw wrote:

[Mod. note: if a professional astrophysicist has made the claims that
you were making, please give references to the relevant publication in
a refereed journal -- mjh]


Martin, your request is a bit vague regarding "claims". If you are
talking about the white dwarf mass distributions with definite peaks
where I said they would be and where professional astrophysicists have
found them (and without extraneous peaks, I might add), then consider
the following.

From the many papers on this subject to choose from, you could look at
Tremblay, Bergeron and Gianninas, "An Improved Spectroscopic Analysis
Of DA White Dwarfs...[SDSS DR4]", available 24/7 at arXiv.org for
free.

See Figure 7 and the discussion of the peaks at 0.4-0.45 solar mass
and approximately 0.58 solar mass.

Those who study white dwarf mass spectra are fully aware that they are
not uniform, but have definite peaks that show up repeatedly. Discrete
Scale Relativity PREDICTED these peaks and can explain these peaks,
while conventional models did NOT predict these peaks and struggle to
find an explanation. It is amusing to see them try to explain ELM
white dwarfs with masses more in the vicinity of 0.15 solar mass.
  #23  
Old December 2nd 13, 09:32 PM posted to sci.astro.research
Phillip Helbig---undress to reply
external usenet poster
 
Posts: 629
Default Southworth Det Eclips Binary Catalog

In article , "Robert L.
Oldershaw" writes:

On Sunday, December 1, 2013 3:52:47 PM UTC-5, Phillip Helbig---undress to reply wrote:

But your unit is 0.145 solar masses! Again, if the bin width is larger
than your unit, then ANY bin you choose will have at least one of your
multiples in it.

---------------------------------------

I just looked at all the samples presented at my website. THE BIN
WIDTHS ARE 0.05 SOLAR MASS. This is sufficient to show where white
dwarf mass peaks are AND where they are NOT.

The published histograms usually have slightly narrower binning.

PLEASE STOP POSTING FALSE AND MISLEADING INFORMATION!


Sorry, honest mistake. I was thinking 0.0145 instead of 0.145.
  #24  
Old December 2nd 13, 09:33 PM posted to sci.astro.research
Martin Hardcastle
external usenet poster
 
Posts: 63
Default Southworth Det Eclips Binary Catalog

In article ,
Robert L. Oldershaw wrote:
It is also instructive to compare the number of systems with
deviations between about -0.02 solar mass and about +0.02 solar mass
(about 16, I think) and compare that with the COMBINED distributions
[-0.05 to -0.07 solar mass] and [0.05 to 0.07 solar mass] (4-6?).

Is that not an interesting piece of information? You will have to tell
me because of my putative incompetence in matters statistical.


No.

Firstly, the number of objects in this range (15) is not unexpected
under a Poisson distribution. If you pick 32 objects from a uniform
distribution, you'll get 15 of them in about 1/3 of the range a
non-negligible fraction of the time (a few per cent). We would need a
much higher probability under the null hypothesis for the result to
look interesting. This is simple Poisson statistics which high-school
students should be able to do, though a computer helps. (And, to
forestall quibbling, the answer does not depend markedly on whether
the number we are talking about is 15 or 16, 30 or 32, and so on.)
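
As a minimal sketch of that computation in Python (scipy assumed; the counts are the ones quoted above):

    # Probability that 15 or more of 32 uniformly distributed objects
    # land in a window covering about 1/3 of the range.
    from scipy.stats import binom, poisson

    n, p = 32, 1/3
    print(binom.sf(14, n, p))       # exact binomial: P(X >= 15)
    print(poisson.sf(14, n * p))    # Poisson approximation, mean 32/3
    # Both come out non-negligible (of order several per cent to ~10%),
    # consistent with the statement above.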

Secondly, since you've arbitrarily picked the range to look at, it's
very much less interesting even than that -- as Phillip pointed out,
you need some a priori choice of the hypothesis to test, rather than
one that's fine-tuned to get the 'best' result. (The same could be
said of arbitrary choices of what data to look at in the first place!)

Thirdly, and this is the critical point, the error bars on these data
show that they are completely inconsistent with the predictions of
your model, at a very high confidence level, as shown by the chi^2
test. This is the test that actually matters -- your prediction is
that these dispersions are *zero* within the errors and they are
clearly not consistent with that. Your model is thus very strongly
ruled out by these data. Your only way out -- and I presume this is
what you will now do -- is to claim that the data are much less good
than they purport to be and that therefore the model hasn't been properly
tested. (Unless, of course, you want to do what a good scientist would
do and discard a model which has failed every quantitative test that
it's been set.)
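
To make the chi^2 test concrete, a minimal sketch (the deviations and error bars below are placeholders for illustration, not values from the Southworth catalogue):

    # chi^2 test of the null that every deviation is zero within its error.
    import numpy as np
    from scipy.stats import chi2

    dev = np.array([0.031, -0.052, 0.044, -0.061, 0.020])  # PLACEHOLDER deviations (Msun)
    err = np.array([0.010,  0.012, 0.015,  0.011, 0.009])  # PLACEHOLDER 1-sigma errors

    chisq = np.sum((dev / err) ** 2)
    dof = len(dev)                    # no fitted parameters under this null
    print(chisq, chi2.sf(chisq, dof))
    # A very small p-value rules out 'dispersions are zero within the errors'.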

Martin
--
Martin Hardcastle
School of Physics, Astronomy and Mathematics, University of Hertfordshire, UK
Please replace the xxx.xxx.xxx in the header with herts.ac.uk to mail me
  #25  
Old December 2nd 13, 09:34 PM posted to sci.astro.research
Martin Hardcastle
external usenet poster
 
Posts: 63
Default Southworth Det Eclips Binary Catalog

In article ,
Robert L. Oldershaw wrote:
I strongly disagree with your characterization of my choice of the
newer data for a valid sample. Science improves; that is its very
nature. The distribution of stellar masses (and most other
measured parameters in science) will improve in precision, and
especially in ACCURACY, with time. This is self-evident and has a vast
historical body of supporting evidence. I think you are wrong on this
issue and that I am clearly right.


This would be quite amusing if you weren't serious.

I've been working in observational astronomy for 20 years and I assure
you that it is perfectly possible for a result to be published at time
t2 which is of poorer quality than one published at time t1, where
t1 < t2, for any number of reasons:

-- maybe all the easy objects were done earlier and newer work is
scrabbling around in the noise;
-- maybe the field is no longer sexy and people are struggling to get
time on big telescopes
-- maybe the field is very sexy and people are rushing out results
that lack the integration time of earlier work
-- maybe nobody's getting any telescope time at all and so back-catalogue
data is being brought out and dusted off. (Publication date is not
observation date!)
-- maybe the one killer instrument for this work was turned off in
2010.

All of these things happen in real life. I don't know whether any of
them has happened in this case, but the point is that *nor do you*. To
discard earlier data (assuming you want to be taken seriously) you
can't apply a cutoff to the publication date (the _publication date_!)
-- you actually need to present some argument for each dataset that
you want to reject. You clearly don't have such an argument or you
wouldn't be resorting to bluster about clearly false statements being
'self-evident'. Which might lead an objective reader to wonder whether
you just fiddled with the arbitrary cutoff date until you found a
distribution that you thought was interesting --- thus completely
vitiating any statistical significance that it might have had.

Martin
--
Martin Hardcastle
School of Physics, Astronomy and Mathematics, University of Hertfordshire, UK
Please replace the xxx.xxx.xxx in the header with herts.ac.uk to mail me
  #26  
Old December 3rd 13, 08:23 AM posted to sci.astro.research
wlandsman
external usenet poster
 
Posts: 43
Default Southworth Det Eclips Binary Catalog

On Sunday, December 1, 2013 3:57:51 AM UTC-5, Robert L. Oldershaw wrote:

Well, I would urge open-minded readers to look at the multiple samples
of published white dwarf mass distributions that I have put at
http://www3.amherst.edu/~rloldershaw in the page entitled "Stellar
Scale Discreteness?". If you do not see peaks at the predicted mass
multiples, then there must be something obscuring your vision.


The mass distribution of main-sequence stars is extremely well modeled
by a continuous power-law. However, Robert is correct that the white
dwarf mass distribution shows peaks at certain masses. The strong peak
at 0.6 solar masses (Msun) has been understood for more than 50 years,
while possible weak peaks around 0.4 and 0.8 Msun may be understood
with the development of binary evolution population synthesis models.
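
A continuous power law produces a smooth, monotonically declining mass histogram, which is what makes genuine peaks stand out. A minimal sketch in Python (the Salpeter slope alpha = 2.35 and the mass limits are standard illustrative choices, not values from this thread):

    # Draw masses from a truncated power law p(m) ~ m**(-alpha) by
    # inverse-transform sampling; the binned counts decline smoothly.
    import numpy as np

    rng = np.random.default_rng(42)
    alpha, m_lo, m_hi = 2.35, 0.1, 100.0      # slope and limits in Msun
    a = 1.0 - alpha
    u = rng.uniform(size=100_000)
    m = (m_lo**a + u * (m_hi**a - m_lo**a)) ** (1.0 / a)

    counts, edges = np.histogram(np.log10(m), bins=40)
    print(counts)    # smooth decline, no preferred masses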

White dwarfs are the endpoints of stellar evolution. Because of mass
loss during the red giant phases, white dwarfs are expected to have
significantly less mass than their progenitor star. For example, the
Sun is expected to end up as a white dwarf with a mass of 0.6 Msun.
Astronomers use an initial-final mass relation (determined by
observations of open clusters) to predict the final white dwarf mass
for stars of different initial masses.

So here are some factors that determine the strange shape of the white
dwarf mass function.

1. The white dwarf mass distribution is truncated at both ends. Above
1.4 Msun, electron degeneracy pressure cannot support the white dwarf
against collapse to a neutron star. Below about 0.53
Msun, the progenitor stars have not had sufficient time to evolve off
of the main-sequence (via single star evolution) within the age of the
Galaxy.

2. White dwarfs passively cool and become dimmer with age until they
become undetectable. An optical survey like SDSS preferentially
detects the younger and brighter white dwarfs, which have spent much
less time as a white dwarf (< 100 Myr) than they did as a main
sequence star. Thus the white dwarf mass function is not only related
to the main-sequence mass function but also to the star formation
history. If there was a burst of star formation 2 Gyr ago, then we
would see a peak in the white dwarf mass distribution for progenitor
stars with a 2 Gyr lifetime. In open clusters (with all stars the same
age) all the white dwarfs have the same mass.

3. Stars less than 2.5 Msun develop a degenerate helium core. When
this core mass exceeds about 0.5 Msun, the helium core flash occurs,
removing the degeneracy and yielding a core helium burning star. This
core eventually becomes the white dwarf (with an additional ~0.1 Msun
of mass added from the product of subsequent shell burning.) The core
mass at the helium flash depends only weakly on the total mass of the
star, so all stars less than 2.5 Msun become ~0.6 Msun white dwarfs.
This is one of the factors explaining the strong peak at 0.6 Msun for
white dwarf masses.

4. White dwarfs with a mass less than 0.5 Msun have *only* been
found in binary systems. When a red giant expands it can lose mass to
its companion, never reach the helium core flash, and become a
helium-core white dwarf (as opposed to the usual carbon/oxygen white
dwarf). Subsequent evolution of the binary can result in the merger of
two white dwarfs, yielding high-mass white dwarfs of 0.8 Msun or higher.
Binary population synthesis models suggest that there may be peaks at
certain masses (Isern et al.
adsabs.harvard.edu/abs/2013ASPC..469...711), although it is not clear
if such peaks are present in the observations. (There are a lot of
selection effects I haven't mentioned here.)

[Mod. note: reformatted -- mjh]
  #27  
Old December 3rd 13, 08:25 AM posted to sci.astro.research
Robert L. Oldershaw
external usenet poster
 
Posts: 617
Default Southworth Det Eclips Binary Catalog

On Monday, December 2, 2013 4:33:38 PM UTC-5, Martin Hardcastle wrote:

Firstly, the number of objects in this range (15) is not unexpected
under a Poisson distribution. If you pick 32 objects from a uniform
distribution, you'll get 15 of them in about 1/3 of the range a
non-negligible fraction of the time (a few per cent). We would need a
much higher probability under the null hypothesis for the result to
look interesting. This is simple Poisson statistics which high-school
students should be able to do, though a computer helps. (And, to
forestall quibbling, the answer does not depend markedly on whether
the number we are talking about is 15 or 16, 30 or 32, and so on.)

----------------------------------------

I have just found something that really astonishes and intrigues me.

A piece in Physics Today states that the Standard Model predicts the
electron magnetic moment "to an astonishing accuracy of one part in a
trillion", roughly.

It goes on to say that because of the small uncertainty in the value
of the fine structure constant the prediction is 2.8 times less
precise than the measurement. The agreement is "to 1.1 +/- 0.8 parts
per trillion - to within 1.3 standard deviations."

Could someone explain to me how such an accurate measurement could
only be a 1.3 standard deviation result? Not that I will completely
understand since I seem to perceive, analyze and understand nature in
a very different way from regular SAR posters. Still, I'd like to hear
the argument.

Certainly we have been told time and time again that the Standard
Model correctly predicts g to very high accuracy and that this is a
very strong vindication of the SM. We are also told that particle
physics usually demands 5 sigma agreement. Are these statements
contradictory? What gives here???
  #28  
Old December 3rd 13, 08:32 AM posted to sci.astro.research
Robert L. Oldershaw
external usenet poster
 
Posts: 617
Default Southworth Det Eclips Binary Catalog

On Monday, December 2, 2013 4:34:29 PM UTC-5, Martin Hardcastle wrote:

This would be quite amusing if you weren't serious.

I've been working in observational astronomy for 20 years and I assure
you that it is perfectly possible for a result to be published at time
t2 which is of poorer quality than one published at time t1, where
t1 < t2, for any number of reasons:

---------------------------------------------------

So is your point that scientific precision and accuracy tend to get
worse with time? I don't think so.

[Mod. note: Correct, that is not my point. My point is that they don't
*necessarily* get better in such a way that later data are always
better than earlier ones. A single example of a case where, over a long
period, they have got better does not allow you to infer that in all
cases they will, and you require that for your argument -- mjh]

You argue that sometimes the measurement at t1 is better than a
measurement at t2. [Here I will refrain from a facetious comment.]
Obviously this is a possibility, but in general, accuracy improves
with time given a long enough time span and multiple independent
analyses.

Let's take a real world example that we are all familiar with: the
Hubble Constant. The early estimates were way too high, then they came
down to a more accurate level, then Sandage and de Vaucouleurs battled
over whether 50 or 100 km/sec/Mpc was more accurate, and now we are
measuring it at 75 +/-? km/sec/Mpc.

If accuracy did not generally improve with time, science would be a
fool's game. Right, Martin?

Now what was that about taking one's ideas seriously?
  #29  
Old December 3rd 13, 08:34 AM posted to sci.astro.research
Robert L. Oldershaw
external usenet poster
 
Posts: 617
Default Southworth Det Eclips Binary Catalog

On Monday, December 2, 2013 4:33:38 PM UTC-5, Martin Hardcastle wrote:
In article ,

Firstly, the number of objects in this range (15) is not unexpected
under a Poisson distribution. If you pick 32 objects from a uniform
distribution, you'll get 15 of them in about 1/3 of the range a
non-negligible fraction of the time (a few per cent). We would need a
much higher probability under the null hypothesis for the result to
look interesting. This is simple Poisson statistics which high-school
students should be able to do, though a computer helps. (And, to
forestall quibbling, the answer does not depend markedly on whether
the number we are talking about is 15 or 16, 30 or 32, and so on.)


It is interesting what one can do with statistics. One can find an
unpredicted bump on a graph in about the last place left to look for
it, and in spite of a large background signal and substantial error
bars on the points constituting the bump, one can call it the God
Particle with a 5-sigma confidence and claim to have discovered one of
the biggest finds of all time.

Perhaps I need to hire some particle physicists to lend me a hand.

Regarding the hypothesis of discreteness in stellar mass
distributions, I think a much larger sample size is what is needed to
test it in an unbiased way. One by one that sample is growing and I
can wait for decades, nature willing.


Secondly, since you've arbitrarily picked the range to look at, it's
very much less interesting even than that -- as Phillip pointed out,
you need some a priori choice of the hypothesis to test, rather than
one that's fine-tuned to get the 'best' result. (The same could be
said of arbitrary choices of what data to look at in the first place!)



What about going with a bin size of 0.02 solar mass and including all
data? Does that give any statistical indication of a non-uniform
distribution, as seems so visually compelling?
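
As a minimal sketch of that check in Python (the counts below are placeholders standing in for a 0.02 solar-mass binning of the real data):

    # Chi-square goodness-of-fit test of binned masses against uniformity.
    import numpy as np
    from scipy.stats import chisquare

    counts = np.array([4, 9, 3, 12, 5, 2, 10, 4, 6, 3])   # PLACEHOLDER bins
    expected = np.full(len(counts), counts.sum() / len(counts))

    stat, p_value = chisquare(counts, expected)
    print(stat, p_value)
    # A small p-value indicates non-uniformity; whether any structure sits
    # at *predicted* masses is the separate, a priori question raised above.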


Thirdly, and this is the critical point, the error bars on these data
show that they are completely inconsistent with the predictions of
your model, at a very high confidence level, as shown by the chi^2
test. This is the test that actually matters -- your prediction is
that these dispersions are *zero* within the errors and they are
clearly not consistent with that. Your model is thus very strongly
ruled out by these data. Your only way out -- and I presume this is
what you will now do -- is to claim that the data are much less good
than they purport to be and that therefore the model hasn't been properly
tested. (Unless, of course, you want to do what a good scientist would
do and discard a model which has failed every quantitative test that
it's been set.)


The comment about my "prediction is that these dispersions are *zero*
within the errors and..." has me flummoxed. I think it is quite likely
that not only do the measurements fluctuate about a mean, but the
actual physical masses themselves fluctuate about a mean (or as I have written in
published papers: fluctuate by a small amount about "preferred" mass
values). For a conceptual way to understand this, think of the
situation in quantum mechanics/atomic physics.

I worry that you have interpreted my general model in a highly
over-idealized Platonic manner that does not conform to the subtlety
and inherent deterministic chaos/complexity/uncertainty involved in an
infinite discrete fractal model.

I would be happy to discard your version of the model, but not mine,
given current empirical evidence.

Robert L. Oldershaw
http://www3.amherst.edu/~rloldershaw
Discrete Scale Relativity/Fractal Cosmology
  #30  
Old December 4th 13, 07:15 AM posted to sci.astro.research
Jonathan Thornburg [remove -animal to reply][_3_]
external usenet poster
 
Posts: 137
Default statistics (was: Southworth Det Eclips Binary Catalog)

Robert L. Oldershaw wrote:
A piece in Physics Today states that the Standard Model predicts the
electron magnetic moment "to an astonishing accuracy of one part in a
trillion", roughly.

It goes on to say that because of the small uncertainty in the value
of the fine structure constant the prediction is 2.8 times less
precise than the measurement. The agreement is "to 1.1 +/- 0.8 parts
per trillion - to within 1.3 standard deviations."

Could someone explain to me how such an accurate measurement could
only be a 1.3 standard deviation result?


Easy: the standard deviation is very small.

Certainly we have been told time and time again that the Standard
Model correctly predicts g to very high accuracy and that this is a
very strong vindication of the SM. We are also told that particle
physics usually demands 5 sigma agreement. Are these statements
contradictory?


No, these are "just" two slightly different senses of use of the phrase
"an N standard deviation result":

(a) [what's meant when we say that
"particle physics usually demands 5 sigma agreement"]
If we want to claim that a signal is NOT just a statistical
fluctuation in a background, then a common criterion is that
|signal-background| should exceed 5 * the calculated standard
deviation of the background. IF the background is well-modelled
by Gaussian noise (and there's not actually any other signal present),
then such a fluctuation would happen less than 1 time per million
trials.

(b) [what's meant by the "1.3 standard deviation" result you quoted]
If we want to compare an experimental result with a theoretical
prediction, then we need to compare the difference |experiment-theory|
with BOTH the experimental uncertainty and the theoretical uncertainty.
Usually we add these in quadrature, i.e., we compare
|experiment-theory| with the quadrature standard deviation
sigma := sqrt(variance(experiment)+variance(theory)).
If the experimental and theoretical errors are both well-modelled
by Gaussian statistics, then (for example) |experiment-theory|
will exceed 1.3*sigma about 19% of the time. So... there's
nothing particularly remarkable about that 1.3-sigma difference,
i.e., we conclude that the experiment agrees with the theory
to within their mutual errors.
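
Both tail probabilities are easy to reproduce; a minimal sketch in Python (scipy assumed):

    # Gaussian tail probabilities behind senses (a) and (b) above.
    from scipy.stats import norm

    # (a) one-sided 5-sigma exceedance of a Gaussian background:
    print(norm.sf(5.0))        # ~2.9e-7, i.e. less than 1 in a million trials

    # (b) two-sided 1.3-sigma exceedance, sigma being the quadrature
    #     combination sqrt(variance(experiment) + variance(theory)):
    print(2 * norm.sf(1.3))    # ~0.19, the "about 19% of the time" above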

Notice that in sense (a), a BIGGER difference means we've found something
exciting, i.e., a BIGGER difference means we can say we have proven
that there is in fact a signal there over and above the background,
whereas a SMALLER difference means we can only say "we didn't find anything".

In contrast, in sense (b) a BIGGER difference means that we've found
that the experiment and theory disagree, i.e., something is wrong with
one or both of them, while a SMALLER difference means we can say that
the experiment and the theoretical prediction agree to within their
mutual errors.



Not that I will completely
understand since I seem to perceive, analyze and understand nature in
a very different way from regular SAR posters.


Indeed. To improve your understanding, I recommend a careful study of

J. V. Wall and C. R. Jenkins
"Practical Statistics for Astronomers"
Cambridge University Press, 2003
ISBN-10: 0521456169
ISBN-13: 978-0521456166

--
-- "Jonathan Thornburg [remove -animal to reply]"
Dept of Astronomy & IUCSS, Indiana University, Bloomington, Indiana, USA
"There was of course no way of knowing whether you were being watched
at any given moment. How often, or on what system, the Thought Police
plugged in on any individual wire was guesswork. It was even conceivable
that they watched everybody all the time." -- George Orwell, "1984"
 



