French's Primordial Study and Schramm & Turner, 1997



 
 
  #1  
Old February 1st 04, 07:24 PM
greywolf42
Default French's Primordial Study and Schramm & Turner, 1997

In a thread entitled "French's Primordial Study", Ned provided a reference
for 'current data' that is 'more definitive' than the French study that
failed to support Ned's position on primordial isotope generation.

http://www.google.com/groups?selm=vk....supernews.com

However, at the time I was swamped, and had no time to divert from the main
issue. The reference faded from my mind with time. I've since run across
French again, and it's now time to address Ned's claims about his reference.

{snip discussion of French paper never addressed by Ned Wright}

Claim made by Ned Wright:
===========================
"The current data is much more definitive. For example, see
Figure 4 of Schramm and Turner, http://arXiv.org/abs/astro-ph/9706069."

"One dataset gives Y_p = 0.232+/-0.003(stat)+/-0.005(sys); while another
gives Y_p = 0.243+/-0.003(stat). The systematic errors affect the
scaling of Y_p but not the level at which the zero-intercept model is
rejected. With this data the helium proportional to oxygen model is
rejected by more than 80 standard deviations."
===========================

The quote from S&T is:
"There have been two recent determinations of the primeval abundance based
upon the He/H ratio measured in regions of hot, ionized gas (HII regions)
found in metal-poor, dwarf emission-line galaxies. Using one sample and
extrapolating to zero metallicity, Olive and Steigman [33] infer YP = 0.232
+- 0.003 (stat) +- 0.005 (sys); using a new sample of objects Izotov et al
[34] infer YP = 0.243 +- 0.003 (stat). Both data sets are shown in Fig. 4.
In brief, the current situation is ambiguous, both as to the primeval 4He
abundance and as to the consistency of the big-bang prediction."


I got a good laugh out of the combination of the title: "Big Bang
Nucleosynthesis Enters the Precision Era" and Figure 4.

Figure 4 is described as follows: "Helium-4 abundance vs. oxygen abundance
in metal-poor, dwarf emission-line galaxies. Right panel (triangles) is the
sample analyzed by Olive and Steigman [33]; left panel (circles) is the new
sample of Izotov et al [34]." Both figures are pure shotgun scatter
measurements, where every uncertainty bar dwarfs the total range of the
data. The data is more scattered than similar data (taken 15 years earlier)
from French or Peimbert and Torres-Peimbert (French's references). Olive
and Steigman's data shows no obvious trend (hovering between .24 and .26 for
all values of metallicity), while Izotov et al shows a trend down to
.22 for lower metallicities.

[33] K.A. Olive and G. Steigman, Astrophys. J. (Suppl.), (1995).
[34] Y. Izotov, T.X. Thuan, and V.A. Lipovetsky, Astrophys. J. Suppl. 108, 1
(1997).

The S&T paper actually is a poor source for experimental primordial He
values. The actual sources of the data are Olive and Steigman and Izotov et
al. Of these two sets of carefully-selected data, S&T have this to say:
"Turning to the data themselves; the two samples are in general agreement,
except for the downturn at the lowest metallicities which is seen in the
data analyzed by Olive and Steigman. (Skillman has recently also expressed
concern about the use of the lowest metallicity object, IZw18 [38].)"

Izotov states: "The galaxy I Zw 18 ... 0930+554 is of special interest,
since it is the most metal-deficient BCG known." But despite this
importance, Izotov excludes it: "We find that the most metal-deficient BCG
known, IZw 18, cannot be used for this purpose because of its abnormally low
He I line intensities." In other words, efforts are made to exclude
discrepant low-range data. The results are in 'general agreement' only
because the error bars swamp the data.


I may as well look at the sources of the data in S&T, since S&T is a
primarily theoretical paper.......


On the paper of Izotov et al, 1997
= = = = = = = = = = = = = = = = = = = = = = = = = = =
Only one galaxy is shared between the French study and Izotov (I Zw 18 or
0930+554). French finds an He value of .052 (about 16%). Interestingly,
the only galaxy shared has the lowest He value of the 14 galaxies plotted
on French's figure 6. (Izotov also excludes the other low-metallicity
galaxies used in French.)

Both French and Izotov provide oxygen values for this galaxy. Izotov in
Table 4 provides 7.22 +- .01*, or a chemical abundance of 1.7E-5 (with a 5%
error). French provides 1.8E-5 chemical abundance (with a 10% error). So
French and Izotov agree on oxygen (the primary heavy element marker). This
is not surprising -- Izotov is using the same methodology used by French
15 years earlier. Thus, we cannot ignore the prior French values simply
because they are older.

*Log N(x) with H == 12.00
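
A quick worked conversion of this notation (my own Python sketch, not taken
from either paper; the ~5% error quoted above presumably folds in more than
the bare 0.01 dex):

# Convert the logarithmic abundance 12 + log10(O/H) into a number ratio.
# Izotov's Table 4 value for I Zw 18 (0930+554) is 7.22 +- 0.01 on this scale.
log_oh = 7.22
ratio = 10 ** (log_oh - 12.0)      # O/H number ratio
frac_err = 10 ** 0.01 - 1          # fractional error from +-0.01 dex, ~2.3%
print(f"O/H = {ratio:.2e} (+/- {100 * frac_err:.1f}% from the 0.01 dex alone)")
# Prints O/H = 1.66e-05, i.e. the ~1.7E-5 chemical abundance quoted above.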

Izotov notes that he has thrown out some data when calculating his He
results:

"Our sample contains a large number of low-metallicity
galaxies with spectra obtained and reduced in a homoge-
neous way. We combine the data in the present paper with
the data in to improve statistics and increase the Paper I
range of oxygen and nitrogen abundances for regression
fitting to determine the primordial helium abundance.
However, we have not included 10 H II regions from the
present sample, using the following rejection criteria :"

So, Izotov is going to throw away data to 'improve statistics.'  Classic
data selection. But let's see if there's any real rationale (i.e. errors in
the data).

"1. The H II region is faint and its spectrum is too noisy
for helium abundance determination. Using this criterion,
we rejected the galaxies 0749+568, 0749+582, 0907+543,
0943+561A, and 1116+583B."

Well, faint and noisy signals are valid reasons -- so long as the criteria
are determined before He values are calculated. However, if it is done to
arbitrarily 'improve' the chi-squared result (as Izotov admits), then this
is a textbook case of what Babbage calls 'clipping' the data. And looking
at Table 4 clearly shows that these regions have lower statistical error
from noise than other regions that Izotov kept.

"2. There is a large spread in the individual determinations
of the ionic helium abundance from the He I 4471, 5876,
and 6678 lines as compared to the mean value (Table 5).
A galaxy is rejected when the deviation of an individual
determination is more than 20% from the mean value.
Using this criterion, we removed 1358+576, 1441+294,
and 1535+554."

Variations from the mean are not valid reasons to exclude data. This is
another textbook case of 'clipping' data to arbitrarily improve the apparent
statistics.

"3. The galaxy shows strong Balmer underlying absorption
features and weak He I emission lines, which makes the
measurements of He I line intensities difficult. Using this
criterion, we rejected 1319+579B."

As in criterion 1, this may be valid so long as the criteria are determined
before He values are calculated. However, the error bars for this region
for both He and heavy element abundances are below those of galaxy data which
Izotov kept. Hence Izotov is simply 'clipping' another data point to
'improve' his statistics.


But now -- after all the quasi-valid 'clipping' criteria have been
applied -- we come to that special case: the one that would disprove the
BBN -- if valid. Izotov creates a special section just to rationalize
removing I Zw 18 from the data pool:

"Finally, we have rejected the BCG I Zw 18. This galaxy has the lowest
oxygen abundance known and has played an important role in the past for the
(Z /50) determination of the primordial helium abundance. However, the He I
line intensities in this galaxy (Table 3) are unusually low as compared to
other very metal-deficient BCGs ... The derived helium mass fraction is
only Y=0.19 in the case of Brocklehurst's (1972) and Y=0.21 in the case of
emissivities Smits's (1996) emissivities (Table 5), significantly lower than
the values derived for other low-metallicity galaxies. ...."

And -- one might add -- significantly below the estimates of French of
0.16 -- which French declares an upper bound.

A VERY long section lists various authors who have studied the problem with
care, observation, and theory. Yet I Zw 18 refuses to respond to all
efforts. So Izotov concludes: "But these assumptions** are very uncertain,
and as long as they are not well understood, I Zw 18 CANNOT BE USED FOR THE
DETERMINATION OF THE PRIMORDIAL HELIUM ABUNDANCE." (emphasis in original)

** The various theoretical attempts to explain away an observation at odds
with the BBN.


One of the primary reasons for using low-metallicity galaxies is that the
lower the metallicity, the higher the quality of the data -- at least
according to O&S. Thus, deleting the best-studied but lowest-metallicity galaxy is
completely unjustifiable.

After 'clipping' 10 'outlier' observations, Izotov is left with only 19
remaining observations. (Izotov has thrown out over a third of his data.)
To make up for this wholesale emasculation of the data set, he brings in 8
results from other papers (without discussing the details). Babbage calls
this 'padding' the data. It's not quite drylabbing, but Izotov has had a
free hand in selecting only those data points that further 'improve' his
statistics.

= = = = = = = = = = = = = = = = = = = = = = = = = = =


On the paper of Olive and Steigman, 1995
= = = = = = = = = = = = = = = = = = = = = = = = = = =
Unlike Izotov, O&S do not provide any actual data for direct evaluation.

Olive and Steigman do not obtain their own data. They borrow data from
"Skillman, et al (1994)"***, claiming 49 H II regions, including "11 new,
very metal-poor H II regions." However, O&S's Figures 5 & 6 (the source of
Figure 4 in T&S) only include 41 data points. This is because O&S have also
'clipped' the data -- removing various 'outliers' in order to 'improve' the
statistics. An interesting exception is the case of the eight 'N/O vs O/H
outliers'. O&S have kept these 'outliers' because they are closer to the
desired He/H vs. O/H (and N/H) fit. And while O&S consider this to be a
bit discrepant from theory, since the inclusion of these 'problem'
observations improves the statistical fit, O&S keep them.

*** The reference is "Elemental Abundances from Extremely Low Metallicity H
II Regions: A Higher Primordial He Abundance?", Skillman et al, 1994, ADS,
(1994dwga.work..519S). Neither abstract nor paper is available on ADS. Nor
is it found in an arXiv search. Nor was the abstract or content of a similar
paper on the same subject found on ADS. (Terlevich, E.; Skillman, E.
D.; Terlevich, R, "Primordial Helium from Extremely Metal-Poor Galaxies",
The Light Element Abundances, Proceedings of an ESO/EIPC Workshop).
Possibly, this paper was not actually published. If so, there is no
documentation backing up Olive and Steigman's paper.

{Boy this is such an 'improvement' over French. }

O&S simply ignore French's study -- though they list several other studies
going all the way back to Peimbert and Torres-Peimbert (upon which French
also based his work). This is interesting. Despite the importance that Ned
Wright attached to the French study, both his 'modern' references avoid
French like the plague....

The reason for this is straightforward when you read on: "Virtually all
analyses agree that 0.22 <= Yp <= 0.24. The problems -- and
disagreements -- arise from the quest for the third significant figure in
Yp." French -- of course -- is one of those irritating results that
necessitated O&S to use the word 'virtually'. This is just another form of
'clipping' of unwanted observations. The attitude is obvious, because if
there is disagreement about the second significant figure (a range of .22 to
.24), then O&S's statement is transparently specious on disagreements of the
'third significant figure.'
= = = = = = = = = = = = = = = = = = = = = = = = = = =


One of the subjects discussed in French (and P&TP) -- but ignored in S&T,
Izotov and Olive and Steigman -- is that the He concentration data is an
*upper bound*. This is a significant oversight in S&T. (French observed
young galaxies with He abundances as low as 11%, and provided reasoning that
these abundances were 'real' -- even though they are far below the
theoretical Big Bang 'primordial' values.)


It is quite clear that the institutional pressure to conform to the BBN model
is overwhelming. French began the process by simply avoiding his own data
when calculating the metallicity slope (back-calculating from the BBN
theory). The three later authors (S&T, O&S and Izotov) are more devious --
simply clipping discrepant data, and padding the dataset when clipping alone
won't provide sufficient adjustment.


So, it seems that Ned's claims about 'more definitive' recent work are even
more at sea than his claims about French.

Courtesy copy provided to Ned Wright.

--
greywolf42
ubi dubium ibi libertas
{remove planet for return e-mail}








  #2  
Old June 20th 04, 06:59 AM
Joseph Lazio
Default French's Primordial Study and Schramm & Turner, 1997

[greywolf has done an exhaustive analysis of the various papers. I
have not taken the time to pore through all of the papers he cites, as
they are quite long (and I'm behind on fifty other things), but I did
want to comment on a few things. I should also warn that I am not an
expert on elemental abundance measurements or optical spectral line
observations, so my ability to discuss this in any great detail will
probably be limited.]

"g" == greywolf42 writes:


g In a thread entitled "French's Primordial Study", Ned provided a
g reference for 'current data' that is 'more definitive' than the
g French study that failed to support Ned's position on primordial
g isotope generation.
[...]

g Figure 4 is described as follows: "Helium-4 abundance vs. oxygen
g abundance in metal-poor, dwarf emission-line galaxies. Right panel
g (triangles) is the sample analyzed by Olive and Steigman [33]; left
g panel (circles) is the new sample of Izotov et al [34]." Both
g figures are pure shotgun scatter measurements, where every
g uncertainty bar dwarfs the total range of the data.

Of course the fact that the uncertainty on every point is larger than
the total range of the data does not prevent one from determining the
mean of the data fairly well. In general, if the typical uncertainty
on a datum is s, then the uncertainty on the mean of the data derived
from N data is s/\sqrt(N). For data samples containing, say, 20 data,
that means that the mean can be derived with about 5 times less
uncertainty than the individual data.
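
A minimal numerical illustration of that s/\sqrt(N) point (my own Python
sketch; the numbers are invented, not from any of the papers discussed):

import numpy as np

# Simulate N measurements of a single quantity, each with uncertainty s
# that is larger than any real spread in the quantity itself.
rng = np.random.default_rng(42)
true_value = 0.24          # pretend "true" helium mass fraction
s = 0.02                   # per-measurement uncertainty
N = 20

data = true_value + s * rng.standard_normal(N)
mean = data.mean()
sem = s / np.sqrt(N)       # uncertainty on the mean: s / sqrt(N)

print(f"mean = {mean:.4f} +/- {sem:.4f}")   # roughly 0.24 +/- 0.0045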

g The data is more scattered than similar data (...) from French or
g Peimbert and Torres-Peimbert (...). Olive and Steigman's data
g shows no obvious trend (hovering between .24 and .26 for all values
g of metallicity), while Izotov et al shows a trend down to .22 for
g lower metallicities.

Although you ridicule this as not being precision cosmology, it's
worth paying attention to what's being discussed. We're arguing about
whether a quantity is around 0.25 or 0.22, in other words about 10%
effects. Some of us remember when cosmological arguments were over
200% discrepancies (like, Is the Hubble constant 50 km/s/Mpc or 100
km/s/Mpc?). Getting down to the 10% level shouldn't be taken lightly.

[...]
g Izotov states: "The galaxy I Zw 18 ... 0930+554 is of special
g interest, since it is the most metal-deficient BCG known." But
g despite this importance, Izotov excludes it: "We find that the most
g metal-deficient BCG known, IZw 18, cannot be used for this purpose
g because of its abnormally low He I line intensities." In other
g words, efforts are made to exclude discrepant low-range data. The
g results are in 'general agreement' only because the error bars
g swamp the data.

Actually, Izotov et al. state at least two potential systematic
effects that could produce anomalously low He I line intensities. I
haven't tracked down all of their references, but, taken at face
value, it's not obvious to me that they are "cherry picking" their data.


[...]
g Isotov notes that he has thrown out some data when calculating his
g He results: "... we have not included 10 H II regions from the
g present sample, using the following rejection criteria:"

g So, Izotov is going to throw away data to 'improve statistics.'
g Classic data selection. But let's see if there's any real
g rationale (i.e. errors in the data).

Before going pejorative, one should see if there is a reasonable
rationale. Let's take case #1.

g "1. The H II region is faint and its spectrum is too noisy for
g helium abundance determination. Using this criterion, we rejected
g the galaxies 0749+568, 0749+582, 0907+543, 0943+561A, and
g 1116+583B."

g Well, faint and noisy signals are valid reasons -- so long as the
g criteria are determined before He values are calculated. However,
g if it is done to arbitrarily 'improve' the chi-squared result (as
g Izotov admits), then this is a textbook case of what Babbage calls
g 'clipping' the data. And looking at Table 4 clearly shows that
g these regions have lower statistical error from noise than other
g regions that Izotov kept.

I can find nowhere where Izotov et al. admit that they reject these
galaxies solely to improve the fit, as you claim. I also find nowhere
that they claim that they reject these galaxies after calculating the
He values, as you imply. Moreover, doing spot checks on Table 4, I
reach the exact opposite conclusion to yours: the uncertainties on
the abundances for these galaxies are systematically higher than for other
galaxies in their sample. One might wonder why they bother throwing them
out, given that their uncertainties seem to be so large that they would
contribute little to the final result. OTOH, if they contribute
little to the final result, what's the point of using them?

[...]
g The reason for this is straightforward when you read on: "Virtually
g all analyses agree that 0.22 <= Yp <= 0.24. The problems -- and
g disagreements -- arise from the quest for the third significant
g figure in Yp." French -- of course -- is one of those irritating
g results that necessitated O&S to use the word 'virtually'. This is
g just another form of 'clipping' of unwanted observations. The
g attitude is obvious, because if there is disagreement about the
g second significant figure (a range of .22 to .24), then O&S's
g statement is transparently specious on disagreements of the 'third
g significant figure.'

I'll remind the reader that disagreements about the second digit still
amount to arguing about a 10% effect.

Somehow I'm missing the big picture in all of this. Suppose, for the
sake of the argument, that I say 0.2 < Yp < 0.25. That includes all
analyses, right? The Big Bang nucleosynthesis (BBN) prediction sits
in the middle of this range. Moreover, the BBN value is easy to
understand, as helium arises from fusion of hydrogen, which occurs
either in stellar cores or for a brief instant in the early Universe.

Now suppose that the Big Bang is wrong. Then what's Yp? Couldn't it
be anywhere in the range 0 < Yp < 1? Doesn't it seem just a bit weird
that, if the Big Bang model is wrong, the Yp value just happens
to be about what the BB predicts?

--
Lt. Lazio, HTML police | e-mail:
No means no, stop rape. |
http://patriot.net/%7Ejlazio/
sci.astro FAQ at http://sciastro.astronomy.net/sci.astro.html
  #3  
Old June 22nd 04, 12:37 AM
greywolf42
Default French's Primordial Study and Schramm & Turner, 1997

Joseph Lazio wrote in message
...

[greywolf has done an exhaustive analysis of the various papers. I
have not taken the time to pore through all of the papers he cites, as
they are quite long (and I'm behind on fifty other things), but I did
want to comment on a few things. I should also warn that I am not an
expert on elemental abundance measurements or optical spectral line
observations, so my ability to discuss this in any great detail will
probably be limited.]


I'm always glad to receive reasoned counter-arguments on my posts. No
matter how long the delay. (4 1/2 months is a bit longer than usual.) Take
your time and feel free to comment further whenever (and if ever) you get
around to it.

"g" == greywolf42 writes:


g In a thread entitled "French's Primordial Study", Ned provided a
g reference for 'current data' that is 'more definitive' than the
g French study that failed to support Ned's position on primordial
g isotope generation.
[...]

g Figure 4 is described as follows: "Helium-4 abundance vs. oxygen
g abundance in metal-poor, dwarf emission-line galaxies. Right panel
g (triangles) is the sample analyzed by Olive and Steigman [33]; left
g panel (circles) is the new sample of Izotov et al [34]." Both
g figures are pure shotgun scatter measurements, where every
g uncertainty bar dwarfs the total range of the data.

Of course the fact that the uncertainty on every point is larger than
the total range of the data does not prevent one from determining the
mean of the data fairly well. In general, if the typical uncertainty
on a datum is s, then the uncertainty on the mean of the data derived
from N data is s/\sqrt(N). For data samples containing, say, 20 data,
that means that the mean can be derived with about 5 times less
uncertainty than the individual data.


This is theoretically true, but only in Bayesian statistics. My point is
that there is no support in such a noisy distribution for a linear fit.
Certainly one can impress a linear fit to the data. However, there is no
experimental support *FOR* the linear fit in this case. One can draw *any*
straight line they wish through a shotgun scatter plot -- and get
'uncertainty of the mean'.

g The data is more scattered than similar data (...) from French or
g Peimbert and Torres-Peimbert (...). Olive and Steigman's data
g shows no obvious trend (hovering between .24 and .26 for all values
g of metallicity), while Izotov et al shows a trend down to .22 for
g lower metallicities.

Although you ridicule this as not being precision cosmology, it's
worth paying attention to what's being discussed.


My ridicule was earlier. Here I'm simply pointing out that the 'newer' data
is far more noisy than prior studies.

We're arguing about
whether a quantity is around 0.25 or 0.22, in other words about 10%
effects.


Ah, but that's the point. We *aren't* simply discussing a theoretical
property value of .22 or .25. We are discussing the actual data that is
below .20 (down to .11). The data does not support the BB at all, let
alone a 'high precision' claim.

This is one of the problems of not having taken the time to go back to the
original papers. If you've read the original papers, you see that Yp *was*
well below .20. And you can see the theoretical angst as observation fails
to back up the Big Bang .... again.

Indeed, the primary purpose of this post was to follow up on Ned Wright's
(false) claim about the superiority of the newer studies over the French
(and Peimbert and Torres-Peimbert) studies. It is self-evident that, from a
data precision standpoint, the prior studies are superior to the 'newer'
ones.

Some of us remember when cosmological arguments were over
200% discrepancies (like, Is the Hubble constant 50 km/s/Mpc or 100
km/s/Mpc?). Getting down to the 10% level shouldn't be taken lightly.


It should be taken lightly (and even with ridicule) if that claim to 10% is
actually a pure fiction (that arises solely from data selection). And it
is, in the above claims, because the actual number is a factor of 100% too
low for the BB. "Unwanted" data has simply been thrown out.

[...]


I'm curious why you felt compelled to snip the citations to the papers that
we are discussing.

[33] K.A. Olive and G. Steigman, Astrophys. J. (Suppl.), (1995).
[34] Y. Izotov, T.X. Thuan, and V.A. Lipovetsky, Astrophys. J. Suppl. 108, 1
(1997).


g Izotov states: "The galaxy I Zw 18 ... 0930+554 is of special
g interest, since it is the most metal-deficient BCG known." But
g despite this importance, Izotov excludes it: "We find that the most
g metal-deficient BCG known, IZw 18, cannot be used for this purpose
g because of its abnormally low He I line intensities." In other
g words, efforts are made to exclude discrepant low-range data. The
g results are in 'general agreement' only because the error bars
g swamp the data.

Actually, Izotov et al. state at least two potential systematic
effects that could produce anomalously low He I line intensities. I
haven't tracked down all of their references, but, taken at face
value, it's not obvious to me that they are "cherry picking" their data.


Of course, if one takes a claim at face value, one won't question that
claim. What you fail to consider is that the other 'systematic effects'
affect *ALL* their galaxies. They only throw out the one that disproves
their theory: the best-studied, most metal-deficient BCG known. One
shouldn't throw out the 'best' data simply because it disproves your pet
theory.


[...]


g Izotov notes that he has thrown out some data when calculating his
g He results: "... we have not included 10 H II regions from the
g present sample, using the following rejection criteria:"


{A horribly improper ellipsis. See below.}

g So, Izotov is going to throw away data to 'improve statistics.'
g Classic data selection. But let's see if there's any real
g rationale (i.e. errors in the data).

Before going pejorative, one should see if there is a reasonable
rationale.


Throwing out data *is* classic data selection. In my view, it *always*
deserves the pejorative connotation.

Let's take case #1.

g "1. The H II region is faint and its spectrum is too noisy for
g helium abundance determination. Using this criterion, we rejected
g the galaxies 0749+568, 0749+582, 0907+543, 0943+561A, and
g 1116+583B."

g Well, faint and noisy signals are valid reasons -- so long as the
g criteria are determined before He values are calculated. However,
g if it is done to arbitrarily 'improve' the chi-squared result (as
g Izotov admits), then this is a textbook case of what Babbage calls
g 'clipping' the data. And looking at Table 4 clearly shows that
g these regions have lower statistical error from noise than other
g regions that Izotov kept.

I can find nowhere where Izotov et al. admit that they reject these
galaxies solely to improve the fit, as you claim.


I provided the explicit quote, and you elided it. Now you reword my claim,
and in turn claim that you can't find such a quote. Here is the full quote:
==================
Izotov notes that he has thrown out some data when calculating his He
results:

"Our sample contains a large number of low-metallicity
galaxies with spectra obtained and reduced in a homogeneous
way. We combine the data in the present paper with the data
in Paper I to improve statistics and increase the range of
oxygen and nitrogen abundances for regression fitting to
determine the primordial helium abundance. However, we have
not included 10 H II regions from the present sample, using
the following rejection criteria:"
==================

Note the words "to improve statistics". The only reason that one *removes*
data 'to improve statistics' is to arbitrarily improve the apparent
chi-squared result by removal of outliers.

I also find nowhere
that they claim that they reject these galaxies after calculating the
He values, as you imply.


Your statement is true. However, it is ludicrous to imagine that they did
not perform a calculation of He values for every galaxy. Especially when
they discuss the uncertainty in the He values for every galaxy.

Moreover, doing spot checks on Table 4, I
reach the exact opposite conclusion to yours: the uncertainties on
the abundances for these galaxies are systematically higher than for other
galaxies in their sample.


This is not the opposite of my conclusion. Indeed, it says nothing
whatsoever about my conclusion.

One might wonder why they bother throwing them
out, given that their uncertainties seem to be so large that they would
contribute little to the final result. OTOH, if they contribute
little to the final result, what's the point of using them?


If one uses them, you'll note that the calculated primordial He value
becomes lower. The lower the primordial He, the bigger the problem for BB theory.
It is not the 'uncertainty' in the data that caused these galaxies to be
arbitrarily excluded. It is the fact that including them helps give the
'wrong' answer for BB theory.

[...]
g The reason for this is straightforward when you read on: "Virtually
g all analyses agree that 0.22 <= Yp <= 0.24. The problems -- and
g disagreements -- arise from the quest for the third significant
g figure in Yp." French -- of course -- is one of those irritating
g results that necessitated O&S to use the word 'virtually'. This is
g just another form of 'clipping' of unwanted observations. The
g attitude is obvious, because if there is disagreement about the
g second significant figure (a range of .22 to .24), then O&S's
g statement is transparently specious on disagreements of the 'third
g significant figure.'

I'll remind the reader that disagreements about the second digit still
amount to arguing about a 10% effect.


And I'll remind the reader that the argument is *not* simply between .22 and
.24, but that there exist documented, observed galaxies with .11 -- which,
theoretically, cannot exist, and which are excluded from Izotov and similar
studies *solely* because of this contradiction to theory.

You are working from the admitted handicap of not reading the background
papers.

Somehow I'm missing the big picture in all of this. Suppose, for the
sake of the argument, that I say 0.2 < Yp < 0.25. That includes all
analyses, right?


You are indeed missing the big picture. To see the big picture, one must
read *all* the background papers.

The big picture is that you would have to say that, to include all analyses
(and data points), you would show 0.11 < Yp *max* < .25. Indeed one cannot
support .22 or .25 without going out of one's way to avoid irritating data.

The Big Bang nucleosynthesis (BBN) prediction sits
in the middle of this range.


Yes, the BB ad hoc value for Yp is in the middle of this range -- which is
why none of these later authors will admit either data or prior studies that
are 'too low' for BB theory. They have been consigned to the memory hole.

Moreover, the BBN value is easy to
understand, as helium arises from fusion of hydrogen, which occurs
either in stellar cores or for a brief instant in the early Universe.


The theory that you favor is irrelevant to this discussion of astronomical
observations.

Now suppose that the Big Bang is wrong. Then what's Yp?


If the big bang is 'wrong', then Yp (a theoretical parameter of the BB) does
not exist.

Couldn't it
be anywhere in the range 0 < Yp < 1? Doesn't it seem just a bit weird
that, if the Big Bang model is wrong, the Yp value just happens
to be about what the BB predicts?


It's not weird at all. The reported values of Yp have been modified over
and over, by throwing out data and ignoring prior work -- until the BB
theorists are happy.


However, the real universe remains out there. With several known galaxies
well below BB Yp values. Any one of which explicitly disproves the BB.
Because the 'real' Yp *cannot* be any higher than the *lowest* measured He/H
ratio. And *all* of the He/H measurements are upper bounds.

--
greywolf42
ubi dubium ibi libertas
{remove planet for return e-mail}


  #4  
Old June 22nd 04, 09:38 AM
Franz Heymann
Default French's Primordial Study and Schramm & Turner, 1997


"greywolf42" wrote in message
...
Joseph Lazio wrote in message
...

[greywolf has done an exhaustive analysis of the various papers. I
have not taken the time to pore through all of the papers he cites, as
they are quite long (and I'm behind on fifty other things), but I did
want to comment on a few things. I should also warn that I am not an
expert on elemental abundance measurements or optical spectral line
observations, so my ability to discuss this in any great detail will
probably be limited.]


I'm always glad to receive reasoned counter-arguments on my posts. No
matter how long the delay. (4 1/2 months is a bit longer than usual.)
Take your time and feel free to comment further whenever (and if ever)
you get around to it.

"g" == greywolf42 writes:


g In a thread entitled "French's Primordial Study", Ned provided a
g reference for 'current data' that is 'more definitive' than the
g French study that failed to support Ned's position on primordial
g isotope generation.
[...]

g Figure 4 is described as follows: "Helium-4 abundance vs. oxygen
g abundance in metal-poor, dwarf emission-line galaxies. Right panel
g (triangles) is the sample analyzed by Olive and Steigman [33]; left
g panel (circles) is the new sample of Izotov et al [34]." Both
g figures are pure shotgun scatter measurements, where every
g uncertainty bar dwarfs the total range of the data.

Of course the fact that the uncertainty on every point is larger than
the total range of the data does not prevent one from determining the
mean of the data fairly well. In general, if the typical uncertainty
on a datum is s, then the uncertainty on the mean of the data derived
from N data is s/\sqrt(N). For data samples containing, say, 20 data,
that means that the mean can be derived with about 5 times less
uncertainty than the individual data.


This is theoretically true, but only in Bayesian statistics.


That is incorrect.
Lazio's statement is correct in the case of ordinary old fashioned
least squares analysis.

My point is
that there is no support in such a noisy distribution for a linear fit.

You are wrong.
If the errors for the individual measurements are known (as they are
in the case under discussion), a correctly applied least squares fit to
the data will yield not only the values of the parameters and the
uncertainties associated with the errors, but also an estimator as to
the significance of the expression used to parametrise the data.

Certainly one can impress a linear fit to the data. However, there is no
experimental support *FOR* the linear fit in this case. One can draw *any*
straight line they wish through a shotgun scatter plot -- and get
'uncertainty of the mean'.


You are once again wrong. A good experimenter will determine the
chi-squared parameter of the fit. If the straight line was not a
statistically valid form for the parametrisation, the value obtained
for chi-squared would tell you so.
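
A minimal sketch of the procedure being described (mine, in Python, with
synthetic numbers; it is not a re-analysis of any of the data sets discussed
here): a weighted least-squares straight-line fit with known per-point
errors, judged by chi-squared per degree of freedom.

import numpy as np

rng = np.random.default_rng(1)

# Synthetic data drawn from a genuinely linear relation y = a + b*x,
# with known per-point errors sigma.
x = np.linspace(0.2, 2.0, 10)
sigma = np.full_like(x, 0.004)
y = 0.23 + 0.01 * x + sigma * rng.standard_normal(x.size)

# Weighted least squares for y = a + b*x.
w = 1.0 / sigma**2
A = np.vstack([np.ones_like(x), x]).T
cov = np.linalg.inv(A.T @ (w[:, None] * A))   # parameter covariance matrix
a, b = cov @ A.T @ (w * y)

chi2 = np.sum(((y - (a + b * x)) / sigma) ** 2)
dof = x.size - 2
print(f"a = {a:.4f} +/- {np.sqrt(cov[0, 0]):.4f}, "
      f"b = {b:.4f} +/- {np.sqrt(cov[1, 1]):.4f}, chi2/dof = {chi2 / dof:.2f}")
# chi2/dof near 1 says the straight line is an acceptable parametrisation;
# chi2/dof much larger than 1 would say it is not.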

[snip]

Franz


  #5  
Old June 22nd 04, 05:17 PM
greywolf42
Default French's Primordial Study and Schramm & Turner, 1997

Franz Heymann wrote in message
...

"greywolf42" wrote in message
...
Joseph Lazio wrote in message
...


{snip higher levels}

Of course the fact that the uncertainty on every point is larger
than the total range of the data does not prevent one from
determining the mean of the data fairly well. In general, if the
typical uncertainty on a datum is s, then the uncertainty on the
mean of the data derived from N data is s/\sqrt(N). For data
samples containing, say, 20 data, that means that the mean
can be derived with about 5 times less uncertainty than the
individual data.


This is theoretically true, but only in Bayesian statistics.


That is incorrect.
Lazio's statement is correct in the case of ordinary old fashioned
least squares analysis.


How did you determine that the relationship was linear, Franz? This isn't a
case of simply finding the mean of several measurements of a single value.
(Which I believe Joseph understood, even though he used the improper term
'mean of the data.')

My point is
that there is no support in such a noisy distribution for a linear
fit.


You are wrong.
If the errors for the individual measurements are known (as they are
in the case under discussion), a correctly applied least squares fit to
the data will yield not only the values of the parameters and the
uncertainties associated with the errors, but also an estimator as to
the significance of the expression used to parametrise the data.


How did you determine that the relationship was linear, Franz?

Certainly one can impress a linear fit to the data. However, there
is no experimental support *FOR* the linear fit in this case. One can
draw *any* straight line they wish through a shotgun scatter plot --
and get 'uncertainty of the mean'.


You are once again wrong. A good experimenter will determine the
chi-squared parameter of the fit. If the straight line was not a
statistically valid form for the parametrisation, the value obtained
for chi-squared would tell you so.


A chi-squared value can be obtained for any straight line drawn through
otherwise random (or even non-random) data. Now, one can pick the 'best' of
the infinite number of fits. But this was not done. The assumption of the
Big Bang was used to determine the line (a Bayesian prior), then selected
data was used (with discordant data thrown out).

At least Joseph was trying to address the science issues. You are simply
frothing at the mouth (as usual).


[snip]



--
greywolf42
ubi dubium ibi libertas
{remove planet for return e-mail}


  #6  
Old June 22nd 04, 10:23 PM
Franz Heymann
Default French's Primordial Study and Schramm & Turner, 1997


"greywolf42" wrote in message
...
Franz Heymann wrote in message
...

"greywolf42" wrote in message
...
Joseph Lazio wrote in message
...


{snip higher levels}

Of course the fact that the uncertainty on every point is larger
than the total range of the data does not prevent one from
determining the mean of the data fairly well. In general, if the
typical uncertainty on a datum is s, then the uncertainty on the
mean of the data derived from N data is s/\sqrt(N). For data
samples containing, say, 20 data, that means that the mean
can be derived with about 5 times less uncertainty than the
individual data.

This is theoretically true, but only in Bayesian statistics.


That is incorrect.
Lazio's statement is correct in the case of ordinary old fashioned
least squares analysis.


How did you determine that the relationship was linear, Franz? This
isn't a case of simply finding the mean of several measurements of a
single value. (Which I believe Joseph understood, even though he used
the improper term 'mean of the data.')

My point is
that there is no support in such a noisy distribution for a linear fit.


You are wrong.
If the errors for the individual measurements are known, (as they

are
in the case under discussion) a correctly applied least squares

fit to
the data will yield no only the values of the parameters, the
uncertainties associated with the errors, but also an estimator as

to
the significance of the expression used to parametrise the data.


How did you determine that the relationship was linear, Franz?


Very simple. Fit a hypothesis that the relationship contains a square
term as well and study the values of the fitted parameters and their
associated errors. If your square term is insignificant, its
magnitude will be swamped by its error.
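
A minimal sketch of the test being described (mine, in Python, with made-up
and genuinely linear data): add an x^2 term to the fit and see whether its
coefficient is consistent with zero.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic data from a purely linear relation, with known errors.
x = np.linspace(0.2, 2.0, 15)
sigma = np.full_like(x, 0.004)
y = 0.23 + 0.01 * x + sigma * rng.standard_normal(x.size)

# Weighted least squares for y = a0 + a1*x + a2*x^2.
w = 1.0 / sigma**2
A = np.vstack([np.ones_like(x), x, x**2]).T
cov = np.linalg.inv(A.T @ (w[:, None] * A))   # parameter covariance matrix
a0, a1, a2 = cov @ A.T @ (w * y)
a2_err = np.sqrt(cov[2, 2])

print(f"a2 = {a2:.4f} +/- {a2_err:.4f}")
# For truly linear data, |a2| should be swamped by a2_err, i.e. the
# quadratic term is insignificant and the straight line suffices.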

Certainly one can impress a linear fit to the data. However, there
is no experimental support *FOR* the linear fit in this case. One can
draw *any* straight line they wish through a shotgun scatter plot --
and get 'uncertainty of the mean'.


You are once again wrong. A good experimenter will determine the
chi-squared parameter of the fit. If the straight line was not a
statistically valid form for the parametrisation, the value obtained
for chi-squared would tell you so.


A chi-squared value can be obtained for any straight line drawn through
otherwise random (or even non-random) data. Now, one can pick the 'best' of
the infinite number of fits. But this was not done. The assumption of the
Big Bang was used to determine the line (a Bayesian prior), then selected
data was used (with discordant data thrown out).


I was not commenting on a specific case of fitting parameters to a
specific set of data.
I was merely pointing out that you were burbling when you said

"This is theoretically true, but only in Bayesian statistics."

At least Joseph was trying to address the science issues. You are simply
frothing at the mouth (as usual).


If telling you that you are bull****ting is frothing at the mouth,
then so be it.

Franz



  #7  
Old June 24th 04, 03:48 PM
greywolf42
Default French's Primordial Study and Schramm & Turner, 1997

Franz Heymann wrote in message
...

"greywolf42" wrote in message
...
Franz Heymann wrote in message
...


{snip higher levels}

Lazio's statement is correct in the case of ordinary old
fashioned least squares analysis.


How did you determine that the relationship was linear, Franz? This
isn't a case of simply finding the mean of several measurements of a
single value. (Which I believe Joseph understood, even though he
used the improper term 'mean of the data.')


No response on the physics, I see.

{snip higher levels}

If the errors for the individual measurements are known (as they
are in the case under discussion), a correctly applied least squares
fit to the data will yield not only the values of the parameters and the
uncertainties associated with the errors, but also an estimator as
to the significance of the expression used to parametrise the data.


How did you determine that the relationship was linear, Franz?


Very simple. Fit a hypothesis that the relationship contains a square
term as well and study the values of the fitted parameters and their
associated errors. If your square term is insignificant, its
magnitude will be swamped by its error.


This is experimental science, Franz. There is no room for a hypothesis
here.

{snip higher levels}

A good experimenter will determine the
chi-squared parameter of the fit. If the straight line was not a
statistically valid form for the parametrisation, the value
obtained for chi-squared would tell you so.


A chi-squared value can be obtained for any straight line drawn
through otherwise random (or even non-random) data.


I see you don't respond to the physics.

Now, one
can pick the 'best' of the infinite number of fits. But this was not done.
The assumption of the Big Bang was used to determine the line
(a Bayesian prior), then selected data was used (with discordant data
thrown out).


I was not commenting on a specific case of fitting parameters to a
specific set of data.


Then your whole line of discussion is irrelevant.

I was merely pointing out that you were burbling when you said

"This is theoretically true, but only in Bayesian statistics."


And Franz is reduced to mere repetition and ad hominem.

At least Joseph was trying to address the science issues. You are
simply frothing at the mouth (as usual).


If telling you that you are bull****ting is frothing at the mouth,
then so be it.


Bye in this thread, troll.

--
greywolf42
ubi dubium ibi libertas
{remove planet for return e-mail}


  #8  
Old June 25th 04, 10:30 AM
Bjoern Feuerbacher
Default French's Primordial Study and Schramm & Turner, 1997

greywolf42 wrote:
Joseph Lazio wrote in message
...


[snip most]


Now suppose that the Big Bang is wrong. Then what's Yp?



If the big bang is 'wrong', then Yp (a theoretical parameter of the BB) does
not exist.


Pardon??????????????

Yp is the abundance of helium in the universe. Why on earth would that
parameter not exist if the BB is wrong?????


Couldn't it
be anywhere in the range 0 < Yp < 1? Doesn't it seem just a bit weird
that, if the Big Bang model is wrong, the Yp value just happens
to be about what the BB predicts?



It's not weird at all. The reported values of Yp have been modified over
and over, by throwing out data and ignoring prior work -- until the BB
theorists are happy.


Even if we include your claimed value of 0.11, it is still a fact
that the value predicted by the BBT lies in the range of the observed
data (0.11 < Yp < 0.25). Just coincidence?


However, the real universe remains out there. With several known galaxies
well below BB Yp values.


Even if this is right - is it just coincidence that the theoretically
predicted value lies in the observed range?


Any one of which explicitly disproves the BB. Because the 'real'
Yp *cannot* be any higher than the *lowest* measured He/H
ratio. And *all* of the He/H measurements are upper bounds.


Did it ever occur to you that there can be something called "systematic
errors" in measurements?

I don't claim that this is necessarily the case here - I only want to
mention that perhaps one should consider this possibility, too...


Bye,
Bjoern
  #9  
Old June 25th 04, 08:06 PM
greywolf42
Default French's Primordial Study and Schramm & Turner, 1997

Bjoern Feuerbacher wrote in message
...
greywolf42 wrote:
Joseph Lazio wrote in message
...


[snip most]


As soon as I sign off from Troll Franz in the thread, his tag-team buddy
Bjoern chimes in.

Now suppose that the Big Bang is wrong. Then what's Yp?


If the big bang is 'wrong', then Yp (a theoretical parameter of the BB)
does not exist.


Pardon??????????????

Yp is the abundance of helium in the universe. Why on earth would that
parameter not exist if the BB is wrong?????


Yp is the *PRIMORDIAL* concentration (ratio) of He (to H) in the Big Bang
theory. That's what the 'p' stands for!

It is calculated *for* the big bang theory. A steady state theory does not
necessarily have an equivalent to Yp. (Though it probably has a 'Y'.)

Couldn't it
be anywhere in the range 0 < Yp < 1? Doesn't it seem just a bit weird
that, if the Big Bang model is wrong, the Yp value just happens
to be about what the BB predicts?


It's not weird at all. The reported values of Yp have been modified
over and over, by throwing out data and ignoring prior work -- until
the BB theorists are happy.


Even if we include your claimed value of 0.11, it is still a fact
that the value predicted by the BBT lies in the range of the observed
data (0.11 < Yp < 0.25).


The proof by assertion. Unfortunately, your assertion is countered in all
those nice papers (referenced in this thread) that you seem not to have
read. At least Joseph made an effort to read some of the papers.

Just coincidence?


I see no coincidence. I see you simply asserting that what you would like
to be true, is true.

However, the real universe remains out there. With several known
galaxies well below BB Yp values.


Even if this is right - is it just coincidence that the theoretically
predicted value lies in the observed range?


The BBT predictions *don't* lie within that range, according to the
references above. (Hint: You get the 'wrong' answers for isotopic
abundances.) See the original rant by Ned Wright. IIRC, the BBT requires
about .21 to .24.

Any one of which explicitly disproves the BB. Because the 'real'
Yp *cannot* be any higher than the *lowest* measured He/H
ratio. And *all* of the He/H measurements are upper bounds.


Did it ever occur to you that there can be something called "systematic
errors" in measurements?


Why yes (greywolf said sweetly). The systematic error identified in the
above studies is the fact that those He values are UPPER LIMITS -- not
actual values (according to French and French's references). Scratch one BB
theory.

I don't claim that this is necessarily the case here - I only want to
mention that perhaps one should consider this possibility, too...


Horsefeathers. You are simply trolling in blind ignorance. You don't know
the theory (the meaning of Yp) and you haven't read any of the papers. Yet
you post just to argue.

Bye to you, too, troll.

--
greywolf42
ubi dubium ibi libertas
{remove planet for return e-mail}


  #10  
Old June 25th 04, 08:17 PM
Franz Heymann
Default French's Primordial Study and Schramm & Turner, 1997


"greywolf42" wrote in message
...
Franz Heymann wrote in message
...

"greywolf42" wrote in message
...
Franz Heymann wrote in message
...


{snip higher levels}

Lazio's statement is correct in the case of ordinary old
fashioned least squares analysis.

How did you determine that the relationship was linear, Franz? This
isn't a case of simply finding the mean of several measurements of a
single value. (Which I believe Joseph understood, even though he
used the improper term 'mean of the data.')


No response on the physics, I see.

{snip higher levels}

If the errors for the individual measurements are known (as they
are in the case under discussion), a correctly applied least squares
fit to the data will yield not only the values of the parameters and the
uncertainties associated with the errors, but also an estimator as
to the significance of the expression used to parametrise the data.

How did you determine that the relationship was linear, Franz?


Very simple. Fit a hypothesis that the relationship contains a square
term as well and study the values of the fitted parameters and their
associated errors. If your square term is insignificant, its
magnitude will be swamped by its error.


This is experimental science, Franz. There is no room for a hypothesis
here.


The hypothesis being tested would be
"The data is a sample from a set which is correctly parametrised by y
= a0 + a1*x + a2*x^2"

Since you have now lost the ball completely, I propose to let you go
and have a rest.

Franz


 



