A Space & astronomy forum. SpaceBanter.com


Good News for Big Bang theory



 
 
#261 - Chalky - January 18th 07, 09:02 AM - posted to sci.astro.research

Steve Willner wrote:

[galaxy peculiar velocities]

Oh No wrote:
This is the figure given by Riess. 300 km/s is given by
Astier. Note that these are 1 sigma figures.


That looks like the random component for field and loose groups. What
about velocity dispersions in rich clusters and the systematic flows?

Worse yet, the velocities may be
correlated with direction in the sky; think "Great Attractor."


If there is any such speculative thing, its existence should have had
some effect in the tests I ran. Those tests were entirely consistent
with isotropy and homogeneity.


Large structures are not speculative at all, and of course the data you
have show no such effect. The SNe are concentrated to very limited
regions of sky. (Have you even looked at the positions?) Global flows
will not show up in the data, but they will still bias the results in
any one direction. There's a reason Ned is using velocity
uncertainties of 1500 km/s.

Ned's
1500 km/s may be a bit on the high side, but it isn't ridiculous.


Bearing in mind that 400 km/s is already included, it is actually a
figure of 1900 km/s.


No, the dispersions add in quadrature. Ned uses 1500; you apparently
use 300. I suspect that's why your chi-square values are so
(unbelievably) low.
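[Moderator's note: to make the quadrature point concrete, here is a minimal Python sketch, not from any poster, using the figures quoted in this thread:]

```python
import math

def combine_in_quadrature(*sigmas):
    """Independent dispersion components add in quadrature, not linearly."""
    return math.sqrt(sum(s * s for s in sigmas))

# Figures quoted above: 400 km/s already included in the analysis,
# plus Ned's 1500 km/s peculiar-velocity uncertainty.
linear_sum = 400 + 1500                        # 1900 km/s -- not how errors combine
quadrature = combine_in_quadrature(400, 1500)  # ~1552 km/s

print(f"linear sum: {linear_sum} km/s, quadrature: {quadrature:.0f} km/s")
```

The quadrature total (about 1550 km/s) is well below the 1900 km/s obtained by adding linearly; it also illustrates why assuming 300 km/s instead of 1500 km/s shrinks the error bars and drives chi-square down.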

In your latest post (subject "Supernova results from ESSENCE") you
write:
One might say the teleconnection wins in a dead heat, with Chalky very
close behind.


I hope you would not say that. The results are a statistical draw;
they say nothing about which model is preferred.


I readily admit that my own knowledge of statistics is limited to what
I was taught about this subject as part of a Bachelor's (now Master's)
physics degree course, and on my reading of the additional books I was
then told to buy.

I thus have to rely on additional information from those with greater
expertise than me in this area when the discussion goes beyond that
statistics background, and so I feel I have to ask the following
further questions:

1) Ned's binning of data at maximum observed dimming, and either side
of this, provides the following empirical observational data:

sample size   z range         mean d(DM)   standard deviation (sigma)
31            0.369 - 0.460   0.1665       0.0406
31            0.461 - 0.562   0.2700       0.0375
29            0.526 - 0.620   0.1521       0.0395

Now, I think we can all agree that the mid range figure of 0.27 does
seem a bit high. However, I think I would agree with the implication of
one of Steve Willner's comments, that we could thus think in terms of
even greater "great attractors" as opposed to Charles's alternative
approach of simply throwing out all data that does not fit in with
prior theoretical preconceptions.

This leads me to conclude that we might obtain still more statistically
meaningful and accurate data over this entire range, by deriving the
total mean binned value for the entire range 0.369 < z < 0.620.

This makes sense theoretically, since Charles has already agreed that
all models give a pretty flat curve over this range.

Now, the simplest way to do this would be to just take the average of
0.1665, 0.2700, and 0.1521, over the entire range to give a mean of
0.1962.

However, I suspect it might be a bit more statistically rigorous to
weight each of these 3 contributors, in terms of their 'confidence'
(i.e. inversely in proportion to their standard deviations), to obtain
a more accurate mean. Doing this I get an overall mean of 0.19777.

Finally, it would be helpful to know what the standard deviation of
this combined set is. Presumably, since there are now about 3 times the
number of samples, the standard deviation will be smaller. However,
this takes me to the limit of my prior knowledge of statistics (or, at
least, my memory of same), and so further comments would be
appreciated.
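[Moderator's note: the conventional recipe, which I assume is what is being asked for here, is to weight each bin mean by the inverse of the variance of that mean, sigma^2/n, rather than by 1/sigma; the standard error of the combined mean then falls out directly. A sketch with the three bins tabulated above:]

```python
import math

# Bin statistics quoted above: (n, mean d(DM), sample standard deviation)
bins = [(31, 0.1665, 0.0406),
        (31, 0.2700, 0.0375),
        (29, 0.1521, 0.0395)]

# Standard error of each bin mean is sigma/sqrt(n); the conventional
# combination weights each mean by the inverse of that variance, n/sigma^2.
weights = [n / s**2 for n, _, s in bins]
wmean = sum(w * m for w, (_, m, _) in zip(weights, bins)) / sum(weights)
se = 1.0 / math.sqrt(sum(weights))  # standard error of the combined mean

print(f"inverse-variance weighted mean: {wmean:.4f}")
print(f"standard error of that mean:    {se:.4f}")
```

This gives a combined mean of about 0.200 (slightly above the 0.19777 from 1/sigma weighting) with a standard error of about 0.004, smaller than any single bin's, as expected from the roughly threefold larger sample.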

Chalky.
#262 - John (Liberty) Bell - January 20th 07, 09:04 AM - posted to sci.astro.research

Oh No wrote:

I have been using flat space standard models with Omega_lambda=1-Omega
for standard fits,


OK, that is what Chalky employed for his initial tabulation, using the
default values on Ned's cosmic calculator. However, we have had to
retabulate all that because, as Ned correctly pointed out, the Riess et
al data is based on an assumed Ho of 63.8, not 71.

What value have you been using for Ho?

We have also now done the corresponding tabulation for the closed dark
energy model which Ned considered. This does actually give a better fit
than FLAT EFE, but still not as good as Chalky's Law, when everything
is taken into consideration.

and teleconnection models with Omega_lambda=0.


Interesting. So, if I remember correctly, you find in your model that
Omega_M is about 2, for best fit.

You have thus replaced a hypothetical 'extra' dark energy
contribution of 0.73 in EFE, with a hypothetical 'extra' dark matter
contribution of 1.73 in EFE.

The difference between your curve round here and the other two is 0.034
magnitudes.


The exact figures for d(DM) max (according to the latest tabulation) are:

Chalky's Law   Best FLAT EFE   Best CLOSED EFE
0.208          0.147           0.192

The mean of Ned's 3 z bins near maximum dimming is 0.198

As I have already posted, the bin in this area shows that the data is a
freak, even using Chalky's law.


The bin at predicted maximum dimming gives a d (DM) max figure of 0.270
which certainly is 'freak'

However, the 2 bins on either side of this are 'freak' in the opposite
direction, giving a perfectly respectable mean value of 0.198, over the
whole 91 type 1a SNe in this z range.

The whole test is meaningless if you
cannot rely on accurately typed and collated data.


The test only becomes meaningless if you start throwing out the
statistical data selectively (as you have done).


John
#263 - Oh No - January 20th 07, 08:20 PM - posted to sci.astro.research

Thus spake "John (Liberty) Bell"
Oh No wrote:

I have been using flat space standard models with Omega_lambda=1-Omega
for standard fits,


OK, that is what Chalky employed for his initial tabulation, using the
default values on Ned's cosmic calculator. However, we have had to
retabulate all that because, as Ned correctly pointed out, the Riess et
al data is based on an assumed Ho of 63.8, not 71.

What value have you been using for Ho?


It actually makes no difference to these exercises, because H0 is
effectively floating. Any difference will be absorbed into the fitting
parameter, dM, but Riess says he used H0=64.5, so I don't think Ned was
correct about that.
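[Moderator's note: the floating-H0 point can be checked numerically. In any FRW model d_L = (c/H0)(1+z) * integral dz'/E(z'), so changing H0 shifts every distance modulus by the same constant, 5 log10(H0/H0'), which the fitted offset dM absorbs. A sketch, not from any poster, using the two H0 values quoted in this thread and an illustrative flat Omega_M=0.27 model:]

```python
import math

C_KMS = 299792.458  # speed of light, km/s

def E(z, om=0.27, ol=0.73):
    # Flat matter+Lambda expansion function (illustrative parameters).
    return math.sqrt(om * (1.0 + z) ** 3 + ol)

def distance_modulus(z, h0, steps=2000):
    # Trapezoidal integral for the comoving distance, then DM = 5 log10(d_L) + 25.
    dz = z / steps
    integral = sum(0.5 * dz * (1.0 / E(i * dz) + 1.0 / E((i + 1) * dz))
                   for i in range(steps))
    d_l = (C_KMS / h0) * (1.0 + z) * integral  # luminosity distance in Mpc
    return 5.0 * math.log10(d_l) + 25.0

# Changing H0 shifts the distance modulus by the same constant at every z,
# so a floating magnitude offset absorbs it completely.
shifts = [distance_modulus(z, 63.8) - distance_modulus(z, 71.0)
          for z in (0.1, 0.5, 1.0)]
print([round(s, 4) for s in shifts])  # each equals 5*log10(71.0/63.8), ~0.2322 mag
```

The shift is identical at every redshift, which is why the fit cannot distinguish H0 values once dM floats; only the shape of DM(z) constrains the density parameters.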

We have also now done the corresponding tabulation for the closed dark
energy model which Ned considered. This does actually give a better fit
than FLAT EFE, but still not as good as Chalky's Law, when everything
is taken into consideration.

and teleconnection models with Omega_lambda=0.


Interesting. So, if I remember correctly, you find in your model that
Omega_M is about 2, for best fit.

You have thus replaced a hypothetical 'extra' dark energy
contribution of 0.73 in EFE, with a hypothetical 'extra' dark matter
contribution of 1.73 in EFE.


No, on several counts. First, dark energy refers to Lambda, and is quite
distinct from dark matter. Second, the critical density is 1/4 of that of
the standard model, so a figure of Omega=2 corresponds to an actual
density of Omega=0.5 in the standard model, so the extra dark matter is
only 0.2. Finally, we know there is a lot of dark matter, and being dark
we cannot measure how much. That is not a problem. Cold dark matter is a
problem, but there is no immediate reason why any dark matter should be
cold in this model.
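[Moderator's note: Charles's arithmetic, as I read it, written out explicitly; the 0.3 figure for the accepted total matter density is my assumption, not his:]

```python
omega_tele = 2.0                     # best-fit matter density, teleconnection units
scale = 0.25                         # teleconnection critical density = 1/4 standard
omega_standard = omega_tele * scale  # same physical density in standard-model units
omega_accepted = 0.3                 # roughly the accepted matter density (assumed)
extra_dark_matter = omega_standard - omega_accepted

print(round(omega_standard, 2), round(extra_dark_matter, 2))  # 0.5, then ~0.2
```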

The bin at predicted maximum dimming gives a d (DM) max figure of 0.270
which certainly is 'freak'

However, the 2 bins on either side of this are 'freak' in the opposite
direction, giving a perfectly respectable mean value of 0.198, over the
whole 91 type 1a SNe in this z range.


Then you should start mistrusting your interpretation of the data.

The whole test is meaningless if you
cannot rely on accurately typed and collated data.


The test only becomes meaningless if you start throwing out the
statistical data selectively (as you have done).


You cannot carry out a scientific investigation in that manner. Your
insinuations are unfounded.



Regards

--
Charles Francis
substitute charles for NotI to email
#264 - John (Liberty) Bell - January 21st 07, 09:47 AM - posted to sci.astro.research

Oh No wrote:

Riess says he used H0=64.5, so I don't think Ned was
correct about that.


Let me guess. You are quoting from the 2004 paper, not the 2007 paper.

Thus spake "John (Liberty) Bell"
Oh No wrote:


The whole test is meaningless if you
cannot rely on accurately typed and collated data.


The test only becomes meaningless if you start throwing out the
statistical data selectively (as you have done).


You cannot carry out a scientific investigation in that manner. Your
insinuations are unfounded.


You can't have it both ways. Either your removals were selective or
they were random. If they were random, then Chalky's noise analysis is
correct. So is the obvious conclusion from the resultant noise figures.
Apart from wasting everybody's time, all that you have achieved with
your random removals, is a substantial increase in the random noise
exhibited by the set.

I am done here now too.

John (Liberty) Bell
http://global.accelerators.co.uk
(Change John to Liberty to respond by email)
#265 - Oh No - January 21st 07, 02:30 PM - posted to sci.astro.research

Thus spake "John (Liberty) Bell"
Oh No wrote:

Riess says he used H0=64.5, so I don't think Ned was
correct about that.


Let me guess. You are quoting from the 2004 paper, not the 2007 paper.


I am quoting from e-mail.

Thus spake "John (Liberty) Bell"
Oh No wrote:


The whole test is meaningless if you
cannot rely on accurately typed and collated data.

The test only becomes meaningless if you start throwing out the
statistical data selectively (as you have done).


You cannot carry out a scientific investigation in that manner. Your
insinuations are unfounded.


You can't have it both ways. Either your removals were selective or
they were random.


They were removed because they could be shown not to be a part of the
main population, mainly because they did not satisfy the criteria for
positive type 1a identification. It was also shown by noise analysis
that certain data sets cannot be combined, since they are not part of
compatible populations. That is not removal of data.

If they were random, then Chalky's noise analysis is
correct.


That is clearly wrong.

So is the obvious conclusion from the resultant noise figures.
Apart from wasting everybody's time, all that you have achieved with
your random removals, is a substantial increase in the random noise
exhibited by the set.


That is clearly wrong too.


Regards

--
Charles Francis
substitute charles for NotI to email
#266 - Chalky - January 31st 07, 05:18 PM - posted to sci.astro.research

On Jan 20, 8:20 pm, Oh No wrote:
Thus spake "John (Liberty) Bell"

Oh No wrote:


I have been using flat space standard models with Omega_lambda=1-Omega
for standard fits,


OK, that is what Chalky employed for his initial tabulation, using the
default values on Ned's cosmic calculator. However, we have had to
retabulate all that because, as Ned correctly pointed out, the Riess et
al data is based on an assumed Ho of 63.8, not 71.


What value have you been using for Ho?


It actually makes no difference to these exercises, because H0 is
effectively floating.


This is incorrect. Ho cancels out for the observed data tabulation,
but does have an influence for the theoretical curves.
Ned explains this, in terms of changing optimised cosmological
parameters, for models based on EFE, at http://www.astro.ucla.edu/~wright/sne_cosmology.html.
Similarly, it makes a difference in calculating the distance modulus
differences between Chalky's Law and the Milne (inertial) model.

Incidentally, you may want to look at the Wright reference again for
another reason. Ned has now included Gamma Ray Burst data which
extends the z range by a factor of ~ 4.

Chalky.
#267 - Oh No - January 31st 07, 06:18 PM - posted to sci.astro.research

Thus spake Chalky
On Jan 20, 8:20 pm, Oh No wrote:
Thus spake "John (Liberty) Bell"

Oh No wrote:


I have been using flat space standard models with Omega_lambda=1-Omega
for standard fits,


OK, that is what Chalky employed for his initial tabulation, using the
default values on Ned's cosmic calculator. However, we have had to
retabulate all that because, as Ned correctly pointed out, the Riess et
al data is based on an assumed Ho of 63.8, not 71.


What value have you been using for Ho?


It actually makes no difference to these exercises, because H0 is
effectively floating.


This is incorrect. Ho cancels out for the observed data tabulation,
but does have an influence for the theoretical curves.
Ned explains this, in terms of changing optimised cosmological
parameters, for models based on EFE, at
http://www.astro.ucla.edu/~wright/sne_cosmology.html.


It is not incorrect, and you would do well to understand Ned's
explanation before you make such bald statements.

Incidentally, you may want to look at the Wright reference again for
another reason. Ned has now included Gamma Ray Burst data which
extends the z range by a factor of ~ 4.


I have already seen the gamma ray burst papers. They are based on an
extremely suspect statistical analysis imv and I would caution anyone
against using them.



Regards

--
Charles Francis
substitute charles for NotI to email
#268 - Chalky - February 1st 07, 08:34 AM - posted to sci.astro.research

On Jan 31, 6:18 pm, Oh No wrote:
Thus spake Chalky

On Jan 20, 8:20 pm, Oh No wrote:
Thus spake "John (Liberty) Bell"


Oh No wrote:


I have been using flat space standard models with Omega_lambda=1-Omega
for standard fits,


OK, that is what Chalky employed for his initial tabulation, using the
default values on Ned's cosmic calculator. However, we have had to
retabulate all that because, as Ned correctly pointed out, the Riess et
al data is based on an assumed Ho of 63.8, not 71.


What value have you been using for Ho?


It actually makes no difference to these exercises, because H0 is
effectively floating.


This is incorrect. Ho cancels out for the observed data tabulation,
but does have an influence for the theoretical curves.
Ned explains this, in terms of changing optimised cosmological
parameters, for models based on EFE, at
http://www.astro.ucla.edu/~wright/sne_cosmology.html.


It is not incorrect, and you would do well to understand Ned's
explanation before you make such bald statements.


Since your response does not explain why you claim this, I guess we
will have to chalk this up to another example of dogmatism, as opposed
to scientific enlightenment.

Incidentally, you may want to look at the Wright reference again for
another reason. Ned has now included Gamma Ray Burst data which
extends the z range by a factor of ~ 4.


I have already seen the gamma ray burst papers. They are based on an
extremely suspect statistical analysis imv and I would caution anyone
against using them.


Again, you have not explained why the statistical analysis is suspect,
which suggests again that you are using this forum as a medium for
pontification, not as a medium for genuine scientific discussion.

Now, if you had said something a bit more scientifically
perceptive, say along the following lines, I might have considered
your comment more worthy of further objective consideration, and
discussion:

GRBs are highly directional (as were the transmissions from the
Pioneers). Consequently, if the observer is not precisely in line with
the direction of the burst, the observed brightness will be less than
if the observer was precisely in line.

However, you didn't say this.


Chalky
#269 - Oh No - February 1st 07, 10:06 AM - posted to sci.astro.research

Thus spake Chalky
On Jan 31, 6:18 pm, Oh No wrote:
Thus spake Chalky


This is incorrect. Ho cancels out for the observed data tabulation,
but does have an influence for the theoretical curves.
Ned explains this, in terms of changing optimised cosmological
parameters, for models based on EFE, at
http://www.astro.ucla.edu/~wright/sne_cosmology.html.


It is not incorrect, and you would do well to understand Ned's
explanation before you make such bald statements.


Since your response does not explain why you claim this, I guess we
will have to chalk this up to another example of dogmatism, as opposed
to scientific enlightenment.


I have previously explained it, but it is becoming clear there is little
point in giving you explanations. It is, in any case, common knowledge,
and you can easily find mention of it in for example the papers of Riess
or Astier which have been referenced in this thread, and by studying
what Ned Wright actually says, instead of yourself making dogmatic
statements about things which you don't understand.

Incidentally, you may want to look at the Wright reference again for
another reason. Ned has now included Gamma Ray Burst data which
extends the z range by a factor of ~ 4.


I have already seen the gamma ray burst papers. They are based on an
extremely suspect statistical analysis imv and I would caution anyone
against using them.


Again, you have not explained why the statistical analysis is suspect,
which suggests again that you are using this forum as a medium for
pontification, not as a medium for genuine scientific discussion.


I read the papers more than six months ago, and am not about to spend
several hours rereading them to give a precise critique, or attempt to
explain the unorthodox analysis on which the gamma ray figures are based
- that is itself contained in three papers which seem to use a circular
argument, adjusting redshifts to fit the law they are supposed to prove.
If you didn't like my very conventional analysis, you certainly wouldn't
be able to approve of this one.

Now, if you had said something a bit more more scientifically
perceptive, say along the following lines, I might have considered
your comment more worthy of further objective consideration, and
discussion:

GRBs are highly directional (as were the transmissions from the
Pioneers). Consequently, if the observer is not precisely in line with
the direction of the burst, the observed brightness will be less than
if the observer was precisely in line.

GRBs are not well understood.



Regards

--
Charles Francis
substitute charles for NotI to email
#270 - Chalky - February 1st 07, 10:06 AM - posted to sci.astro.research

On Jan 31, 5:18 pm, "Chalky" wrote:
On Jan 20, 8:20 pm, Oh No wrote:





Thus spake "John (Liberty) Bell"


Oh No wrote:


I have been using flat space standard models with Omega_lambda=1-Omega
for standard fits,


OK, that is what Chalky employed for his initial tabulation, using the
default values on Ned's cosmic calculator. However, we have had to
retabulate all that because, as Ned correctly pointed out, the Riess et
al data is based on an assumed Ho of 63.8, not 71.


What value have you been using for Ho?


It actually makes no difference to these exercises, because H0 is
effectively floating.


This is incorrect. Ho cancels out for the observed data tabulation,
but does have an influence for the theoretical curves.
Ned explains this, in terms of changing optimised cosmological
parameters, for models based on EFE, at http://www.astro.ucla.edu/~wright/sne_cosmology.html.
Similarly, it makes a difference in calculating the distance modulus
differences between Chalky's Law and the Milne (inertial) model.


Actually, this latter difference is only one or two bits in the 4th
significant figure, which seems to be the limit of accuracy for Ned's
calculator. Incidentally, this digital quantisation noise in the
(javascript) calculator was also the reason why my initial tabulation
appeared to indicate negative values at ultra-low z.

Such errors have since been corrected in the subsequent tabulations,
which have not yet been placed on the internet.

Chalky
 






Powered by vBulletin® Version 3.6.4
Copyright ©2000 - 2024, Jelsoft Enterprises Ltd.
Copyright ©2004-2024 SpaceBanter.com.
The comments are property of their posters.