A Space & astronomy forum. SpaceBanter.com


Cosmic acceleration rediscovered



 
 
#81  December 22nd 04, 02:32 PM  George Dishman


"greywolf42" wrote in message
.. .
George Dishman wrote in message
...

"Bjoern Feuerbacher" wrote in message
...
George Dishman wrote:
"greywolf42" wrote in message
.. .

... not counting the ad hoc modification for cosmic epochs.

greywolf conveniently ignores that that is not an "ad hoc
modification", but was in the *theoretical description* of
the redshift-distance relation right from the start.


I can only guess but I suspect he wasn't aware of the
formal definition initially


The "formal definition" of the Hubble constant was made by Eddington, in
the 1930s. It didn't have an H_0, IIRC.


Citation please. I am aware he showed the low z limit was
proportional from observation but I haven't seen his formal
statement of the law for other than the low-z case.

and now pretends that he
thinks it a later addition to avoid admitting his
argument is baseless.


To which "definition" were you referring? Citation please. Hubble
certainly didn't define it. To quote Lubin and Sandage (2001): "Hubble's
reticence, even as late as 1953, to accept the expansion as real is
explained as due to his use of equations for distances and absolute
magnitudes of redshifted galaxies that do not conform to the modern Mattig
equations of the standard model."

In short, the definition has changed since 1953.


In short you have a comprehension problem. The quote clearly
states that "even as late as 1953" he was not using "the
modern Mattig equations", hence if there was ever a change,
it was before 1953.

However, the Mattig equations relate distance to observables,
which is not the Hubble Law, so your quote is irrelevant anyway.


snip exponential equations - see my previous reply


Well, the only curves I have looked at are the redshift-
magnitude curves; so I can't rule out that the curve relating
distance and red shift is indeed exponential.


???? The citation was given earlier in the thread, back on November 10th.
You can look at the data yourself.

=================
For a quick reference, see Perlmutter, Figure 3, Physics Today, April
2003,
"Supernovae, Dark Energy, and the Accelerating Universe".
http://www.slac.stanford.edu/econf/C...perlmutter.pdf

Just notice that instead of "accelerating universe" and "decelerating
universe" (which require a linear assumption), one should read:
"exponential redshift-distance relation" and "inverse exponential
redshift-distance relation," respectively. ...


small snip - see below

But I have never
heard that (only from you two, especially from greywolf), and
such a result would surprise me, in light of the theory I know.


The object of science is to compare theory with observation. Yet, Bjoern
is well aware that the redshift-distance curve deviates from the old BB
predictions. ....


Bjoern is well aware that the redshift-magnitude curve deviates
from the simpler prediction but that isn't a redshift-distance
curve. The missing link is how tired light predicts the magnitude
will vary with distance.

snip from above

... Pure Hubble constant (linear assumption) lies on
the straight line.



However, you have since agreed that this claim of
linearity was wrong:

"greywolf42" wrote in message
.. .
George Dishman wrote in message
...

....
What is described as "dark energy" is the unknown
cause of the _deviation_ from the predicted _non-linearity_
which, in the conventional model,


Yes, we agree.





Magnitude is more difficult. It would presumably
follow from simple inverse square loss


Tired light falls off below the inverse square loss.


What causes the additional reduction and what is
the formula for it? Without that, you cannot claim
that tired light predicts the Perlmutter results
since the distance is inferred from the magnitude.

but that would fail the Tolman test.


Not surprising, since the TT is purely theoretical, and back-calculated
from BB theory.


It deviates from the inverse square law due to time
dilation and the headlight effect, both of which are
predicted by relativity, not derived from cosmology.
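For reference, the standard statement of the Tolman test George is alluding to can be sketched numerically (an editorial illustration, not quoted from the posts; the (1+z)^-4 and (1+z)^-1 scalings are the textbook expanding-space and static tired-light cases):

```python
# Tolman surface-brightness test, in outline: in an expanding universe,
# bolometric surface brightness dims as (1+z)^-4 (one factor for photon
# energy loss, one for arrival-rate time dilation, two from angular-size
# effects), while a simple tired-light model in static space keeps only
# the energy-loss factor, (1+z)^-1.

def sb_dimming_expanding(z):
    """Bolometric surface-brightness factor, expanding universe."""
    return (1.0 + z) ** -4

def sb_dimming_tired_light(z):
    """Factor for tired light in static space: energy loss only."""
    return (1.0 + z) ** -1

z = 1.0
print(sb_dimming_expanding(z))    # 0.0625
print(sb_dimming_tired_light(z))  # 0.5
```

At z = 1 the two predictions already differ by a factor of 8, which is why the test is considered decisive in principle despite the evolution-correction caveats discussed below.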

snip background
The hand-wave solution is:
"The solution is that we also have an observed quantity for which we need
assume nothing about the cosmology in order to determine it. This is the
observed surface brightness, obtained from equation (4) using only the
observed angular radius and the observed apparent magnitude. This observed
quantity contains almost all of the Tolman signal with only a slight
dependence on M and linear R.


Nothing hand-waving about it. You do the test on an
observable value in which the cosmological factors
cancel out.

As others have said:
"The problem with performing the test is however that we have to use
galaxies at large distances, which are affected by evolutionary effects
for which we have limited information." Can anyone say "adjustable
parameters?"


Perlmutter's papers on SNe also place upper limits on
the effect of evolution.

Some other mechanism
is needed for a Tired Light theory that corrects
the magnitude, such as intergalactic dust.


Extinction is usually an adjustable parameter. The amount of extinction
is a very difficult problem, and can be made to fit almost any desired
theory.


It is hard to get away from an exponential form that
depends only on the density of the absorbing material.
However, you claim tired light predicts the SNe data
so what value and equation for extinction were used
in that prediction?

However, to get enough absorption, the mean
density can reach high values depending on the
substance proposed. The subject can get very
complex so I have generally avoided discussing it
as without detailed proposals on the material and
considerable analysis I doubt we could reach any
conclusions. It's not really necessary anyway as
the macroscopic properties can be used to
disprove the versions I've seen.


Well, earlier, you claimed that any version with "exponential energy
removal" was disproved.


Still repeating the same old fiction I see.

snip
Aside from punting significant aspects of history; L&S once again trot out
an old myth (popularized by Misner, Thorne, and Wheeler; but never
quantified), that "As critics still point out, any scattering process with
energy transfer from the photon beam to the scattering medium, as required
for a redshift, must broaden (deflect) the beam. This effect would cause
images of distant galaxies to be fuzzier than their local counterparts,
which they are not." The claim of "fuzziness" requires photons to scatter
off of interstellar matter like little bowling balls.


Yes, that's why they said "any SCATTERING process ...".

It is not based on a
single tired light assumption ... anywhere.


It applies to those that used scattering to redden the
light.

George


#82  December 22nd 04, 05:42 PM  George Dishman

[This may not thread correctly, my server rejected
a reply as the references line was too long.]

"Bjoern Feuerbacher" wrote in message
...

Then why did you never ask him why he keeps talking about
a "detected exponential curve"?


I have had many conversations with people putting
up unconventional theories. Usually I will point
out some flaw where observation rules out what they
propose and there will be some discussion of that
until they start to realise they are having
difficulty finding a flaw in my argument. Once that
happens, in my experience, they start trying to
change the subject. The smarter ones will often
drop in a throw-away line that looks innocuous but
if you pick up on it, they make it the main topic
and quietly snip any discussion of the data that
falsifies their theory.

His comment that the exponential is observed strikes
me as such an attempt to deflect the conversation so
I did not intend to take the bait.

Since you asked, I have brought up the point in my
latest post so maybe he will address it, but I really
want to stick to seeing whether he can identify any
cosmological model based on tired light that can
explain the frequency spectrum of the CMBR.

I really wonder if there are some severe reading comprehension
problems on his side, or if he does do that fully consciously,
for trolling...


I have found frequently with cranks and trolls that
they have so convinced themselves of their case that
they will read web pages, books and posts to mean
what they expect you to say without making much
attempt to actually understand the text. His reading
of the Ned Wright graph I cited is a case in point.
He assumed it was talking of a distant source and
therefore not relevant when, if he had looked and
considered carefully, he should have realised it was
talking of a local source.

Now that might be just carelessness or it might be
a deliberate ploy to try to discredit the argument,
but for the real cranks I know it is a self-imposed
blindness, they cannot allow such an idea to form
in their minds unless they already have a way to
rationalise it away. It is fascinating to take one
through all the steps needed to disprove their theory
without giving the game away and get them to agree
each step, then put them together at the end and show
how the combination rules out their idea. Suddenly
things that were obvious and agreed become
unacceptable as their minds rebel against the logic.

I don't think greywolf is that sort of person at all,
but it will be interesting to see what objections he
raises to my points.

best regards
George





#83  December 22nd 04, 06:33 PM  Bjoern Feuerbacher

George Dishman wrote:
"greywolf42" wrote in message
.. .

George Dishman wrote in message
...

"Bjoern Feuerbacher" wrote in message
...


[snip]


Well, the only curves I have looked at are the redshift-
magnitude curves; so I can't rule out that the curve relating
distance and red shift is indeed exponential.


???? The citation was given earlier in the thread, back on November 10th.
You can look at the data yourself.

=================
For a quick reference, see Perlmutter, Figure 3, Physics Today, April
2003,
"Supernovae, Dark Energy, and the Accelerating Universe".
http://www.slac.stanford.edu/econf/C...perlmutter.pdf

Just notice that instead of "accelerating universe" and "decelerating
universe" (which require a linear assumption), one should read:
"exponential redshift-distance relation" and "inverse exponential
redshift-distance relation," respectively. ...


Figure 3 is a picture of SN 1998ba. I can only guess that greywolf
means figure 1. But even in that figure, I fail to see an
exponential relationship between redshift and distance.

So I *still* wonder where he gets the claim from that the
detected curve is "exponential".


[snip]


... Pure Hubble constant (linear assumption) lies on
the straight line.


Straight line in which figure? Fig. 1 is *not* showing the
relationship between redshift and distance!

BTW, the term "pure Hubble constant" makes little sense. In *no* version
of the BBT was the Hubble parameter *ever* assumed to be constant *in
time*!



Aside from punting significant aspects of history; L&S once again trot out
an old myth (popularized by Misner, Thorne, and Wheeler; but never
quantified), that "As critics still point out, any scattering process with
energy transfer from the photon beam to the scattering medium, as required
for a redshift, must broaden (deflect) the beam. This effect would cause
images of distant galaxies to be fuzzier than their local counterparts,
which they are not." The claim of "fuzziness" requires photons to scatter
off of interstellar matter like little bowling balls.


The last claim is simply wrong. Nowhere in the calculation are
the photons treated as "little bowling balls". One uses the known
relations for their energies and momenta (E = pc, following from
Maxwell's equations, and de Broglie's formulas).


[snip]

Bye,
Bjoern
#84  December 22nd 04, 06:37 PM  Bjoern Feuerbacher

George Dishman wrote:
[This may not thread correctly, my server rejected
a reply as the references line was too long.]

"Bjoern Feuerbacher" wrote in message
...

Then why did you never ask him why he keeps talking about
a "detected exponential curve"?



I have had many conversations with people putting
up unconventional theories. Usually I will point
out some flaw where observation rules out what they
propose and there will be some discussion of that
until they start to realise they are having
difficulty finding a flaw in my argument. Once that
happens, in my experience, they start trying to
change the subject. The smarter ones will often
drop in a throw-away line that looks innocuous but
if you pick up on it, they make it the main topic
and quietly snip any discussion of the data that
falsifies their theory.

His comment that the exponential is observed strikes
me as such an attempt to deflect the conversation so
I did not intend to take the bait.


Didn't he make that claim right from the start? So how
can it be an attempt to distract?


Since you asked, I have brought up the point in my
latest post so maybe he will address it, but I really
want to stick to seeing whether he can identify any
cosmological model based on tired light that can
explain the frequency spectrum of the CMBR.


It's really hard to get anything quantitative out of him...


I really wonder if there are some severe reading comprehension
problems on his side, or if he does do that fully consciously,
for trolling...



I have found frequently with cranks and trolls that
they have so convinced themselves of their case that
they will read web pages, books and posts to mean
what they expect you to say without making much
attempt to actually understand the text. His reading
of the Ned Wright graph I cited is a case in point.
He assumed it was talking of a distant source and
therefore not relevant when, if he had looked and
considered carefully, he should have realised it was
talking of a local source.

Now that might be just carelessness or it might be
a deliberate ploy to try to discredit the argument,
but for the real cranks I know it is a self-imposed
blindness, they cannot allow such an idea to form
in their minds unless they already have a way to
rationalise it away. It is fascinating to take one
through all the steps needed to disprove their theory
without giving the game away and get them to agree
each step, then put them together at the end and show
how the combination rules out their idea. Suddenly
things that were obvious and agreed become
unacceptable as their minds rebel against the logic.


Sounds like Morton's demon:
http://www.talkorigins.org/origins/postmonth/feb02.html
(be prepared that greywolf will now cry that I use ad hominems
by comparing him to creationists...)


[snip]


Bye,
Bjoern

#85  December 22nd 04, 08:22 PM  George Dishman


"Bjoern Feuerbacher" wrote in message
...
George Dishman wrote:
[This may not thread correctly, my server rejected
a reply as the references line was too long.]

"Bjoern Feuerbacher" wrote in message
...

Then why did you never ask him why he keeps talking about
a "detected exponential curve"?



I have had many conversations with people putting
up unconventional theories. Usually I will point
out some flaw where observation rules out what they
propose and there will be some discussion of that
until they start to realise they are having
difficulty finding a flaw in my argument. Once that
happens, in my experience, they start trying to
change the subject. The smarter ones will often
drop in a throw-away line that looks innocuous but
if you pick up on it, they make it the main topic
and quietly snip any discussion of the data that
falsifies their theory.

His comment that the exponential is observed strikes
me as such an attempt to deflect the conversation so
I did not intend to take the bait.


Didn't he make that claim right from the start? So how
can it be an attempt to distract?


Actually you are right, he did, and mentioned it
again later after the thread had drifted. I really
haven't decided whether he is a troll or whether
it just appears that way because of his debating
style. The test for me is if he is willing to lay
aside those aspects and really look at the physics.

I have found frequently with cranks and trolls that
they have so convinced themselves of their case that
they will read web pages, books and posts to mean
what they expect you to say without making much
attempt to actually understand the text. ...


snip

Sounds like Morton's demon:
http://www.talkorigins.org/origins/postmonth/feb02.html


I haven't seen that before, it's perfect, exactly
the behaviour I was describing, thanks.

(be prepared that greywolf will now cry that I use ad hominems
by comparing him to creationists...)


I could understand he might, but was really thinking
of Gerald Kelleher and Aladar Stolmar and a few
others. I haven't decided about greywolf yet. He
might genuinely not have understood Wright's argument
so I give him the benefit of the doubt on principle
so far. Time will tell if he is willing to really
look at the physics instead of trying to win a
debating contest. You never know, he might just be
able to come up with a model that fits the data.

George


#86  December 23rd 04, 11:01 PM  greywolf42

George Dishman wrote in message
...

"greywolf42" wrote in message
.. .
George Dishman wrote in message
...


I'm out of time tonight so I'll just answer one point
that relates to my other post:

the observation
that z is independent of frequency means that
photons must lose the same fraction of their
energy in travelling a given distance or

dE/E = -dr/R


Yep, that is the fundamental requirement in TL theory. Since Olbers
first came up with the idea, over a century ago.

where R is a characteristic length related to
H_0.


Oopsy. No. There is no relationship whatsoever between H_0 and tired
light extinction distance. There *IS* no H_0 (or H) in tired light
theory.


H_0 has the empirical value 71km/s per MPc which
means dE/E is about 0.024% per MPc hence R is
about 4.2GPc.


Ah! Now I see where you are going. I think you are approaching this a bit
sideways, but we should be able to use this approach.

However, my objection above to your use of H_0, was because H_0 is a
theoretical value of the BB theory, not an empirical relation. (What is
actually measured is *redshift* versus distance -- not speed per distance.)
But since you are then working with dE/E, we don't need to worry about the
difference, here.

Integrate and you get the exponential form
for frequency f versus distance r:

f = f_0 * e^(-r/R)


Yes, that is an exponential result.


And every photon loses about 63% of its energy
every 4.2GPc, true?


The current value of the slope of H_0 includes some specific BB assumptions.
However, for the purposes of this exercise, I will accept your values are in
the ballpark (roughly a factor of 2, if I converted correctly).
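As a quick numerical check of the figures George quotes (a sketch assuming H_0 = 71 km/s/Mpc and the standard conversion R = c/H_0; an editorial illustration, not from the posts):

```python
import math

c = 299792.458          # speed of light, km/s
H0 = 71.0               # Hubble constant, km/s per Mpc

R_mpc = c / H0          # characteristic length R, in Mpc
frac_per_mpc = H0 / c   # fractional energy loss dE/E per Mpc

print(round(R_mpc / 1000.0, 1))      # 4.2  (i.e. R is about 4.2 Gpc)
print(round(100 * frac_per_mpc, 3))  # 0.024  (% energy lost per Mpc)
print(round(1 - math.exp(-1), 2))    # 0.63  (fraction lost after one R)
```

This reproduces all three numbers in the exchange above: R near 4.2 Gpc, a loss of about 0.024% per Mpc, and roughly 63% of the energy gone after one characteristic length.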

Sidenote: LeSagians and tired light types usually use the variable mu;
which may be calculated from material/aether properties (EM and
gravitational). We typically don't use the resulting characteristic
distance, R (which is back-calculated, or ad hoc). R and mu are inversely
related.

--
greywolf42
ubi dubium ibi libertas
{remove planet for return e-mail}



#87  December 23rd 04, 11:01 PM  greywolf42

George Dishman wrote in message
...

"greywolf42" wrote in message
.. .
George Dishman wrote in message
...


{snip higher levels}

I can only guess but I suspect he wasn't aware of the
formal definition initially


The "formal definition" of the Hubble constant was made by Eddington, in
the 1930s. It didn't have an H_0, IIRC.


Citation please.


Certainly. I don't own a copy of Eddington's book. So, I'll provide this
excerpt from "Einstein and the History of Relativity", Howard & Stachel ed.
"Cosmology from 1917 to 1960", by George Ellis. Section 2.6.2 begins:

"REDSHIFT-DISTANCE RELATION. Meanwhile observations to consolidate the data
on the Hubble diagram, confirm the linear velocity-distance relation, and
determine the slope of this relation ("the Hubble constant') proceeded
apace. The linear relation was criticized by Shapley (1929), but confirmed
by de Sitter (1931b). The data were extended by Hubble and Humason (1931),
using a chain of distance indicators and observations extending the redshift
range by a factor of 5 to 32 x 10^6 pc. They determined a value of H = 558
km/sec/Mpc for the Hubble constant. ..."

"By 1936, Hubble had accumulated considerable data, summarized in his superb
book THE REALM OF THE NEBULAE, determining a value of 526 km/sec/Mpc for the
Hubble constant. However, in all this work the interpretation of the
observed relation was still uncertain; the velocities presented were
regarded as APPARENT VELOCITIES (see Section 2.7). ..."

{Italics in original as all capitals, above.}

I should also provide summation of the "present day" (1986) understanding of
the BB model, from the same source (section titled "preliminaries"):

"The present-day state of such a universe is characterized by three
parameters: the Hubble constant H_0 = (R'/R)_0, the deceleration parameter
q_0 = -(R''/R)_0 (H_0)^-2, and the present total density parameter rho_0
(which may represent contributions from various matter components); these
quantities being related to Gamma by
q_0 = (kappa rho_0 c^2 / 2 - Gamma) / 3 H_0^2,
where kappa is the gravitational constant and c the speed of light."

Note the slight difference in terminology between H and H_0 in the different
quotes above; by the same author (Ellis). Both are claimed to be the Hubble
constant. "H" is apparently not Hubble's variable choice.

I am aware he showed the low z limit was
proportional from observation but I haven't seen his formal
statement of the law for other than the low-z case.


Eddington didn't *have* one for other than what we call the 'low-z case'
today. That's the point.

and now pretends that he
thinks it a later addition to avoid admitting his
argument is baseless.


To which "definition" were you referring? Citation please.


OK, I've provided a citation last round, and this round.

Where is yours?

Hubble
certainly didn't define it. To quote Lubin and Sandage (2001):
"Hubble's reticence, even as late as 1953, to accept the expansion as
real is explained as due to his use of equations for distances and
absolute magnitudes of redshifted galaxies that do not conform
to the modern Mattig equations of the standard model."

In short, the definition has changed since 1953.


In short you have a comprehension problem. The quote clearly
states that "even as late as 1953" he was not using "the
modern Mattig equations", hence if there was ever a change,
it was before 1953.
However, the Mattig equations relate distance to observables,
which is not the Hubble Law, so your quote is irrelevant anyway.


I await your citation on when the change occurred.

snip exponential equations - see my previous reply


Well, the only curves I have looked at are the redshift-
magnitude curves; so I can't rule out that the curve relating
distance and red shift is indeed exponential.


???? The citation was given earlier in the thread, back on November
10th. You can look at the data yourself.

=================
For a quick reference, see Perlmutter, Figure 3, Physics Today, April
2003,
"Supernovae, Dark Energy, and the Accelerating Universe".
http://www.slac.stanford.edu/econf/C...perlmutter.pdf

Just notice that instead of "accelerating universe" and "decelerating
universe" (which require a linear assumption), one should read:
"exponential redshift-distance relation" and "inverse exponential
redshift-distance relation," respectively. ...


small snip - see below


But I have never
heard that (only from you two, especially from greywolf), and
such a result would surprise me, in light of the theory I know.


The object of science is to compare theory with observation. Yet,
Bjoern is well aware that the redshift-distance curve deviates away
from the old BB predictions. ....


Bjoern is well aware that the redshift-magnitude curve deviates
from the simpler prediction but that isn't a redshift-distance
curve.


???? It is just as much a redshift-distance curve as Hubble's initial
straight-line (and all other, subsequent lines/curves). The luminosity
(magnitude) is related to distance in standard-candle fashion.

The missing link is how tired light predicts the magnitude
will vary with distance.


Already provided, numerous times. Why do you say it is "missing?"

snip from above

... Pure Hubble constant (linear assumption) lies on
the straight line.


However, you have since agreed that this claim of
linearity was wrong:


Nope.

"greywolf42" wrote in message
.. .
George Dishman wrote in message
...

...
What is described as "dark energy" is the unknown
cause of the _deviation_ from the predicted _non-linearity_
which, in the conventional model,


Yes, we agree.


I was agreeing that dark energy was a deviation from the "conventional" BB
model. Not an agreement that the standard model was non-linear.

Magnitude is more difficult. It would presumably
follow from simple inverse square loss


Tired light falls off below the inverse square loss.


What causes the additional reduction and what is
the formula for it?


Already given, numerous times in this thread.

The cause is fractional removal of energy with incremental distance (the
physical cause of the fractional removal varies with the theory). The
resulting formula is I = I_0 exp (-mu x).
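The combined falloff under discussion, inverse-square geometric dilution times the exponential attenuation I = I_0 exp(-mu x), can be sketched as follows (the luminosity, distance, and mu values below are placeholders for illustration, not values taken from the thread):

```python
import math

def apparent_flux(L, r_mpc, mu_per_mpc):
    """Flux from a source of luminosity L at distance r (in Mpc):
    inverse-square geometric dilution times exponential energy loss."""
    geometric = L / (4.0 * math.pi * r_mpc ** 2)
    attenuation = math.exp(-mu_per_mpc * r_mpc)
    return geometric * attenuation

# With mu > 0 the flux falls below the pure inverse-square value, so a
# standard candle looks dimmer (hence "farther") than 1/r^2 alone implies.
L, r, mu = 1.0, 100.0, 2.4e-4   # placeholder luminosity, distance, mu
pure = L / (4.0 * math.pi * r ** 2)
tired = apparent_flux(L, r, mu)
print(tired < pure)   # True
```

The deficit grows with distance, which is the "characteristic exponential fashion" in which a tired-light redshift-luminosity plot would peel away from a straight line.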

Without that, you cannot claim
that tired light predicts the Perlmutter results
since the distance is inferred from the magnitude.


The distance is inferred from a pure 1/r^2 assumption in Perlmutter -- using
the SN as standard candles. Redshift is (in BB theory) used to calculate
distances. According to tired light theory, the plot of redshift (apparent
distance) and luminosity will deviate from a straight line in the
characteristic exponential fashion.

but that would fail the Tolman test.


Not surprising, since the TT is purely theoretical, and back-calculated
from BB theory.


It deviates from the inverse square law due to time
dilation and the headlight effect, both of which are
predicted by relativity, not derived from cosmology.


That *IS* part of BB theory. But again, the TT will have to await time for
me to hit the library.

snip background


The hand-wave solution is:
"The solution is that we also have an observed quantity for which we
need assume nothing about the cosmology in order to determine it.
This is the observed surface brightness, obtained from equation (4)
using only the observed angular radius and the observed apparent
magnitude. This observed quantity contains almost all of the
Tolman signal with only a slight dependence on M and linear R.


Nothing hand-waving about it. You do the test on an
observable value in which the cosmological factors
cancel out.

As others have said:
"The problem with performing the test is however that we have to use
galaxies at large distances, which are affected by evolutionary effects
for which we have limited information." Can anyone say "adjustable
parameters?"


Perlmutter's papers on SNe also place upper limits on
the effect of evolution.


That's good. But irrelevant after adjusting all the parameters to match the
BB.

Some other mechanism
is needed for a Tired Light theory that corrects
the magnitude, such as intergalactic dust.


Extinction is usually an adjustable parameter. The amount of extinction
is a very difficult problem, and can be made to fit almost any desired
theory.


It is hard to get away from an exponential form that
depends only on the density of the absorbing material.


Why would you want to get away from it?

However, you claim tired light predicts the SNe data
so what value and equation for extinction were used
in that prediction?


As noted numerous times prior to this. Many TL theories only predicted the
form of the curve (the direction and observed apparent beginnings of an
exponential deviation). All use the I = I_0 exp(-mu x) equation. Only a
few attempted to determine mu prior to the SN data.

However, to get enough absorption, the mean
density can reach high values depending on the
substance proposed. The subject can get very
complex so I have generally avoided discussing it
as without detailed proposals on the material and
considerable analysis I doubt we could reach any
conclusions. It's not really necessary anyway as
the macroscopic properties can be used to
disprove the versions I've seen.


Well, earlier, you claimed that any version with "exponential energy
removal" was disproved.


Still repeating the same old fiction I see.


Well, in your view, *DO* the "macroscopic properties" disprove any version
with exponential energy removal? (Regardless of whether you made this claim
earlier, or not.)

snip


Aside from punting significant aspects of history; L&S once again trot
out an old myth (popularized by Misner, Thorne, and Wheeler; but
never quantified), that "As critics still point out, any scattering
process with energy transfer from the photon beam to the scattering
medium, as required for a redshift, must broaden (deflect) the beam.
This effect would cause images of distant galaxies to be fuzzier than
their local counterparts, which they are not." The claim of "fuzziness"
requires photons to scatter off of interstellar matter like little
bowling balls.


Yes, that's why they said "any SCATTERING process ...".

It is not based on a
single tired light assumption ... anywhere.


It applies to those that used scattering to redden the
light.


But *NO* tired light theory uses scattering! A scattering theory would not
be called "tired light". Tired light theories are called that because the
light *itself* (photon or wave) loses energy *intrinsically.* Scattering
processes are normal extinction.

The only people that even CALL photon scattering a "tired light" theory are
people like Ned Wright -- who only use the claim as a strawman.

--
greywolf42
ubi dubium ibi libertas
{remove planet for return e-mail}



#88  December 24th 04, 08:47 AM  George Dishman


"greywolf42" wrote in message
...
George Dishman wrote in message
...

snip
H_0 has the empirical value 71km/s per MPc which
means dE/E is about 0.024% per MPc hence R is
about 4.2GPc.


Ah! Now I see where you are going. I think you are approaching this a bit
sideways, but we should be able use this approach.

However, my objection above to your use of H_0, was because H_0 is a
theoretical value of the BB theory, not an empirical relation.


The value isn't theoretical at all, in the sense
that it could not be predicted from theory alone.

(What is
actually measured is *redshift* versus distance -- not speed per
distance.)


Exactly. Having made the measurement of redshift
versus distance, in a model that explains the
redshift by expansion or motion, the measurements
can most meaningfully be expressed as speed per
distance while in a tired light interpretation
the same information makes more sense in the form
of a characteristic distance, or mu if you like,
but it is easy to switch between those units.
That's why I said the values were "related".

But since you are then working with dE/E, we don't need to worry about the
difference, here.

snip
And every photon loses about 63% of its energy
every 4.2GPc, true?


The current value of the slope of H_0 includes some specific BB
assumptions.


The best current value uses the angular power
spectrum measured by WMAP and that method I agree
is likely only to be applicable in a BB model.
However, the older technique of measurements of
nearby sources for which distances can be found
from the distance ladder using parallax, Cepheids
and so on is equally valid for determining the
constant in tired light models. The difference
is that the uncertainty will be higher.

However, for the purposes of this exercise, I will accept your values are
in
the ballpark (roughly a factor of 2, if I converted correctly).


That's another test you can apply to a tired light
model. A factor of 2 is probably about as much as
the uncertainty would allow but it would be hard
to say without looking at the detail of the
determination so I'm happy to accept that for the
moment.

Sidenote: LeSagians and tired light types usually use the variable mu;
which may be calculated from material/aether properties (EM and
gravitational). We typically don't use the resulting characteristic
distance, R (which is back-calculated, or ad hoc). R and mu are inversely
related.


Your last two sentences appear to be in conflict.
If mu can be derived from the theory, then just
take the inverse and you have a theoretical value
for R. That can then be compared to the observed
value described above as a test of the theory.
Neither value is ad hoc.
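[Editor's note: the unit conversions quoted earlier in this exchange (H_0 = 71 km/s/Mpc giving dE/E of about 0.024% per Mpc and R of about 4.2 GPc) are easy to check. A minimal sketch, assuming only the tired-light reading mu = H_0/c and R = 1/mu:]

```python
import math

c = 299792.458          # speed of light, km/s
H0 = 71.0               # Hubble constant, km/s per Mpc

mu = H0 / c             # fractional energy loss per Mpc
R = 1.0 / mu            # characteristic distance, Mpc

print(f"dE/E per Mpc: {100 * mu:.3f}%")     # about 0.024% per Mpc
print(f"R = {R / 1000:.1f} GPc")            # about 4.2 GPc

# fraction of energy lost after travelling one distance R:
lost = 1.0 - math.exp(-1.0)
print(f"loss over one R: {100 * lost:.0f}%")  # about 63%
```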

George



  #89  
Old December 24th 04, 01:22 PM
George Dishman
external usenet poster
 
Posts: n/a
Default


"greywolf42" wrote in message
...
George Dishman wrote in message
...

"greywolf42" wrote in message
.. .
George Dishman wrote in message
...


{snip higher levels}

I can only guess but I suspect he wasn't aware of the
formal definition initially

The "formal definition" of the Hubble constant was made by Eddington,
in
the 1930s. It didn't have an H_0, IIRC.


Citation please.


Certainly. I don't own a copy of Eddington's book.


Neither do I, and I haven't found his paper on-line.

So, I'll provide this
excerpt from "Einstein and the History of Relativity", Howard & Stachel
ed.
"Cosmology from 1917 to 1960", by George Ellis. Section 2.6.2 begins:

"REDSHIFT-DISTANCE RELATION. Meanwhile observations to consolidate the
data
on the Hubble diagram, confirm the linear velocity-distance relation, and
determine the slope of this relation ("the Hubble constant") proceeded
apace. The linear relation was criticized by Shapley (1929), but
confirmed
by de Sitter (1931b). The data were extended by Hubble and Humason
(1931),
using a chain of distance indicators and observations extending the
redshift
range by a factor of 5 to 32 x 10^6 pc. They determined a value of H =
558
km/sec/Mpc for the Hubble constant. ..."

"By 1936, Hubble had accumulated considerable data, summarized in his
superb
book THE REALM OF THE NEBULAE, determining a value of 526 km/sec/Mpc
for the Hubble constant. However, in all this work the interpretation of the
observed relation was still uncertain; the velocities presented were
regarded as APPARENT VELOCITIES (see Section 2.7). ..."

{Italics in original as all capitals, above.}


I think it is customary to use /obliques/ to indicate
italics but it's less common than say asterisks for bold.

I should also provide a summary of the "present day" (1986) understanding
of the BB model, from the same source (section titled "preliminaries"):

"The present-day state of such a universe is characterized by three
parameters: the Hubble constant H_0 = (R'/R)_0, the deceleration parameter
q_0 = -(R''/R)_0 (H_0)^-2, and the present total density parameter rho_0
(which may represent contributions from various matter components); these
quantities being related to Lambda by
q_0 = (kappa rho_0 c^2 / 2 - Lambda) / 3 H_0^2,
where kappa is the gravitational constant and c the speed of light."

Note the slight difference in terminology between H and H_0 in the
different
quotes above; by the same author (Ellis). Both are claimed to be the
Hubble
constant. "H" is apparently not Hubble's variable choice.


The key phrase that illustrates my point is "apparent
velocities". When Hubble first published, he was noting
an entirely empirical relationship. In a steady state
universe, one might expect the redshift from galaxies
to be random regardless of distance and solely due to
gravitational redshift. Galaxies heavier than ours would
show a redshift while those lighter would be blueshifted.
Any equation can be approximated as a power series;
his data was too scattered to determine the second-order
coefficient but good enough to establish a non-zero
first-order term. That alone was worthy of publication.

To go beyond the observation would mean making an
assumption about the cause and something other than
speed would extrapolate differently.

A naive extrapolation would be that the speed of a
galaxy remained constant over time. With that model
separations would be zero at some finite time in the
past. At half that time, all the galaxies would have
the same speed as at present but be at half their
present distance. The Hubble constant would then have
twice the present value. However, if you consider the
effect of gravity, the speeds in the past would be
higher, meaning the Hubble constant would be even
higher. That means that Hubble would have been aware
that the value he published would vary with time.
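[Editor's note: the naive extrapolation above can be put in numbers. With constant galaxy speeds, a galaxy now at distance d = v*t gives H = v/d = 1/t, so at half the present age the Hubble constant was twice its present value. A sketch (the 14 Gyr age is illustrative, not from the thread):]

```python
# Naive constant-speed extrapolation: each galaxy has moved d = v * t
# since separations were zero, so H(t) = v / d = 1 / t regardless of v.
t_now = 14.0                      # illustrative age of the universe, Gyr

def hubble(t_gyr):
    """Hubble parameter in 1/Gyr under the constant-speed model."""
    return 1.0 / t_gyr

ratio = hubble(t_now / 2) / hubble(t_now)
print(ratio)   # 2.0: at half the present age, H was twice as large
```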

Ellis's use of H in the initial text, relating to Hubble's
determination of the first-order coefficient, but H_0
when discussing the equations that involve the time
dependence, is entirely understandable.

I am aware he showed the low z limit was
proportional from observation but I haven't seen his formal
statement of the law for other than the low-z case.


Eddington didn't *have* one for other than what we call the 'low-z case'
today. That's the point.


That's my point too. Hubble's data covered such a
small range of time that treating it as a constant
was legitimate, but trying to suggest higher order
coefficients could be determined from data with
such a large spread would not have survived peer
review.

If you wanted to extrapolate from the low-z data to
a general equation based on expansion, then you would
use GR and in that H would be time dependent. I don't
know who first did that. Certainly your previous
citation shows that Mattig did so in the 1950's, but
I expect it was done long before that. I don't believe
anyone ever published a version in which H was
independent of time before Guth suggested inflation
as a solution for the problems with the CMBR.

and now pretends that he
thinks it a later addition to avoid admitting his
argument is baseless.

To which "definition" were you referring? Citation please.


OK, I've provided a citation last round, and this round.

Where is yours?

snip
However, the Mattig equations relate distance to observables
which is not the Hubble Law so your quote is irrelevant anyway.


I await your citation on when the change occurred.


My argument is that there never was a change.
Even assuming constant speed and a linear relation
between speed and distance, the Hubble constant
varies with time. I can't offer you a citation to
prove that nothing suggesting otherwise was
published. I guess one approach might be to look at
Alan Guth's paper to see if he discusses any prior
work on exponentially increasing speed (which is
what a non-time varying Hubble Constant implies).

snip
The object of science is to compare theory with observation. Yet,
Bjoern is well aware that the redshift-distance curve deviates away
from the old BB predictions. ....


Bjoern is well aware that the redshift-magnitude curve deviates
from the simpler prediction but that isn't a redshift-distance
curve.


???? It is just as much a redshift-distance curve as Hubble's initial
straight-line (and all other, subsequent lines/curves). The luminosity
(magnitude) is related to distance in standard-candle fashion.

The missing link is how tired light predicts the magnitude
will vary with distance.


Already provided, numerous times. Why do you say it is "missing?"


If you are saying it is just inverse square and the energy
reduction due to red-shift then you have said that before
but I thought you were also saying there was a reduction
due to extinction and you haven't given details of that.

snip from above

... Pure Hubble constant (linear assumption) lies on
the straight line.


However, you have since agreed that this claim of
linearity was wrong:


Nope.

"greywolf42" wrote in message
.. .
George Dishman wrote in message
...

...
What is described as "dark energy" is the unknown
cause of the _deviation_ from the predicted _non-linearity_
which, in the conventional model,

Yes, we agree.


I was agreeing that dark energy was a deviation from the "conventional" BB
model. Not an agreement that the standard model was non-linear.


I underlined _non-linearity_ specifically to draw your
attention to that part of my post. Oh well, perhaps
my comments above will help explain it. Even the
simplest model of constant speed for individual
galaxies means time variation of the Hubble constant
and a non-linear relation between red-shift and
distance.

snip
The cause is fractional removal of energy with incremental distance (the
physical cause of the fractional removal varies with the theory). The
resulting formula is I = I_0 exp (-mu x).


OK, I thought you relied on a degree of extinction as
well.
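[Editor's note: the exponential law quoted above, I = I_0 exp(-mu x), fixes the redshift-distance relation for this kind of tired light: since photon energy scales inversely with wavelength, 1 + z = exp(mu x), which is linear (z roughly mu*x) only at small distances. A sketch, taking mu as 1/(4.2 GPc) from earlier in the thread:]

```python
# Tired-light redshift from exponential energy loss E(x) = E0*exp(-mu*x):
# the observed wavelength stretches by exp(mu*x), so 1 + z = exp(mu*x).
import math

mu = 1.0 / 4200.0                 # per Mpc, i.e. R ~ 4.2 GPc

def redshift(x_mpc):
    return math.exp(mu * x_mpc) - 1.0

print(round(redshift(100.0), 4))  # nearby: close to mu*x = 0.0238
print(round(redshift(4200.0), 3)) # at x = R: e - 1, about 1.718
```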

snip more on this - my mistake

snip Tolman Test. You might like to find out about it
but I don't want to be guilty of what I discussed with
Bjoern, changing the subject before we sort out the
details. Note the Tolman Test is very difficult and not
very strong

Well, in your view, *DO* the "macroscopic properties" disprove any version
with exponential energy removal? (Regardless of whether you made this
claim earlier, or not.)


In my view, it falsifies Lerner's "intergalactic fog" as
the source of the CMBR, it probably falsifies TVF's thermal
equilibrium with a corpuscular aether ("Elyson particles")
though that is perhaps easier to falsify by observing red
shift at frequencies below the CMBR peak, and it falsifies
Aladar's integrated starlight.

More to the point, I believe it could falsify your aether
defect theory if you were to discuss it in sufficient
detail to allow a quantitative analysis.

Aside from punting significant aspects of history; L&S once again trot
out an old myth (popularized by Misner, Thorne, and Wheeler; but
never quantified), that "As critics still point out, any scattering
process with energy transfer from the photon beam to the scattering
medium, as required for a redshift, must broaden (deflect) the beam.
This effect would cause images of distant galaxies to be fuzzier than
their local counterparts, which they are not." The claim of
"fuzziness"
requires photons to scatter off of interstellar matter like little
bowling balls.


Yes, that's why they said "any SCATTERING process ...".

It is not based on a
single tired light assumption ... anywhere.


It applies to those that used scattering to redden the
light.


But *NO* tired light theory uses scattering! A scattering theory would
not
be called "tired light". Tired light theories are called that because the
light *itself* (photon or wave) loses energy *intrinsically.* Scattering
processes are normal extinction.


I disagree but it is perhaps just terminology. Extinction
is discussed by Perlmutter as "grey dust". The normal dust
scattering reddens the light from distant objects by
scattering blue more than red, but that is an intensity
effect that leaves spectral lines unshifted. I think
some people have suggested that small-angle scattering can
reduce the energy of individual photons moving spectral
lines and it is this variant of tired light that Wright was
addressing. The "tired light" description in my experience
is a generic term that includes both the types using
intrinsic energy loss as well as those in which some
external agent is involved. That may explain why you
didn't understand my talking of subsets earlier.

The only people that even CALL photon scattering a "tired light" theory
are
people like Ned Wright -- who only use the claim as a strawman.


I haven't looked at who suggested it but the first link I
found on trying gives a number of pointers:

http://www.eitgaastra.nl/timesgr/part1/2.html


"According to Zwicky's tired light hypothesis the
vibrations of light are steadily slowed down over
long periods of time travelling through the universe,
and so the redshift is the result of fatigue."

That is what you seem to regard as tired light, however
the page goes on to say:

"[May 2003: Last year I noticed that many people have
suggested the same tired light idea, i.e. light loses
energy because of interaction with other (gravity/ether)
entities in extragalactic space, see for example
professor Assis2, professor Ghosh3, Dr. Van Flandern9
and various authors in Pushing Gravity5. End May 2003]"

and specifically:

"[October 2003: Also a mechanism like Compton scattering
may be classified as a tired light concept75. Compton
scattering is scattering of photons by particles (like
electrons and protons) distributed in space, which are
believed to result in energy losses and wavelengths that
are redshifted in proportion to distance travelled. See
also an article by Assis and Neves76 (next to Mitchell's
book75) if you want to know more about the history and
variety of tired light concepts. End October 2003]"

"[May 2004: Professor Wright rules out Compton shift as a
tired light model option, because Compton shift (for
instance by electrons) would change the momentum of a
photon, which would lead to a blurring of distant objects
which is not observed94. He may be right about this.
However, tired light caused by ether/gravity particles is
something completely different"

So I think Tired Light is a more all-encompassing term.

George


  #90  
Old December 24th 04, 03:01 PM
N:dlzc D:aol T:com \(dlzc\)
external usenet poster
 
Posts: n/a
Default

Gentlemen:

"George Dishman" wrote in message
...

"greywolf42" wrote in message
...

....
The "formal definition" of the Hubble constant was made by Eddington,
in
the 1930s. It didn't have an H_0, IIRC.

Citation please.


Certainly. I don't own a copy of Eddington's book.


Neither do I, and I haven't found his paper on-line.


I can perhaps complete some references anyway:
URL:http://www.iop.org/EJ/abstract/0959-5309/44/1/303
A presidential address, titled "The expanding universe", 1932
(I don't have access)

URL:http://www.phys-astro.sonoma.edu/Bru...sts/Eddington/
.... a little tiny bit on his life.

URL:http://www.amazon.com/exec/obidos/tg...?v=glance&st=*
"The Expanding Universe : Astronomy's 'Great Debate', 1900-1931"
published in 1933 (and I've seen this date as 1931, 1932, and 1933)
.... be sure and click on "more"

David A. Smith


 



