#27  September 18th 04, 01:03 PM
Steve Willner

(sean) wrote in message ...
> I got 4.8 because I used only 12 measurements from the same day,
> or day 50817, as I have previously pointed out, whereas you have
> included one measurement from the previous day, which was one day
> before peak.


Near peak, the brightness is hardly changing, so it doesn't much
matter which points you include to determine the maximum brightness.

> The first point I would like to clarify is this: Am I correct in
> assuming that the lightcurve templates used in the graphs are made
> by introducing a stretch factor `s` into each lightcurve template?


Before we get to that, let's take a step back and see where we are.
What I've done is tell you how to interpret the data well enough for
a "sanity check" to see that time dilation is obvious. If you want to
do a real analysis, you are going to have to learn a lot more about
data analysis in general and astronomical data in particular: I'd say
about the equivalent of two years of graduate school. That is not
something you will get on Usenet.

> If so, then these templates are not rest frame but rest frame +
> time dilation, which would make them biased towards time dilation.


The templates obviously include time dilation; otherwise they wouldn't
fit the data! This does not make them biased. I suggested using the
templates because the work is done for you, and it is easy to see what
is going on. However, if you don't like that, what you need to do is
find the slope of the light curve on the part well after maximum,
where the brightness (in magnitude units) is declining by a constant
amount per day. Compare the slopes for near and distant SNe at the
same magnitudes relative to maximum light. For example, you might
compare the time to go from (max plus 0.75 mag) to (max plus 1.5 mag).
The problem with this is that the distant SNe cannot always be
followed to 1.5 mag fainter than maximum, so I suggested the templates
as a shortcut. They do, after all, incorporate everything we know
about light curves based on nearby SNe.

There is nothing magic about the specific numbers 0.75, 1.5 mag, but
it looks to me as though they ought to work OK. The actual fitting
procedure is left to you, but be sure you take the error bars into
account.
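The decay-time comparison described above can be sketched numerically. Everything below is illustrative: the tail points are invented, and `decay_time` is a hypothetical helper, not part of any published pipeline.

```python
import numpy as np

def decay_time(days, mags, max_mag, lo=0.75, hi=1.5, sigmas=None):
    """Days to fade from (max_mag + lo) to (max_mag + hi) mag, from a
    straight-line fit to the post-maximum decline (mag vs. day).
    If error bars are supplied, weight the fit by 1/sigma."""
    w = None if sigmas is None else 1.0 / np.asarray(sigmas)
    slope, intercept = np.polyfit(days, mags, 1, w=w)  # mag per day
    # invert m = slope*t + intercept for the two target magnitudes
    t_lo = (max_mag + lo - intercept) / slope
    t_hi = (max_mag + hi - intercept) / slope
    return t_hi - t_lo

# invented post-maximum tail (day, magnitude relative to max = 0)
near_days = np.array([10.0, 15.0, 20.0, 25.0])
near_mags = np.array([0.5, 0.9, 1.3, 1.7])   # fades 0.08 mag/day
t_near = decay_time(near_days, near_mags, max_mag=0.0)
print(t_near)  # about 9.4 days: 0.75 mag at 0.08 mag/day
```

For the distant SNe, the same function applied to their tails should give a decay time longer by roughly (1+z) if time dilation is real.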

> SW Look at the error bars! The "1.2" is highly uncertain. The two HST
> SW measurements around the same time have much smaller uncertainties.
> SW Those are what establish the peak of the light curve, which
> SW conveniently is put at 1.0 in the graph.
>
> You are completely wrong here and appear to have made yet
> another "elementary error".


This is not a very good way to solicit my help, and in view of the
past history of this thread, you would be wise to adopt a more
skeptical attitude towards your own conclusions. I am beginning to
suspect that you are advocating a preconceived notion rather than
trying to understand the data.

> If you want to look at just
> the HST measurements you'll notice that the first HST reading
> is 3.8. The second HST reading is the highest at 3.89 (1.0).
> That's the peak observation.


Please look again at what I wrote about error bars and (in the bit you
snipped) about the light curve being flat near maximum. You cannot
simply take the largest single measurement and call it the "time of
maximum." For 1997ek, all the measurements near "day 0" are
consistent with constant brightness lasting 8 days. The "time of
maximum" is in there somewhere. You can either determine it from the
template or else bypass determining the time of maximum altogether by
dealing only with the slope.
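To make the "flat near maximum" point concrete, here is one minimal way to estimate the time of maximum from all the near-peak points at once, weighted by their error bars, rather than taking the single highest measurement. The data are constructed for illustration (they are not 1997ek's):

```python
import numpy as np

# invented near-peak points (day, relative flux, 1-sigma error);
# the fluxes follow 1 - 0.002*(day - 1)**2, so the true peak is day 1
days  = np.array([-6.0, -3.0, 0.0, 3.0, 6.0, 9.0])
flux  = np.array([0.902, 0.968, 0.998, 0.992, 0.950, 0.872])
sigma = np.array([0.05, 0.03, 0.02, 0.02, 0.03, 0.05])

# quadratic fit weighted by the error bars; the vertex of the
# parabola estimates the time of maximum
a, b, c = np.polyfit(days, flux, 2, w=1.0 / sigma)
t_max = -b / (2.0 * a)
print(t_max)  # recovers the built-in peak at day 1.0
```

Note that the single brightest point here is at day 0, yet the fit correctly places the maximum at day 1: with a flat top, no individual measurement pins down the peak.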

> And the next or third HST reading
> is at 1.59 (0.4), and it's 22 days after the peak reading!


If we use your definition of peak, it is the "5.89" at day 50817.6, 29
days before the HST "1.54" value. (As noted above, this is not a
useful way to determine the time of maximum.)

> Regarding the other ground based measurements: You ignore the
> fact that it's not one single ground based measurement at 1.2
> with "error margins", but rather *13* different, separate
> observations over 2 days that average out to 1.2. And if you
> say that 3.9 is 1.0 linear flux, then at least 3 of those 13
> observations do not include 1.0 within their error margins.


This is normal (pun not intended, but I'll leave it in). The error
bars given are standard errors, often called "one sigma." If the
measurement errors are Gaussian (and they should be close to that),
about 1/3 of all measurements will be outside the +/- one sigma
boundaries. This is undergraduate-level data analysis. You will have
to learn at this level before even starting those two years of
graduate school I mentioned.
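The one-sigma statement is easy to check numerically; this sketch just simulates Gaussian measurement errors and counts the fraction that land outside the error bars:

```python
import numpy as np

# draw many "measurements" with Gaussian errors and count how often
# the measured value misses the truth by more than one sigma
rng = np.random.default_rng(0)
errors = rng.normal(loc=0.0, scale=1.0, size=100_000)
outside = np.mean(np.abs(errors) > 1.0)
print(outside)  # close to 0.317: roughly 1/3 outside +/- one sigma
```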

> Furthermore you haven't commented on the R band measurements
> from 1998as (roughly comparable to rest frame B band), which
> very clearly give a 20 day decay for 1 mag from 1 to 0.4 as
> compared to an expected 19.


I make it about 22 days. Certainly not as short as 17, which is what
1995E shows at B. In fact, 17*(1.35/1.01) = 22.7, which looks like
pretty good agreement to me. I don't see how you can possibly make
the decay as short as 20 days, let alone 17.
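The scaling above is just the (1+z) time-dilation stretch; as a quick check of the arithmetic:

```python
# arithmetic check of the scaling quoted above: 1995E's 17-day B-band
# decay, stretched by the ratio of (1+z) factors implied in the text
t_nearby = 17.0                 # days, 1995E at B
dilation = 1.35 / 1.01          # (1 + z_distant) / (1 + z_nearby)
t_expected = t_nearby * dilation
print(round(t_expected, 1))     # 22.7 days, close to the ~22 measured
```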

> There
> is no way, even with error margins taken into account, that
> these observations can support anything but
> a "no time dilation" argument.


I would take out the "no." This again gives me the impression that
your mind is made up, regardless of what the data show.

> Or for instance the I band 1998ba, where a 1 mag decay from
> 1.0 to 0.4 gives about 32 days, which compares at z=0.43 to
> a 569 nm lightcurve which is about 27 days.


The I light curve isn't well sampled; there are no measurements at all
near peak. For R, the decay time from template-fitting is 20 days; we
expect 24 days from 1995E. However, there's a big error bar on the
first measurement, meaning the time of maximum is poorly determined.
This is one where you have to look at the slope of the decay. And of
course for a sanity check, one is best advised to look at the SNe with
the highest redshifts.

I have neither time nor desire to deal with all the others. I have
given you a roadmap; it's up to you to use it or not.