#9, Old October 16th 19, 06:57 PM, posted to sci.astro.research
Phillip Helbig (undress to reply)
Is the Universe Younger than We Thought?

In article , Jos Bergervoet writes:

> On 19/10/15 10:17 PM, Steve Willner wrote:
> > In article ,
> > "Jonathan Thornburg [remove -animal to reply]" writes:
> > > The preprint is 1909.06712
> >
> > Two additional preprints are at
> > https://arxiv.org/abs/1907.04869 and
> > https://arxiv.org/abs/1910.06306
> >
> > ...
> >
> > One other note from the talk: it takes an expert modeler about 8 months
> > to a year to model a single lens system. Shajib and others are trying
> > to automate the modeling,


> You obviously do not mean that they do it by pencil and paper at this
> moment.

Right; it's done on computers these days. :-)

> So why is modeling labor-intensive? Isn't it just putting a
> point mass in front of the observed object, which only requires fitting
> the precise position and distance of the point mass using the observed
> image?

A point mass could be done with pencil and paper.

> (And if so, is the actual imaging with the point mass in some
> place the difficult part?) Or is the problem that the lensing object
> may be more extended than a point mass? (Or is it something worse!?)


> [[Mod. note -- In these cases the lensing object is a galaxy (definitely
> not a point mass!). For precise results a nontrivial model of the
> galaxy's mass distribution (here parameterized by the (anisotropic)
> velocity dispersion of stars in the lensing galaxy's central region)
> is needed, which is the tricky (& hence labor-intensive) part.
> -- jt]]

Right.

In addition to the time delay, which depends on the potential, one fits
the image positions, which depend on the derivative of the potential,
and can also choose to fit the brightness of the images, which depends
on the second derivative of the potential. (Since the brightness can be
affected by microlensing, one might choose not to fit for it, or to
include a model of microlensing as well.) If the source is resolved,
then the brightness distribution of the source also plays a role.
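To make the "successive derivatives" point concrete, here is a minimal sketch for the simplest possible case, a one-dimensional point-mass lens with hypothetical, illustrative values of the Einstein radius and source position (this is an illustration of the principle, not how a real galaxy-lens model is fit):

```python
import numpy as np

theta_E = 1.0   # Einstein radius (angular units; hypothetical value)
beta = 0.3      # source position (hypothetical value)

# Image positions from the point-mass lens equation beta = theta - theta_E^2/theta
disc = np.sqrt(beta**2 + 4*theta_E**2)
theta_p = 0.5*(beta + disc)     # outer image (minimum of the arrival time)
theta_m = 0.5*(beta - disc)     # inner image, on the other side of the lens

def psi(theta):
    """Lensing potential: enters the time delay directly."""
    return theta_E**2 * np.log(abs(theta))

def alpha(theta):
    """Deflection angle = first derivative of psi: fixes image positions."""
    return theta_E**2 / theta

def mu(theta):
    """Signed magnification, from the second derivative of psi:
    fixes relative image brightness."""
    return 1.0 / (1.0 - (theta_E/theta)**4)

def tau(theta):
    """Fermat potential (time delay up to constants):
    geometric term minus the potential."""
    return 0.5*(theta - beta)**2 - psi(theta)

for th in (theta_p, theta_m):
    # Both images satisfy the lens equation
    assert abs(th - alpha(th) - beta) < 1e-12

print("magnifications:", mu(theta_p), mu(theta_m))
print("relative Fermat potential:", tau(theta_m) - tau(theta_p))
```

The entire model here is two numbers, which is why a point mass is a pencil-and-paper exercise; a realistic galaxy lens replaces `psi` with a parameterized mass distribution (plus external shear, etc.), and the fit becomes the labor-intensive part.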

Also, one can (and, these days, probably must) relax the assumption that
the lens is the only mass which affects the light paths. While in most
cases a single-plane lens is a good enough approximation, the assumption
that the background metric is FLRW might not be. In particular, if the
path is underdense (apart from the part in the lens plane, which of
course is very overdense), then the distance as a function of redshift
is not that which is given by the standard Friedmann model. At this
level of precision, it's probably not enough to simply parameterize
this, but rather one needs some model of the mass distribution near the
beams.
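One classic toy parameterization of this effect is the Dyer--Roeder distance, where a smoothness parameter alpha gives the fraction of the matter density actually inside the beam (alpha = 1 recovers the filled-beam Friedmann result, alpha < 1 an underdense beam). A minimal numerical sketch, assuming flat Lambda-CDM with an illustrative Omega_m = 0.3 and distances in units of c/H0 (again, the point above is that a single parameter like this is probably not adequate at the required precision):

```python
import numpy as np
from scipy.integrate import quad, solve_ivp

# Illustrative flat LCDM background (not fitted values)
Om, OL = 0.3, 0.7
E = lambda z: np.sqrt(Om*(1+z)**3 + OL)     # H(z)/H0
dE = lambda z: 1.5*Om*(1+z)**2 / E(z)       # dE/dz

def D_dyer_roeder(z_s, alpha_smooth):
    """Angular diameter distance (units of c/H0) from the Dyer-Roeder
    equation, with a fraction alpha_smooth of the matter in the beam."""
    def rhs(z, y):
        D, Dp = y
        Dpp = (-(2/(1+z) + dE(z)/E(z))*Dp
               - 1.5*alpha_smooth*Om*(1+z)/E(z)**2 * D)
        return [Dp, Dpp]
    # Initial conditions at the observer: D(0)=0, dD/dz(0)=1 (units of c/H0)
    sol = solve_ivp(rhs, (0.0, z_s), [0.0, 1.0], rtol=1e-9, atol=1e-12)
    return sol.y[0, -1]

def D_flrw(z_s):
    """Standard filled-beam angular diameter distance in flat FLRW."""
    chi, _ = quad(lambda z: 1.0/E(z), 0.0, z_s)
    return chi/(1+z_s)

z = 2.0
print("filled beam (alpha=1):", D_dyer_roeder(z, 1.0), "vs FLRW:", D_flrw(z))
print("underdense beam (alpha=0.5):", D_dyer_roeder(z, 0.5))
```

An underdense beam focuses less, so the source looks farther away than the standard distance--redshift relation would say; feeding the filled-beam distance into the time-delay analysis would then bias the inferred Hubble constant.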

The devil is in the details.

Think of the Hubble constant as determined by the traditional methods
(magnitude--redshift relation). In theory, one needs ONE object whose
redshift (this is actually quite easy) and distance are known in order
to compute it. In practice, of course, there is much more involved
(mostly details of the calibration of the distance ladder), though this
is still relatively straightforward compared to a detailed lens model.
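The "one object" arithmetic really is that simple at low redshift, where cz approximates the recession velocity. A sketch with hypothetical numbers (ignoring peculiar velocities and every distance-ladder systematic, which is where the real work lies):

```python
# Toy low-redshift Hubble-constant estimate. The redshift and distance
# below are hypothetical illustrative values, not measurements.
c = 299792.458        # speed of light, km/s

z = 0.01              # measured redshift (the easy part)
d_mpc = 43.0          # calibrated distance in Mpc (the hard part)

H0 = c * z / d_mpc    # km/s/Mpc; valid only for z << 1
print(f"H0 ~ {H0:.1f} km/s/Mpc")
```

In practice the uncertainty lives almost entirely in `d_mpc` (the calibration chain), which is the sense in which this is simpler than a lens model: the hard part is one number per object, not a full mass distribution.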