#2
In article, Jos Bergervoet writes:

On 19/10/15 10:17 PM, Steve Willner wrote:

In article, "Jonathan Thornburg [remove -animal to reply]" writes:

The preprint is 1909.06712

Two additional preprints are at https://arxiv.org/abs/1907.04869 and https://arxiv.org/abs/1910.06306 ...
...
One other note from the talk: it takes an expert modeler about 8 months to a year to model a single lens system. Shajib and others are trying to automate the modeling,

You obviously do not mean that they do it by pencil and paper at this moment.

Right; it's done on computers these days. :-)

So why is modeling labor-intensive? Isn't it just putting a point mass in front of the observed object, which only requires fitting the precise position and distance of the point mass using the observed image? A point mass could be done with pencil and paper. (And if so, is the actual imaging with the point mass in some place the difficult part?) Or is the problem that the lensing object may be more extended than a point mass? (Or is it something worse!?)

[[Mod. note -- In these cases the lensing object is a galaxy (definitely not a point mass!). For precise results a nontrivial model of the galaxy's mass distribution (here parameterized by the (anisotropic) velocity dispersion of stars in the lensing galaxy's central region) is needed, which is the tricky (& hence labor-intensive) part. -- jt]]

Right. In addition to the time delay, which depends on the potential, one fits the image positions, which depend on the derivative of the potential, and one can also choose to fit the brightness of the images, which depends on the second derivative of the potential. (Since the brightness can be affected by microlensing, one might choose not to fit it, or to include a model of microlensing as well.) If the source is resolved, then the brightness distribution of the source also plays a role.

Also, one can (and, these days, probably must) relax the assumption that only the lens affects the light paths. While in most cases a single-plane lens is a good enough approximation, the assumption that the background metric is FLRW might not be. In particular, if the path is underdense (apart from the part in the lens plane, which of course is very overdense), then the distance as a function of redshift is not that given by the standard Friedmann model. At this level of precision, it is probably not enough simply to parameterize this; rather, one needs some model of the mass distribution near the beams.

The devil is in the details. Think of the Hubble constant as determined by the traditional methods (the magnitude--redshift relation). In theory, one needs ONE object whose redshift (this is actually quite easy) and distance are known in order to compute it. In practice, of course, much more is involved (mostly details of the calibration of the distance ladder), though this is still relatively straightforward compared to a detailed lens model.
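For reference, the relations alluded to above can be written out in the standard single-plane, thin-lens formalism (textbook material, not the specific parameterization used in the papers cited in this thread), with psi the scaled projected potential of the lensing galaxy:

\tau(\vec\theta) = \frac{D_{\Delta t}}{c}\left[\tfrac{1}{2}(\vec\theta - \vec\beta)^2 - \psi(\vec\theta)\right],
\qquad
D_{\Delta t} \equiv (1+z_{\rm l})\,\frac{D_{\rm l}\,D_{\rm s}}{D_{\rm ls}} \propto \frac{1}{H_0}

\nabla_{\vec\theta}\,\tau = 0
\;\Longrightarrow\;
\vec\beta = \vec\theta - \nabla\psi(\vec\theta)
\qquad \text{(image positions: first derivatives of } \psi)

\mu^{-1} = \det\!\left(\delta_{ij} - \frac{\partial^{2}\psi}{\partial\theta_i\,\partial\theta_j}\right)
\qquad \text{(image brightnesses: second derivatives of } \psi)

The observable is the delay between images, \Delta t_{AB} = \tau(\vec\theta_A) - \tau(\vec\theta_B), so the inferred H_0 scales inversely with D_{\Delta t} and inherits any error in the model for \psi.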
#3
In article,
"Phillip Helbig (undress to reply)" writes:

> At this level of precision, it's probably not enough to simply
> parameterize this, but rather one needs some model of the mass
> distribution near the beams.

That's exactly right (at least to the extent I understood Shajib's talk). In particular, one has to take into account the statistical distribution of mass all along and near the light path, and also (as others wrote) the mass distribution of the lensing galaxy itself.

It's even worse than that in systems that have multiple galaxies contributing to the lensing. Not only do their individual mass distributions matter, their relative distances along the line of sight are uncertain and must be modeled. Presumably all that can be automated -- at the cost of many extra CPU cycles -- but it hasn't been done yet.

--
Help keep our newsgroup healthy; please don't feed the trolls.
Steve Willner            Phone 617-495-7123
Cambridge, MA 02138 USA
#4
In article, (Steve Willner) writes:

> In article, "Phillip Helbig (undress to reply)" writes:
>> At this level of precision, it's probably not enough to simply
>> parameterize this, but rather one needs some model of the mass
>> distribution near the beams.
>
> That's exactly right (at least to the extent I understood Shajib's
> talk).  In particular, one has to take into account the statistical
> distribution of mass all along and near the light path and also (as
> others wrote) the mass distribution of the lensing galaxy itself.

These effects, i.e. that the mass in the universe is at least partially distributed clumpily (apart from the gravitational lens itself, which is, essentially by definition, a big clump), also influence the luminosity distance, which of course can be used to determine not just the Hubble constant but also the other cosmological parameters. However, it's not as big a worry there, for several reasons.

As far as the Hubble constant goes, the distances involved are, cosmologically speaking, relatively small, whereas the effects of such small-scale inhomogeneities increase with redshift.

Whether at low redshift for the Hubble constant or at high redshift for the other parameters, usually several objects, over a range of redshifts, are used. This has two advantages. One is that these density fluctuations might (for similar redshifts) average out in some sense. The other is that the degeneracy is broken because several redshifts are involved. (If the inhomogeneity is an additional parameter which can also affect the distance as calculated from the redshift, then with just one object at one redshift one can't tell what effect it has; but since the dependence on redshift is different for the inhomogeneities, the Hubble constant, and the other parameters, some of the degeneracy is broken.)

At the level of precision required today, simply describing the effect of small-scale inhomogeneities with one parameter is not good enough, though it does allow one to get an idea of the possible size of the effect. To improve on that, there are two approaches. One is to try to measure the mass along the line of sight, e.g. by weak lensing. The other is to have some model of structure formation and calculate what the effect must be, at least in a statistical sense. There is a huge literature on this topic, though it is usually not mentioned in more-popular presentations. I even wrote a couple of papers myself on it:

http://www.astro.multivax.de:8000/he...ons/info/etasnia.html
http://www.astro.multivax.de:8000/he...ons/info/etasnia2.html
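To make the degeneracy-breaking argument concrete, here is a toy numerical sketch in Python. The (1 + eps*z) "clumpiness" factor is a made-up placeholder with a redshift dependence different from that of H0; it is not the Dyer--Roeder prescription or anything from the papers linked above, and the whole exercise only illustrates the principle that a single redshift cannot separate the two parameters while a range of redshifts can.

# Toy illustration (not any published analysis): one redshift cannot
# separate H0 from a line-of-sight "clumpiness" nuisance parameter,
# but several redshifts can, because the two affect the distance with
# different redshift dependences.

import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458  # speed of light in km/s

def lum_dist(z, H0=70.0, Om=0.3, eps=0.0):
    """Flat-LCDM luminosity distance (Mpc), times a toy (1 + eps*z) factor."""
    E = lambda zz: np.sqrt(Om * (1 + zz)**3 + (1 - Om))
    Dc, _ = quad(lambda zz: 1.0 / E(zz), 0.0, z)
    return (C_KM_S / H0) * (1 + z) * Dc * (1 + eps * z)

def jacobian(zs, step=1e-4):
    """Numerical derivatives of log D_L with respect to (H0, eps)."""
    rows = []
    for z in zs:
        d0 = lum_dist(z)
        dH = (np.log(lum_dist(z, H0=70.0 + step)) - np.log(d0)) / step
        de = (np.log(lum_dist(z, eps=step)) - np.log(d0)) / step
        rows.append([dH, de])
    return np.array(rows)

for zs in ([0.5], [0.1, 0.5, 1.0, 1.5]):
    J = jacobian(zs)
    F = J.T @ J                      # Fisher-like matrix with unit weights
    print(zs, "det =", np.linalg.det(F))
# One redshift: det is essentially zero (exact degeneracy).  Several
# redshifts: det > 0, because H0 rescales all distances equally while
# the toy clumpiness term grows with z.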
#6
In article,
Jos Bergervoet writes:

> Yes! So why are only 20 people attending?!

Attendance was far higher than that. The video shows only one side of the main floor of the room, and the other side is far more popular (perhaps because it has a better view of the screen). There's a balcony as well, and quite a few people leave at the end of the talk and before the question period. I didn't count, but I think the attendance was close to 100. Anyway it was about the normal number for a colloquium here. The colloquium list for the fall is at https://www.cfa.harvard.edu/colloquia if you want to see what other topics have been covered.

To the question in another message, I don't see why some local perturbation -- presumably abnormally low matter density around our location -- wouldn't solve the problem in principle, but if this were a viable explanation, I expect the speaker would have mentioned it. It's not as though no one has thought about the problem. The difficulty is probably the magnitude of the effect. I don't work in this area, though, so my opinion is not worth much.

--
Help keep our newsgroup healthy; please don't feed the trolls.
Steve Willner            Phone 617-495-7123
Cambridge, MA 02138 USA

[[Mod. note -- I apologise for the delay in posting this article, which was submitted on Fri, 8 Nov 2019 21:15:25 +0000. -- jt]]
#7
In article, Steve Willner writes:

> To the question in another message, I don't see why some local
> perturbation -- presumably abnormally low matter density around our
> location -- wouldn't solve the problem in principle, but if this were
> a viable explanation, I expect the speaker would have mentioned it.
> It's not as though no one has thought about the problem.  The
> difficulty is probably the magnitude of the effect.  I don't work in
> this area, though, so my opinion is not worth much.

I'm sure that someone must have looked at it, but is the measured Hubble constant the same in all directions on the sky? (I remember Sandage saying that even Hubble had found that it was, but I mean today, with much better data, where small effects are noticeable.) If it is, then such a density variation could be an explanation (assuming that it would otherwise work) only if we "just happened" to be sitting at the centre of such a local bubble.

Of course, some of us remember when the debate was not between 67 and 72, but between 50 and 100, with occasional suggestions of 42 (really) or even 30. And both the "high camp" and the "low camp" claimed uncertainties of about 10 per cent. That wasn't a debate over whether one used "local" or "large-scale" methods to measure it; rather, the value one got tended to depend on who was doing the measuring. Nevertheless, it is conceivable that there is some unknown systematic uncertainty* in one of the measurements.

---
* For some, "unknown systematic uncertainty" is a tautology. Others, however, include systematic uncertainties as part of the uncertainty budget. (Some people use "error" instead of "uncertainty". The latter is, I think, more correct, though in this case perhaps some unknown ERROR is the culprit.)
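As a sketch of how one would ask whether the measured Hubble constant is "the same in all directions": fit a monopole-plus-dipole model, H0(n) = m + d.n, to direction-dependent estimates. The data below are simulated purely to illustrate the kind of test; this is not a description of any published anisotropy analysis, and a real one would use actual calibrator or supernova directions with their covariances.

# Illustrative only: simulate H0 estimates in random sky directions with
# a small injected dipole, then recover (monopole, dipole) by linear
# least squares.

import numpy as np

rng = np.random.default_rng(1)

n_obj = 500
v = rng.normal(size=(n_obj, 3))                 # random sky directions
v /= np.linalg.norm(v, axis=1, keepdims=True)   # unit vectors

h0_true, dipole_true = 70.0, np.array([0.5, 0.0, 0.0])  # km/s/Mpc (made up)
noise = rng.normal(scale=2.0, size=n_obj)                # per-object scatter
h0_obs = h0_true + v @ dipole_true + noise

# Model: H0_i = m + d . n_i  (monopole m, dipole vector d)
A = np.column_stack([np.ones(n_obj), v])
coeffs, *_ = np.linalg.lstsq(A, h0_obs, rcond=None)
m_fit, d_fit = coeffs[0], coeffs[1:]

print(f"monopole = {m_fit:.2f} km/s/Mpc")
print(f"dipole amplitude = {np.linalg.norm(d_fit):.2f} km/s/Mpc")
# Sitting well off-centre of a local under-density would show up as a
# significant dipole in a fit like this to real data; sitting near the
# centre of a spherical bubble would leave the sky looking isotropic.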
#8
In article , "Richard D.
Saam" writes: The Ho data is tightening: ** Testing Low-Redshift Cosmic Acceleration with Large-Scale Structure https://arxiv.org/abs/2001.11044 Seshadri Nadathur, Will J. Percival, Florian Beutler, and Hans A. Winther Phys. Rev. Lett. 124, 221301 - Published 2 June 2020 we measure the Hubble constant to be Ho = 72.3 +/- 1.9 km/sec Mpc from BAO + voids at z2 and Ho = 69.0 +/- 1.2 km/sec Mpc from BAO when adding Lyman alpha at BAO at z=2.34 ** I guess it depends on what you mean by "tightening". If one measurement is X with uncertainty A, and another Z with uncertainty C, and they are 5 sigma apart, then someone measures, say, Y with uncertainty B, which is between the other two and compatible with both within 3 sigma, that doesn't mean that Y is correct. Of course, if someone does measure that, they will probably publish it, while someone measuring something, say, 5 sigma below the lowest measurement, or above the highest, might be less likely to do so. It could be that Y is close to the true value, but perhaps all are wrong, or X is closer, or Z. The problem can be resolved only if one understands why the measurements differ by more than a reasonable amount. |