#1
According to this article:
https://medium.com/the-cosmic-compan...t-e8a649a32ec8
"Is the Universe Younger than We Thought?", the age of the universe is
not 13.8 billion years but 11 billion years. This seems to me a rather
big shift, specifically because it is based on gravitational lensing.

Nicolaas Vroom

[[Mod. note -- This article is based on this press release, "High value
for Hubble constant from two gravitational lenses"
https://www.mpa-garching.mpg.de/743539/news20190913
which in turn describes this research paper, "A measurement of the
Hubble constant from angular diameter distances to two gravitational
lenses"
https://science.sciencemag.org/content/365/6458/1134
which is nicely synopsized in this commentary, "An expanding
controversy" /An independently calibrated measurement fortifies the
debate around Hubble's constant/
https://science.sciencemag.org/content/365/6458/1076
Figure 6 of the /Science/ research article gives a nice comparison of
some of the recent Hubble-constant measurements, showing that the
choice of cosmological model (at least within the range of models
considered by the authors) makes rather little difference.
-- jt]]
#2
Details at
https://www.iau.org/news/pressreleas.../iau1910/?lang

This one looks like a comet, but observations are just beginning.

--
Help keep our newsgroup healthy; please don't feed the trolls.
Steve Willner                     Phone 617-495-7123
Cambridge, MA 02138 USA
#3
In article ,
Nicolaas Vroom writes:

> According to this article:
> https://medium.com/the-cosmic-compan...ger-than-we-thought-e8a649a32ec8
> "Is the Universe Younger than We Thought?", the age of the universe
> is not 13.8 billion years but 11 billion years. This seems to me a
> rather big shift, specifically because it is based on gravitational
> lensing.

All else being equal, the age of the universe is inversely proportional
to the Hubble constant.

The headline doesn't deserve any prizes. There are many measurements of
the Hubble constant, and the field has a history of discrepant
measurements (i.e. measurements which differ by significantly more than
their formal uncertainties). Recently, the debate has shifted from "50
or 100?" to "67 or 73?", but since the formal uncertainties have also
gone down, one could argue that the "tension" is comparable to that in
the old days. There is more than one measurement supporting 67, and
more than one supporting 73. So ONE additional measurement doesn't mean
"the textbooks will have to be rewritten" or some such nonsense, but
rather is an additional piece of information which must be taken into
account.

It should be noted that there are many measurements of the Hubble
constant from gravitational lenses. Not all agree. The biggest source
of uncertainty is probably the fact that the result depends on knowing
the mass distribution of the lens galaxy. For what it's worth, I am
co-author on a paper doing this sort of thing:
http://www.astro.multivax.de:8000/he...ons/info/0218.html
Our value back then, almost 20 years ago, was 69+13/-19 at 95%
confidence. The first two authors recently revised this after
re-analysing the data, arriving at 72+/-2.6 at 1 sigma, though this
includes a better (published in 2004) lens model as well. The papers
are arXiv:astro-ph/9811282 and arXiv:1802.10088. Both are published in
MNRAS (links to freely accessible versions are at the arXiv references
above). It's tricky to get right. As Shapley said, "No one trusts a
model except the man who wrote it; everyone trusts an observation
except the man who made it." :-)

The above uses just the gravitational-lens system to measure the Hubble
constant. Such measurements have also been made before for the two lens
systems mentioned in the press release. What one actually measures is
basically the distance to the lens. Since the redshift is known, one
knows the distance for this particular redshift; knowing the redshift
and the distance gives the Hubble constant. In the new work, this was
then used to calibrate supernovae with known redshifts. (Determining
the Hubble constant from the magnitude-redshift relation for supernovae
is also possible, of course (and higher-order effects allow one to
determine the cosmological constant and the density parameter
(independently of the Hubble constant), for which the 2011 Nobel Prize
was awarded), but one needs to know the absolute luminosity, which has
to be calibrated in some way.) Since they measure the distance at two
separate redshifts, the cosmology cancels out (at least within the
range of otherwise reasonable models).

Their value is 82+/-8, which is consistent with the current "high"
measurements. There are many reasons to doubt that the universe is only
11 billion years old, so a value of 73 is probably about right. The MPA
press release is more carefully worded ("While the uncertainty is still
relatively large") and notes that the value is higher than that
inferred from the CMB. However, many would say that the anomaly is that
the CMB (in particular the Planck data) seem to indicate a low value.

> Figure 6 of the /Science/ research article gives a nice comparison
> of some of the recent Hubble-constant measurements, showing that the
> choice of cosmological model (at least within the range of models
> considered by the authors) makes rather little difference. -- jt]]

In principle, the cosmological model can make a difference, but these
days we believe that the values of lambda and Omega have been narrowed
down enough that there isn't much room to move; measuring the distance
at two different redshifts essentially pins it down.
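To put numbers on "inversely proportional", here is a back-of-the-
envelope sketch, not taken from the paper. The 0.95 age factor is an
assumption, roughly right for a flat Lambda-CDM model with Omega_m
around 0.3; the exact factor depends on the cosmological parameters.

```python
# Hubble time 1/H0 in Gyr, for H0 given in km/s/Mpc.
KM_PER_MPC = 3.0857e19  # kilometres per megaparsec
S_PER_GYR = 3.156e16    # seconds per gigayear

def hubble_time_gyr(h0_km_s_mpc):
    """Return the Hubble time 1/H0 in Gyr."""
    return KM_PER_MPC / h0_km_s_mpc / S_PER_GYR

for h0 in (67.4, 73.0, 82.0):
    t_h = hubble_time_gyr(h0)
    # Age ~ 0.95/H0 is an assumed flat-Lambda-CDM factor, not exact.
    print(f"H0 = {h0:4.1f} -> 1/H0 = {t_h:4.1f} Gyr, age ~ {0.95 * t_h:4.1f} Gyr")
```

H0 = 67.4 gives an age of about 13.8 Gyr, while H0 = 82 gives about
11.3 Gyr -- which is exactly the shift behind the "11 billion years"
headline.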
#4
In article ,
Nicolaas Vroom writes:

> According to this article:
> https://medium.com/the-cosmic-compan...t-e8a649a32ec8

> which in turn describes this research paper "A measurement of the
> Hubble constant from angular diameter distances to two gravitational
> lenses"
> https://science.sciencemag.org/content/365/6458/1134

The paper is behind a paywall, but the Abstract, which is public,
summarizes the results. Two gravitational lenses at z=0.295 and 0.6304
are used to calibrate SN distances. The derived Hubble-Lemaitre
parameter H_0 is 82+/-8, about 1 sigma larger than other local
determinations and 1.5 sigma larger than the Planck value. As Phillip
wrote, the observations have their uncertainties, but 50 or so lenses
would measure H_0 independently of other methods.

--
Help keep our newsgroup healthy; please don't feed the trolls.
Steve Willner                     Phone 617-495-7123
Cambridge, MA 02138 USA

[[Mod. note -- I've now found the preprint -- it's arXiv:1906.06712.
Sorry for not including that in my original mod.note. -- jt]]
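As a rough illustration of how such "N sigma" figures arise, the naive
comparison of two independent Gaussian measurements is
|H1 - H2| / sqrt(err1^2 + err2^2). This is a sketch under stated
assumptions: the 74.0+/-1.4 distance-ladder value is a representative
figure assumed here for illustration, and the naive formula will not
exactly reproduce the paper's own 1.5 sigma, which reflects a more
careful error treatment.

```python
from math import hypot

def tension_sigma(h1, err1, h2, err2):
    """Naive tension (in sigma) between two independent Gaussian values."""
    return abs(h1 - h2) / hypot(err1, err2)

# 82 +/- 8 from the two lenses (this thread); 67.4 +/- 0.5 (Planck);
# 74.0 +/- 1.4 is an assumed, representative distance-ladder value.
print(f"vs. distance ladder: {tension_sigma(82.0, 8.0, 74.0, 1.4):.1f} sigma")
print(f"vs. Planck CMB:      {tension_sigma(82.0, 8.0, 67.4, 0.5):.1f} sigma")
```

This gives about 1.0 sigma and 1.8 sigma respectively, in the same
ballpark as the numbers quoted above.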
#5
Steve Willner wrote:
> > which in turn describes this research paper "A measurement of the
> > Hubble constant from angular diameter distances to two gravitational
> > lenses"
> > https://science.sciencemag.org/content/365/6458/1134
>
> The paper is behind a paywall, but the Abstract, which is public,
> summarizes the results. [[...]]

In a moderator's note, I wrote:

> [[Mod. note -- I've now found the preprint -- it's arXiv:1906.06712.
> Sorry for not including that in my original mod.note. -- jt]]

Oops, /dev/brain parity error. The preprint is 1909.06712, repeat
1909.06712. Sorry for the mixup.

-- Jonathan
#6
In article ,
"Jonathan Thornburg [remove -animal to reply]" writes:

> The preprint is 1909.06712

Two additional preprints are at
https://arxiv.org/abs/1907.04869 and
https://arxiv.org/abs/1910.06306
These report direct measurements of gravitational lens distances rather
than a recalibration of the standard distance ladder.

The lead author Shajib of 06306 spoke here today and showed an updated
version of Fig. 12 of the 04869 preprint. The upshot is that the
discrepancy between the local and the CMB measurements of H_0 is
between 4 and 5.7 sigma, depending on how conservative one wants to be
about assumptions. The impression I got is that either there's a
systematic error somewhere or there's new physics. The local H_0 is
based on two independent methods -- distance ladder and lensing -- so
big systematic errors in local H_0 seem unlikely. The CMB H_0 is based
on Planck, with WMAP having given an H_0 value more consistent with the
local one. "New physics" could be something as simple as time-varying
dark energy, but for now it's too soon to say much.

One other note from the talk: it takes an expert modeler about 8 months
to a year to model a single lens system. Shajib and others are trying
to automate the modeling, but until that's done, measuring a large
sample of lenses will be labor-intensive. Even then, it will be
cpu-intensive. Shajib mentioned 1 million cpu-hours for his model of
DES J0408-5354, and about 40 lenses are needed to give the desired
precision of local H_0.

--
Help keep our newsgroup healthy; please don't feed the trolls.
Steve Willner                     Phone 617-495-7123
Cambridge, MA 02138 USA
#8
On 19/10/15 10:17 PM, Steve Willner wrote:
In article , "Jonathan Thornburg [remove -animal to reply]" writes: The preprint is 1909.06712 Two additional preprints are at https://arxiv.org/abs/1907.04869 and https://arxiv.org/abs/1910.06306 ... ... One other note from the talk: it takes an expert modeler about 8 months to a year to model a single lens system. Shajib and others are trying to automate the modeling, You obviously do not mean that they do it by pencil and paper at this moment. So why is modeling labor-intensive? Isn't it just putting a point mass in front of the observed object, which only requires fitting the precise position and distance of the point mass using the observed image? (And if so, is the actual imaging with the point mass in some place the difficult part?) Or is the problem that the lensing object may be more extended than a point mass? (Or is it something worse!?) -- Jos [[Mod. note -- In these cases the lensing object is a galaxy (definitely not a point mass!). For precise results a nontrivial model of the galaxy's mass distribution (here parameterized by the (anisotropic) velocity dispersion of stars in the lensing galaxy's central region) is needed, which is the tricky (& hence labor-intensive) part. -- jt]] |
#9
In article , Jos Bergervoet writes:

> On 19/10/15 10:17 PM, Steve Willner wrote:
> > One other note from the talk: it takes an expert modeler about 8
> > months to a year to model a single lens system. Shajib and others
> > are trying to automate the modeling,
>
> You obviously do not mean that they do it by pencil and paper at this
> moment.

Right; it's done on computers these days. :-)

> So why is modeling labor-intensive? Isn't it just putting a point mass
> in front of the observed object, which only requires fitting the
> precise position and distance of the point mass using the observed
> image?

A point mass could be done with pencil and paper.

> (And if so, is the actual imaging with the point mass in some place
> the difficult part?) Or is the problem that the lensing object may be
> more extended than a point mass? (Or is it something worse!?)
>
> [[Mod. note -- In these cases the lensing object is a galaxy
> (definitely not a point mass!). For precise results a nontrivial model
> of the galaxy's mass distribution (here parameterized by the
> (anisotropic) velocity dispersion of stars in the lensing galaxy's
> central region) is needed, which is the tricky (& hence
> labor-intensive) part. -- jt]]

Right. In addition to the time delay, which depends on the potential,
one fits the image positions, which depend on the derivative of the
potential, and can also choose to fit the brightness of the images,
which depends on the second derivative of the potential. (Since the
brightness can be affected by microlensing, one might choose not to fit
for it, or to include a model of microlensing as well.) If the source
is resolved, then the brightness distribution of the source also plays
a role.

Also, one can (and, these days, probably must) relax the assumption
that there is only the lens which affects the light paths. While in
most cases a single-plane lens is a good enough approximation, the
assumption that the background metric is FLRW might not be. In
particular, if the path is underdense (apart from the part in the lens
plane, which of course is very overdense), then the distance as a
function of redshift is not that which is given by the standard
Friedmann model. At this level of precision, it's probably not enough
to simply parameterize this; rather, one needs some model of the mass
distribution near the beams.

The devil is in the details. Think of the Hubble constant as determined
by the traditional methods (magnitude-redshift relation). In theory,
one needs ONE object whose redshift (this is actually quite easy) and
distance are known in order to compute it. In practice, of course,
there is much more involved (mostly details of the calibration of the
distance ladder), though this is still relatively straightforward
compared to a detailed lens model.
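To see why the point-mass case really is pencil-and-paper work, here
are the standard textbook relations, quoted for illustration rather
than taken from the papers under discussion:

```latex
% Point-mass lens equation: source position \beta, image position \theta,
% Einstein radius \theta_E (standard textbook results, for illustration).
\[
  \beta = \theta - \frac{\theta_E^2}{\theta},
  \qquad
  \theta_E = \sqrt{\frac{4GM}{c^2}\,\frac{D_{ls}}{D_l\,D_s}} ,
\]
% a quadratic in \theta, so the two image positions follow in closed form:
\[
  \theta_\pm = \tfrac{1}{2}\left(\beta \pm \sqrt{\beta^2 + 4\theta_E^2}\right).
\]
% The time delay between images scales with the time-delay distance;
% each angular diameter distance scales as 1/H_0, so the combination
% carries one net factor of 1/H_0 -- hence lensing time delays measure H_0:
\[
  \Delta t \;\propto\; (1+z_l)\,\frac{D_l D_s}{D_{ls}} \;\propto\; \frac{1}{H_0}.
\]
```

A galaxy lens replaces this single closed-form deflection with a
parameterized mass distribution, plus line-of-sight structure, and
fitting that model is the labor-intensive part described above.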
#10
In article ,
"Phillip Helbig (undress to reply)" writes:

> At this level of precision, it's probably not enough to simply
> parameterize this; rather, one needs some model of the mass
> distribution near the beams.

That's exactly right (at least to the extent I understood Shajib's
talk). In particular, one has to take into account the statistical
distribution of mass all along and near the light path and also (as
others wrote) the mass distribution of the lensing galaxy itself.

It's even worse than that in systems that have multiple galaxies
contributing to the lensing. Not only do their individual mass
distributions matter, their relative distances along the line of sight
are uncertain and must be modeled. Presumably all that can be automated
-- at the cost of many extra cpu cycles -- but it hasn't been done yet.

--
Help keep our newsgroup healthy; please don't feed the trolls.
Steve Willner                     Phone 617-495-7123
Cambridge, MA 02138 USA