
Sep 24, 2019, 12:41:53 AM


According to this article:

https://medium.com/the-cosmic-companion/is-the-universe-younger-than-we-thought-e8a649a32ec8

"Is the Universe Younger than We Thought?", the age of the universe is
not 13.8 billion years but 11 billion years.

This seems, to me, a rather big shift, specifically because it is based
on gravitational lensing.

Nicolaas Vroom

[[Mod. note -- This article is based on this press release

"High value for Hubble constant from two gravitational lenses"

https://www.mpa-garching.mpg.de/743539/news20190913

which in turn describes this research paper

"A measurement of the Hubble constant from angular diameter distances

to two gravitational lenses"

https://science.sciencemag.org/content/365/6458/1134

which is nicely synopsized in this commentary

"An expanding controversy"

/An independently calibrated measurement fortifies the debate

around Hubble's constant/

https://science.sciencemag.org/content/365/6458/1076

Figure 6 of the /Science/ research article gives a nice comparison

of some of the recent Hubble-constant measurements, showing that the

choice of cosmological model (at least within the range of models

considered by the authors) makes rather little difference.

-- jt]]


Sep 24, 2019, 2:17:05 PM


Details at

https://www.iau.org/news/pressreleases/detail/iau1910/?lang

This one looks like a comet, but observations are just beginning.

--

Help keep our newsgroup healthy; please don't feed the trolls.

Steve Willner Phone 617-495-7123 swil...@cfa.harvard.edu

Cambridge, MA 02138 USA


Sep 24, 2019, 4:00:08 PM


In article <71609d81-f915-4d7d...@googlegroups.com>, Nicolaas Vroom
writes:

> https://medium.com/the-cosmic-companion/is-the-universe-younger-than-we-thought-e8a649a32ec8
> "Is the Universe Younger than We Thought?", the age of the universe is
> not 13.8 billion years but 11 billion years.
> This seems, to me, a rather big shift, specifically because it is based
> on gravitational lensing.

All else being equal, the age of the universe is inversely proportional
to the Hubble constant.

The headline doesn't deserve any prizes. There are many measurements of

the Hubble constant, and the field has a history of discrepant

measurements (i.e. measurements which differ by significantly more than

their formal uncertainties). Recently, the debate has shifted from "50

or 100?" to "67 or 73?" but since the formal uncertainties have also

gone down, one could argue that the "tension" is comparable to that in

the old days. There is more than one measurement supporting 67, and

more than one supporting 73. So, ONE additional measurement doesn't

mean "the textbooks will have to be rewritten" or some such nonsense,

but rather is an additional piece of information which must be taken

into account.
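To see the scaling, here is a back-of-envelope sketch: the Hubble time 1/H0 sets the age scale, and the true age in a standard model is this times a model-dependent factor of order one (about 0.95 for standard LambdaCDM), which is why a higher H0 means a younger universe:

```python
# Hubble time t_H = 1/H0, converted from km/s/Mpc to Gyr.
# The actual age of the universe is t_H times a model-dependent
# factor of order one (roughly 0.95 for standard LambdaCDM).

KM_PER_MPC = 3.0857e19   # kilometres in one megaparsec
SEC_PER_GYR = 3.1557e16  # seconds in one gigayear

def hubble_time_gyr(h0_km_s_mpc):
    """Hubble time 1/H0 in Gyr, for H0 given in km/s/Mpc."""
    return KM_PER_MPC / h0_km_s_mpc / SEC_PER_GYR

for h0 in (67, 73, 82):
    print(f"H0 = {h0}: t_H = {hubble_time_gyr(h0):.1f} Gyr")
# H0 = 67 gives about 14.6 Gyr, 73 gives 13.4 Gyr, and 82 gives
# roughly 11.9 Gyr -- which is where the "11 billion years" headline
# comes from.
```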

It should be noted that there are many measurements of the Hubble

constant from gravitational lenses. Not all agree. The biggest source

of uncertainty is probably the fact that the result depends on knowing

the mass distribution of the lens galaxy.

For what it's worth, I am co-author on a paper doing this sort of thing:

http://www.astro.multivax.de:8000/helbig/research/publications/info/0218.html

Our value back then, almost 20 years ago, was 69+13/-19 at 95%

confidence. The first two authors recently revised this after

re-analysing the data, arriving at 72+/-2.6 at 1 sigma, though this

includes a better (published in 2004) lens model as well. The papers

are arXiv:astro-ph/9811282 and arXiv:1802.10088. Both are published in

MNRAS (links to freely accessible versions are at the arXiv references

above). It's tricky to get right. As Shapley said, "No one trusts a

model except the man who wrote it; everyone trusts an observation except

the man who made it." :-)

The above uses just the gravitational-lens system to measure the Hubble

constant. Such measurements have also been made before for the two lens

systems mentioned in the press release. What one actually measures is

basically the distance to the lens. Since the redshift is known, one

knows the distance for this particular redshift; knowing the redshift

and the distance gives the Hubble constant. In the new work, this was

then used to calibrate supernovae with known redshifts. (Determining
the Hubble constant from the magnitude-redshift relation for supernovae
is also possible, of course (and higher-order effects allow one to
determine the cosmological constant and the density parameter
(independently of the Hubble constant), for which the 2011 Nobel Prize
was awarded), but one needs to know the absolute luminosity, which has
to be calibrated in some way.) Since they measure the distance at two

separate redshifts, the cosmology cancels out (at least within the range

of otherwise reasonable models). Their value is 82+/-8, which is

consistent with the current "high" measurements. There are many reasons

to doubt that the universe is only 11 billion years old, so a value of

73 is probably about right.

The MPA press release is more carefully worded: "While the uncertainty
is still relatively large" and notes that the value is higher than that
inferred from the CMB. However, many would say that the anomaly is that
the CMB (in particular the Planck data) seems to indicate a low value.

> Figure 6 of the /Science/ research article gives a nice comparison

> of some of the recent Hubble-constant measurements, showing that the

> choice of cosmological model (at least within the range of models

> considered by the authors) makes rather little difference.

> -- jt]]

In principle, the cosmological model can make a difference, but these

days we believe that the values of lambda and Omega have been narrowed

down enough that there isn't much room to move; measuring the distance

at two different redshifts essentially pins it down.
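To see why, here is a sketch: in a flat model the angular diameter distance is (c/H0) times a dimensionless factor that depends only weakly on the matter density at these redshifts, so re-fitting the same distance under different assumed Omega_m barely moves the inferred H0. All numerical values below are illustrative assumptions, not the paper's fit:

```python
# Sketch: infer H0 from a (mock) angular-diameter distance at one of
# the lens redshifts (z = 0.295) under different assumed matter
# densities, to show the weak dependence on the cosmological model.
# All numerical values are illustrative assumptions.

C_KM_S = 299792.458  # speed of light in km/s

def dimless_comoving(z, omega_m, steps=10000):
    """Midpoint-rule integral of dz'/E(z') for a flat LambdaCDM model."""
    dz = z / steps
    total = 0.0
    for i in range(steps):
        zmid = (i + 0.5) * dz
        e = (omega_m * (1.0 + zmid) ** 3 + (1.0 - omega_m)) ** 0.5
        total += dz / e
    return total

def angular_diameter_distance(z, h0, omega_m):
    """D_A in Mpc: (c/H0) * integral / (1 + z), flat LambdaCDM."""
    return C_KM_S / h0 * dimless_comoving(z, omega_m) / (1.0 + z)

z = 0.295
d_obs = angular_diameter_distance(z, 82.0, 0.30)  # mock "measured" D_A

# Invert the same distance under different assumed Omega_m:
for om in (0.25, 0.30, 0.35):
    h0 = C_KM_S * dimless_comoving(z, om) / ((1.0 + z) * d_obs)
    print(f"Omega_m = {om:.2f}: inferred H0 = {h0:.1f} km/s/Mpc")
```

The inferred values differ by only a couple of km/s/Mpc across that Omega_m range, which is the sense in which the choice of model "makes rather little difference" at these redshifts.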


Sep 26, 2019, 9:32:40 PM


> Accordingly to this article:

> https://medium.com/the-cosmic-companion/is-the-universe-younger-than-we-thought-e8a649a32ec8


> which in turn describes this research paper

> "A measurement of the Hubble constant from angular diameter distances

> to two gravitational lenses"

> https://science.sciencemag.org/content/365/6458/1134

The paper is behind a paywall, but the Abstract, which is public,

summarizes the results. Two gravitational lenses at z=0.295 and

0.6304 are used to calibrate SN distances. The derived Hubble-

Lemaitre parameter H_0 is 82+/-8, about 1 sigma larger than other

local determinations and 1.5 sigma larger than the Planck value.

As Phillip wrote, the observations have their uncertainties, but 50

or so lenses would measure H_0 independently of other methods.
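As a rough sketch of how such sigma figures are computed, here is the usual Gaussian-tension estimate. The comparison values below are illustrative assumptions, not the paper's own, and the paper's asymmetric error bars will shift its quoted numbers somewhat:

```python
# Gaussian "tension" between two measurements: the difference in
# units of the combined standard error.  A rough sketch only; the
# paper quotes asymmetric uncertainties, so its figures differ a bit.

def tension_sigma(v1, s1, v2, s2):
    """|v1 - v2| / sqrt(s1^2 + s2^2)."""
    return abs(v1 - v2) / (s1 ** 2 + s2 ** 2) ** 0.5

# Illustrative comparison values (assumed for this sketch):
lens = (82.0, 8.0)     # the lens-calibrated value from the abstract
local = (73.3, 1.8)    # a recent distance-ladder value (assumed)
planck = (67.4, 0.5)   # a Planck CMB value (assumed)

print(f"vs local:  {tension_sigma(*lens, *local):.1f} sigma")
print(f"vs Planck: {tension_sigma(*lens, *planck):.1f} sigma")
```

With the large +/-8 uncertainty, even a 15 km/s/Mpc offset from Planck is still under 2 sigma, which is why this single measurement cannot by itself settle the debate.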

--

Help keep our newsgroup healthy; please don't feed the trolls.

Steve Willner Phone 617-495-7123 swil...@cfa.harvard.edu

Cambridge, MA 02138 USA

[[Mod. note -- I've now found the preprint -- it's arXiv:1906.06712.
Sorry for not including that in my original mod.note. -- jt]]

Sep 26, 2019, 9:56:34 PM


Steve Willner <wil...@cfa.harvard.edu> wrote:

> > which in turn describes this research paper

> > "A measurement of the Hubble constant from angular diameter distances

> > to two gravitational lenses"

> > https://science.sciencemag.org/content/365/6458/1134

>

> The paper is behind a paywall, but the Abstract, which is public,

> summarizes the results. [[...]]

In a moderator's note, I wrote

> [[Mod. note -- I've now found the preprint -- it's arXiv:1906.06712.

> Sorry for not including that in my original mod.note. -- jt]]

Oops, /dev/brain parity error. The preprint is 1909.06712,
repeat 1909.06712. Sorry for the mixup. -- Jonathan

Oct 15, 2019, 4:17:26 PM


In article <mt2.1.4-8111...@iron.bkis-orchard.net>,

"Jonathan Thornburg [remove -animal to reply]"

<jth...@astro.indiana-zebra.edu> writes:

> The preprint is 1909.06712

Two additional preprints are at

https://arxiv.org/abs/1907.04869 and

https://arxiv.org/abs/1910.06306

These report direct measurements of gravitational lens distances

rather than a recalibration of the standard distance ladder.

The lead author Shajib of 06306 spoke here today and showed an

updated version of Fig 12 of the 04869 preprint. The upshot is that

the discrepancy between the local and the CMB measurements of H_0 is

between 4 and 5.7 sigma, depending on how conservative one wants to

be about assumptions. The impression I got is that either there's a

systematic error somewhere or there's new physics. The local H_0 is

based on two independent methods -- distance ladder and lensing -- so

big systematic errors in local H_0 seem unlikely. The CMB H_0 is

based on Planck with WMAP having given an H_0 value more consistent

with the local one. "New physics" could be something as simple as

time-varying dark energy, but for now it's too soon to say much.

One other note from the talk: it takes an expert modeler about 8 months

to a year to model a single lens system. Shajib and others are trying

to automate the modeling, but until that's done, measuring a large

sample of lenses will be labor-intensive. Even then, it will be

cpu-intensive. Shajib mentioned 1 million cpu-hours for his model of
DES J0408-5354, and about 40 lenses are needed to give the desired

precision of local H_0.
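The "about 40 lenses" figure is roughly what 1/sqrt(N) averaging implies if each lens alone gives a ~10% measurement (as 82 +/- 8 does); a sketch with an assumed target precision of about 1.5%:

```python
# How many independent lens systems are needed to reach a target
# fractional precision on H0, assuming each lens gives an independent
# measurement and the errors average down as 1/sqrt(N).
# The target precision is an assumption for illustration.
import math

def lenses_needed(per_lens_frac, target_frac):
    """Smallest N with per_lens_frac / sqrt(N) <= target_frac."""
    return math.ceil((per_lens_frac / target_frac) ** 2)

# 82 +/- 8 is roughly a 10% single-lens measurement:
print(lenses_needed(8 / 82, 0.015))  # -> 43, i.e. "about 40 lenses"
```

This of course assumes the per-lens errors are statistical and independent; shared systematics (e.g. in the lens-modelling assumptions) would not average down this way.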


Oct 15, 2019, 4:53:43 PM


In article <qo590f$c75$1...@dont-email.me>, wil...@cfa.harvard.edu (Steve


Willner) writes:

> Two additional preprints are at

> https://arxiv.org/abs/1907.04869 and

> https://arxiv.org/abs/1910.06306

> These report direct measurements of gravitational lens distances

> rather than a recalibration of the standard distance ladder.


> The upshot is that

> the discrepancy between the local and the CMB measurements of H_0 is

> between 4 and 5.7 sigma, depending on how conservative one wants to

> be about assumptions.


> "New physics" could be something as simple as

> time-varying dark energy

Now THAT'S an understatement! :-)

Also interesting on this topic: arXiv:1910.02978, which suggests that

the local Cepheid measurements are the odd ones. arXiv 1802.10088

re-analyses data on one lens system, resulting in a slightly longer time

delay and hence slightly lower Hubble constant, i.e. making this

particular system more consistent with the CMB value. Steve mentioned

how long the modelling takes. A modeller has the input data, though;

there is a huge amount of work just to get that far as well: observing,

reducing the data, and so on.

Oct 15, 2019, 11:49:09 PM


On 19/10/15 10:17 PM, Steve Willner wrote:

> In article <mt2.1.4-8111...@iron.bkis-orchard.net>,

> "Jonathan Thornburg [remove -animal to reply]"

> <jth...@astro.indiana-zebra.edu> writes:

>

>> The preprint is 1909.06712

>

> Two additional preprints are at

> https://arxiv.org/abs/1907.04869 and

> https://arxiv.org/abs/1910.06306

...

> One other note from the talk: it takes an expert modeler about 8 months

> to a year to model a single lens system. Shajib and others are trying

> to automate the modeling,

You obviously do not mean that they do it by pencil and paper at this

moment. So why is modeling labor-intensive? Isn't it just putting a

point mass in front of the observed object, which only requires fitting

the precise position and distance of the point mass using the observed

image? (And if so, is the actual imaging with the point mass in some

place the difficult part?) Or is the problem that the lensing object

may be more extended than a point mass? (Or is it something worse!?)

--

Jos

[[Mod. note -- In these cases the lensing object is a galaxy (definitely

not a point mass!). For precise results a nontrivial model of the

galaxy's mass distribution (here parameterized by the (anisotropic)

velocity dispersion of stars in the lensing galaxy's central region)

is needed, which is the tricky (& hence labor-intensive) part.

-- jt]]

Oct 16, 2019, 1:57:48 PM


In article <5da63082$0$10257$e4fe...@news.xs4all.nl>, Jos Bergervoet
<jos.ber...@xs4all.nl> writes:

> You obviously do not mean that they do it by pencil and paper at this
> moment.

Right; it's done on computers these days. :-)

> So why is modeling labor-intensive? Isn't it just putting a
> point mass in front of the observed object, which only requires fitting
> the precise position and distance of the point mass using the observed
> image?

A point mass could be done with pencil and paper.

> (And if so, is the actual imaging with the point mass in some
> place the difficult part?) Or is the problem that the lensing object
> may be more extended than a point mass? (Or is it something worse!?)

> [[Mod. note -- In these cases the lensing object is a galaxy (definitely
> not a point mass!). For precise results a nontrivial model of the
> galaxy's mass distribution (here parameterized by the (anisotropic)
> velocity dispersion of stars in the lensing galaxy's central region)
> is needed, which is the tricky (& hence labor-intensive) part.
> -- jt]]

Right.

In addition to the time delay, which depends on the potential, one fits

the image positions, which depend on the derivative of the potential,

and can also choose to fit the brightness of the images, which depends

on the second derivative of the potential. (Since the brightness can be

affected by microlensing, one might choose not to fit for it, or to

include a model of microlensing as well.) If the source is resolved,

then the brightness distribution of the source also plays a role.
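The "pencil and paper" point-mass case really is just a few lines. A sketch of the textbook point-lens solution, in units of the Einstein radius (this is the idealized case, not any real lens system):

```python
# Point-mass lens: in units of the Einstein radius the lens equation
# is beta = theta - 1/theta, a quadratic with two image solutions.
# The magnification follows from the second derivative of the
# potential: mu = 1 / (1 - theta**-4).  Real lens galaxies need far
# more parameters, which is what makes the modelling hard.

def point_lens_images(beta):
    """Image positions (in Einstein radii) for source offset beta."""
    disc = (beta ** 2 + 4.0) ** 0.5
    return (beta + disc) / 2.0, (beta - disc) / 2.0

def magnification(theta):
    """Signed magnification of an image at position theta."""
    return 1.0 / (1.0 - theta ** -4)

beta = 0.5
t_plus, t_minus = point_lens_images(beta)
for t in (t_plus, t_minus):
    # each image must satisfy the lens equation
    assert abs((t - 1.0 / t) - beta) < 1e-12
    print(f"theta = {t:+.4f}, magnification = {magnification(t):+.4f}")
```

The two signed magnifications always sum to exactly 1 for a point lens; the negative sign on the inner image marks its flipped parity. None of this structure survives once the lens is an extended galaxy with external perturbers, which is why the real fits take months.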

Also, one can (and, these days, probably must) relax the assumption that

there is only the lens which affects the light paths. While in most

cases a single-plane lens is a good enough approximation, the assumption

that the background metric is FLRW might not be. In particular, if the

path is underdense (apart from the part in the lens plane, which of

course is very overdense), then the distance as a function of redshift

is not that which is given by the standard Friedmann model. At this

level of precision, it's probably not enough to simply parameterize

this, but rather one needs some model of the mass distribution near the

beams.

The devil is in the details.

Think of the Hubble constant as determined by the traditional methods

(magnitude--redshift relation). In theory, one needs ONE object whose

redshift (this is actually quite easy) and distance are known in order

to compute it. In practice, of course, there is much more involved

(mostly details of the calibration of the distance ladder), though this

is still relatively straightforward compared to a detailed lens model.
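A minimal sketch of that "one object" idea: at low redshift the Hubble law is linear, v = cz = H0 * D, so a single redshift-distance pair gives H0 directly (the numbers below are made up for illustration):

```python
# The traditional method in its simplest form: at low redshift,
# cz = H0 * D, so one object with a known redshift and a known
# distance yields H0.  Illustrative numbers only; in practice the
# distance comes from the painstakingly calibrated distance ladder.

C_KM_S = 299792.458  # speed of light in km/s

def h0_from_one_object(z, distance_mpc):
    """H0 in km/s/Mpc from the low-redshift Hubble law cz = H0 D."""
    return C_KM_S * z / distance_mpc

# e.g. a hypothetical galaxy at z = 0.01 placed at 42 Mpc:
print(f"{h0_from_one_object(0.01, 42.0):.1f} km/s/Mpc")
```

All of the difficulty hides in `distance_mpc`: calibrating it is the whole distance-ladder enterprise, yet it is still simpler than a full lens model.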



Oct 19, 2019, 4:39:30 AM


In article <qo67pc$csc$1...@gioia.aioe.org>,


"Phillip Helbig (undress to reply)" <hel...@asclothestro.multivax.de> writes:

> At this level of precision, it's probably not enough to simply

> parameterize this, but rather one needs some model of the mass

> distribution near the beams.

That's exactly right (at least to the extent I understood Shajib's

talk). In particular, one has to take into account the statistical

distribution of mass all along and near the light path and also (as

others wrote) the mass distribution of the lensing galaxy

itself. It's even worse than that in systems that have multiple

galaxies contributing to the lensing. Not only do their individual

mass distributions matter, their relative distances along the line of

sight are uncertain and must be modeled.

Presumably all that can be automated -- at the cost of many extra cpu

cycles -- but it hasn't been done yet.

Oct 19, 2019, 8:52:44 PM


In article <qodakb$mhe$1...@dont-email.me>, wil...@cfa.harvard.edu (Steve

distributed clumpily (apart from the gravitational lens itself, which

is, essentially by definition, a big clump), also influence the

luminosity distance, which of course can be used to determine not just

the Hubble constant but also the other cosmological parameters.

However, it's not as big a worry, for several reasons:

As far as the Hubble constant goes, the distances are, cosmologically

speaking, relatively small, whereas the effects of such small-scale

inhomogeneities increase with redshift.

Whether at low redshift for the Hubble constant or at high redshift for

the other parameters, usually several objects, over a range of

redshifts, are used. This has two advantages. One is that these

density fluctuations might (for similar redshifts) average out in some

sense. The other is that the degeneracy is broken because several

redshifts are involved. (If the inhomogeneity is an additional

parameter which can also affect the distance as calculated from

redshift, with just one object at one redshift one can't tell what

effect it has, but since the dependence on redshift is different for the

inhomogeneities, the Hubble constant, and the other parameters, then

some of the degeneracy is broken.)

At the level of precision required today, simply describing the effect

of small-scale inhomogeneities with one parameter is not good enough.

It does allow one to get an idea of the possible size of the effect,

though. To improve, there are two approaches. One is to try to measure

the mass along the line of sight, e.g. by weak lensing. Another is to

have some model of structure formation and calculate what it must be, at

least in a statistical sense.

There is a huge literature on this topic, though it is usually not

mentioned in more-popular presentations.

I even wrote a couple of papers myself on this topic:

http://www.astro.multivax.de:8000/helbig/research/publications/info/etas=

nia.html

http://www.astro.multivax.de:8000/helbig/research/publications/info/etas=

nia2.html

Willner) writes:

> In article <qo67pc$csc$1...@gioia.aioe.org>,

> "Phillip Helbig (undress to reply)" <hel...@asclothestro.multivax.de>

> writes:

> > At this level of precision, it's probably not enough to simply

> > parameterize this, but rather one needs some model of the mass

> > distribution near the beams.

>

> That's exactly right (at least to the extent I understood Shajib's

> talk). In particular, one has to take into account the statistical

> distribution of mass all along and near the light path and also (as

> others wrote) the mass distribution of the lensing galaxy

> itself.

These effects, i.e. that the mass in the universe is at least partially
distributed clumpily (apart from the gravitational lens itself, which

is, essentially by definition, a big clump), also influence the

luminosity distance, which of course can be used to determine not just

the Hubble constant but also the other cosmological parameters.

However, it's not as big a worry, for several reasons:

As far as the Hubble constant goes, the distances are, cosmologically

speaking, relatively small, whereas the effects of such small-scale

inhomogeneities increase with redshift.

Whether at low redshift for the Hubble constant or at high redshift for

the other parameters, usually several objects, over a range of

redshifts, are used. This has two advantages. One is that these

density fluctuations might (for similar redshifts) average out in some

sense. The other is that the degeneracy is broken because several

redshifts are involved. (If the inhomogeneity is an additional

parameter which can also affect the distance as calculated from

redshift, with just one object at one redshift one can't tell what

effect it has, but since the dependence on redshift is different for the

inhomogeneities, the Hubble constant, and the other parameters, then

some of the degeneracy is broken.)

At the level of precision required today, simply describing the effect

of small-scale inhomogeneities with one parameter is not good enough.

It does allow one to get an idea of the possible size of the effect,

though. To improve, there are two approaches. One is to try to measure

the mass along the line of sight, e.g. by weak lensing. Another is to

have some model of structure formation and calculate what it must be, at

least in a statistical sense.

There is a huge literature on this topic, though it is usually not

mentioned in more-popular presentations.

I even wrote a couple of papers myself on this topic:

http://www.astro.multivax.de:8000/helbig/research/publications/info/etasnia.html

http://www.astro.multivax.de:8000/helbig/research/publications/info/etasnia2.html

Nov 2, 2019, 4:50:26 AM

In article <qo590f$c75$1...@dont-email.me>, I wrote:

> The upshot is that the discrepancy between the local and the CMB

> measurements of H_0 is between 4 and 5.7 sigma, depending on how

> conservative one wants to be about assumptions.

We had another colloquium on the subject yesterday. Video at
https://www.youtube.com/watch?v=K1496gv8KCo

The points I took away are: 1. both the local ("direct") measurements

and the distant ("indirect") measurements are made by two

_independent_ methods, which agree in each case. That is, the two

direct methods (SNe, lensing) agree with each other, and the two

indirect methods (CMB, something complicated) agree with each other,

but the direct and indirect measurements disagree.

2. contrary to what I wrote earlier, even a non-physical change of

dark energy with time (say an abrupt increase at some fine-tuned

epoch) cannot fix the disagreement.

3. while there have been several suggestions for new physics to fix

the problem, none of them so far seems to work without disagreeing

with other data.

What fun!
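For a sense of scale, the competing values of H_0 map onto noticeably different ages for the universe, which is the shift the original article describes. Here is a minimal sketch of the closed-form age of a flat Lambda-CDM universe; the parameter choices below (H_0 = 67.4 with Omega_m = 0.315 for the CMB-like case, H_0 = 82.4 with Omega_m = 0.3 for the high lensing-based value) are illustrative assumptions on my part, not numbers taken from the talk:

```python
import math

def age_gyr(h0, omega_m):
    """Age of a flat Lambda-CDM universe in Gyr, given H0 in km/s/Mpc.

    Uses the closed-form result for flat matter + cosmological constant:
    t0 = (2 / (3 H0 sqrt(OL))) * asinh(sqrt(OL / Om)).
    """
    omega_l = 1.0 - omega_m       # flatness assumption
    hubble_time = 977.8 / h0      # 1/H0 converted from km/s/Mpc to Gyr
    return (2.0 / 3.0) * hubble_time / math.sqrt(omega_l) \
        * math.asinh(math.sqrt(omega_l / omega_m))

print(round(age_gyr(67.4, 0.315), 1))  # ~13.8 (the familiar CMB-based age)
print(round(age_gyr(82.4, 0.30), 1))   # ~11.4 (a high, lensing-like H0)
```

The point of the sketch is only that the age scales essentially as 1/H_0, so a roughly 20 per cent shift in H_0 takes 13.8 billion years down to about 11.5.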

Nov 2, 2019, 12:59:04 PM

On 19/11/02 9:50 AM, Steve Willner wrote:

> 1. both the local ("direct") measurements
> and the distant ("indirect") measurements are made by two
> _independent_ methods, which agree in each case. That is, the two
> direct methods (SNe, lensing) agree with each other, and the two
> indirect methods (CMB, something complicated) agree with each other,
> but the direct and indirect measurements disagree.
>
> 2. contrary to what I wrote earlier, even a non-physical change of
> dark energy with time (say an abrupt increase at some fine-tuned
> epoch) cannot fix the disagreement.

Indeed someone asks this question at http://youtu.be/K1496gv8KCo?t=3785
(at about z=10^(10) in the video, I believe..) and the answer given is
that it cannot be an abrupt change, "it must be smooth". The presenter's
answer seems to invoke (partly) other observations that rule it out. (So
a change in dark energy might fix it but create new disagreements, which
would bring it into category 3, below. Or would the discrepancy already
be in matching the data actually discussed here?)

> 3. while there have been several suggestions for new physics to fix
> the problem, none of them so far seems to work without disagreeing
> with other data.
>
> What fun!

Yes! So why are only 20 people attending?!

--
Jos

Nov 4, 2019, 2:58:31 AM

On 02 Nov 2019, wil...@cfa.harvard.edu (Steve Willner) wrote:

>3. while there have been several suggestions for new physics to fix
>the problem, none of them so far seems to work without disagreeing
>with other data. ... What fun!

How about a migrating space-time curvature? Can that connect the two

places? It would have the effect of making distant places dimmer.

Try it in a static universe also, its effects simulate FRW.

[[Mod. note -- I think this is on the edge of our newsgroup ban on

"excessively speculative" submissions, but clearly *something* odd

is going on.

I wonder if it could be "just" non-uniformity in the Hubble flow

in the region of the "local" measurements?

-- jt]]

Nov 11, 2019, 5:36:21 PM

In article <5dbd814d$0$10260$e4fe...@news.xs4all.nl>,

Attendance was far higher than that. The video shows only one side

of the main floor of the room, and the other side is far more popular

(perhaps because it has a better view of the screen). There's a

balcony as well, and quite a few people leave at the end of the talk

and before the question period. I didn't count, but I think the

attendance was close to 100. Anyway it was about the normal number

for a colloquium here.

The colloquium list for the fall is at

https://www.cfa.harvard.edu/colloquia

if you want to see what other topics have been covered.

To the question in another message, I don't see why some local

perturbation -- presumably abnormally low matter density around our

location -- wouldn't solve the problem in principle, but if this were

a viable explanation, I expect the speaker would have mentioned it.

It's not as though no one has thought about the problem. The

difficulty is probably the magnitude of the effect. I don't work in

this area, though, so my opinion is not worth much.

--

Help keep our newsgroup healthy; please don't feed the trolls.

Steve Willner Phone 617-495-7123 swil...@cfa.harvard.edu

Cambridge, MA 02138 USA

[[Mod. note -- I apologise for the delay in posting this article,

which was submitted on Fri, 8 Nov 2019 21:15:25 +0000.

-- jt]]


Nov 12, 2019, 3:39:06 PM

In article <qq4ltd$31f$1...@dont-email.me>, Steve Willner
<wil...@cfa.harvard.edu> writes:

> To the question in another message, I don't see why some local
> perturbation -- presumably abnormally low matter density around our
> location -- wouldn't solve the problem in principle, but if this were
> a viable explanation, I expect the speaker would have mentioned it.
> It's not as though no one has thought about the problem. The
> difficulty is probably the magnitude of the effect. I don't work in
> this area, though, so my opinion is not worth much.

I'm sure that someone must have looked at it, but is the measured Hubble
constant the same in all directions on the sky? (I remember Sandage
saying that even Hubble had found that it was, but I mean today, with
much better data, where small effects are noticeable.) If it is, then
such a density variation could be an explanation (assuming that it would
otherwise work) only if we "just happened" to be sitting at the centre
of such a local bubble.

Of course, some of us remember when the debate was not between 67 and
72, but between 50 and 100, with occasional suggestions of 42 (really)
or even 30. And both the "high camp" and "low camp" claimed
uncertainties of about 10 per cent. That wasn't a debate over whether
one used "local" or "large-scale" methods to measure it; rather, the
difference depended on who was doing the measuring. Nevertheless, it is
conceivable that there is some unknown systematic uncertainty* in one of
the measurements.

---
* For some, "unknown systematic uncertainty" is a tautology. Others,
however, include systematic uncertainties as part of the uncertainty
budget. (Some people use "error" instead of "uncertainty". The latter
is, I think, more correct, though in this case perhaps some unknown
ERROR is the culprit.)
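The footnote's distinction is the usual one: *known* systematics are folded into the quoted uncertainty budget, conventionally in quadrature with the statistical part, while an unknown systematic is by definition not in the budget at all. A minimal sketch of the quadrature convention (the numbers are made up purely for illustration):

```python
import math

def total_uncertainty(stat, sys):
    """Combine statistical and systematic uncertainties in quadrature,
    treating the two contributions as independent."""
    return math.hypot(stat, sys)

# A known systematic as large as the statistical error inflates the
# quoted uncertainty by only ~41%:
print(round(total_uncertainty(1.0, 1.0), 2))  # 1.41
```

The implication for the H_0 debate is that an unrecognised systematic comparable to the quoted uncertainty would barely change the error bar if it were known, yet is enough to shift the central value by a full sigma.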

Nov 14, 2019, 3:41:18 PM

On 11/2/19 3:50:25 AM, Steve Willner wrote:
> In article <qo590f$c75$1...@dont-email.me>, I wrote:
>> The upshot is that the discrepancy between the local and the CMB
>> measurements of H_0 is between 4 and 5.7 sigma, depending on how
>> conservative one wants to be about assumptions.
>
> 3. while there have been several suggestions for new physics to fix
> the problem, none of them so far seems to work without disagreeing
> with other data.
>
> What fun!

In the question and answer period, one person asked if the triple
point of hydrogen may provide insight into the problem of the
discrepancy between the local and the CMB measurements of H_0. (The
triple point of hydrogen is at 13.81 K and 7.042 kPa.) Silvia Galli
didn't provide an answer other than that many things are possible.
The questioning person's name was not given.

Can anyone provide some insight into what the triple point of
hydrogen has to do with the discrepancy between the local and the
CMB measurements of H_0?

Richard Saam

Jun 5, 2020, 2:41:30 PM

>On 11/2/19 3:50:25 AM, Steve Willner wrote:

> In article <qo590f$c75$1...@dont-email.me>, I wrote:

>> The upshot is that the discrepancy between the local and the CMB

>> measurements of H_0 is between 4 and 5.7 sigma, depending on how

>> conservative one wants to be about assumptions.

>

>

> 3. while there have been several suggestion for new physics to fix

> the problem, none of them so far seems to work without disagreeing

> with other data.

>

> What fun!

>

The Ho data is tightening:
**

Testing Low-Redshift Cosmic Acceleration with Large-Scale Structure

https://arxiv.org/abs/2001.11044

Seshadri Nadathur, Will J. Percival,

Florian Beutler, and Hans A. Winther

Phys. Rev. Lett. 124, 221301 - Published 2 June 2020

we measure the Hubble constant to be

Ho = 72.3 +/- 1.9 km/sec Mpc from BAO + voids

at z<2

and

Ho = 69.0 +/- 1.2 km/sec Mpc from BAO

when adding Lyman alpha at BAO at z=2.34

**

Richard D Saam

Jun 6, 2020, 7:33:23 PM

In article <9rednSTb2_1FxEfD...@giganews.com>, "Richard D.
Saam" <rds...@att.net> writes:

> The Ho data is tightening:
>
> **
> Testing Low-Redshift Cosmic Acceleration with Large-Scale Structure
> https://arxiv.org/abs/2001.11044
> Seshadri Nadathur, Will J. Percival,
> Florian Beutler, and Hans A. Winther
> Phys. Rev. Lett. 124, 221301 - Published 2 June 2020
> we measure the Hubble constant to be
> Ho = 72.3 +/- 1.9 km/sec Mpc from BAO + voids
> at z<2
>
> and
>
> Ho = 69.0 +/- 1.2 km/sec Mpc from BAO
> when adding Lyman alpha at BAO at z=2.34
> **

I guess it depends on what you mean by "tightening". If one
measurement is X with uncertainty A, and another is Z with uncertainty
C, and they are 5 sigma apart, and someone then measures, say, Y with
uncertainty B, which is between the other two and compatible with
both within 3 sigma, that doesn't mean that Y is correct. Of course,
if someone does measure that, they will probably publish it, while
someone measuring something, say, 5 sigma below the lowest measurement,
or above the highest, might be less likely to do so.

It could be that Y is close to the true value, but perhaps all are
wrong, or X is closer, or Z. The problem can be resolved only if
one understands why the measurements differ by more than a reasonable
amount.
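The "X, Y, Z" argument is easy to quantify. A minimal sketch, using the standard simplification that the two measurements are independent and Gaussian:

```python
import math

def tension_sigma(x1, s1, x2, s2):
    """Separation of two independent Gaussian measurements, in sigma:
    |x1 - x2| / sqrt(s1^2 + s2^2)."""
    return abs(x1 - x2) / math.hypot(s1, s2)

# The two BAO-based values quoted earlier in the thread:
print(round(tension_sigma(72.3, 1.9, 69.0, 1.2), 2))  # ~1.47 sigma
```

By this measure the two new values are consistent with each other at about 1.5 sigma, so an intermediate measurement "tightening" the range says little about which end of the 67-73 debate is correct.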
