
Luminosity Fitting Friedmann equation and SNLS data


Nicolaas Vroom

Mar 15, 2012, 2:18:31 PM
One of the most interesting observations is what is called the SNLS data set.
This is a table of magnitudes of supernovae versus z values.
Using that data it is in principle possible to calculate the parameters
of the Friedmann equation.
That is what I have done in the last three months.
The results are at:
http://users.telenet.be/nicvroom/friedmann's equation.htm
One of the major difficulties is what is called the Flux-Luminosity
relation.
This relation in its simplest form looks like: F = L/(4*pi*d*d)
Using K = log10(F) and m = -2.5*K it is possible to make the link between
magnitude and distance. However, different relations are also possible.
In the document 7 different ones are discussed.
F/L relation #7 gives the smallest error between theory and observation.
The question is whether that one is also the best.
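For concreteness, here is a minimal sketch of the simplest chain described
above (flux from the inverse-square law, magnitude from the flux). It is not
the code behind the web page; the zero-point ZP and the function name are
illustrative only, and the seven F/L variants discussed in the document are
not reproduced.

```python
# Minimal sketch of the simplest flux-luminosity-magnitude chain:
# F = L / (4*pi*d^2), m = -2.5*log10(F) + constant.
# The zero-point ZP is arbitrary here; real SN analyses calibrate it.
import numpy as np

def apparent_magnitude(L, d, ZP=0.0):
    """Apparent magnitude for luminosity L at distance d (consistent units)."""
    F = L / (4.0 * np.pi * d**2)      # flux, simplest F/L relation
    return -2.5 * np.log10(F) + ZP    # magnitude from flux

# Example: the same source at distance 10 is ~5 magnitudes fainter than at 1.
print(apparent_magnitude(1.0, 10.0) - apparent_magnitude(1.0, 1.0))
```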

The general conclusion is that the parameters of the Friedmann equation
are very difficult to calculate.
One difficult one is the parameter C, which determines how far back
you can observe in the past. In order to observe the Big Bang, C has
to be large.
Also the parameter k, which can only have 3 different values (-1, 0, +1),
is difficult to calculate and depends very much on the F/L relation
selected.
All in all this is a very interesting project.

Nicolaas Vroom

Nicolaas Vroom

Mar 16, 2012, 3:13:45 AM
On 15 Mar, 19:18, Nicolaas Vroom <nicolaas.vr...@pandora.be> wrote:
> The results are at: http://users.telenet.be/nicvroom/friedmann's equation.htm

Please try this link:
http://users.telenet.be/nicvroom/friedmann's%20equation.htm

Eric Flesch

Mar 17, 2012, 2:05:59 PM
On Thu, 15 Mar 12, Nicolaas Vroom wrote:
>One of the most interesting observations are what is called SNLS data
>This is a table of values of the magnitude of Super Novae versus z
> ... All in all this is a very interesting project.

Watch out for adjusted data. I did some investigation of SNe data
about 10 years ago, but found that the presented light curves were all
*after* redshift had been removed, along with other adjustments such
as K-corrections. It turned out the raw data simply wasn't
available. I think today the "gold" datasets etc. do present the raw
data, but I'm not absolutely certain of it -- not looking at them
anymore -- so I advise you to be certain of the provenance of all your
data so that you interpret them correctly.

Eric

Phillip Helbig---undress to reply

Mar 20, 2012, 9:41:52 AM
In article <mt2.0-18769...@hydra.herts.ac.uk>, Nicolaas Vroom
<nicolaa...@pandora.be> writes:

> One of the most interesting observations are what is called SNLS data
> This is a table of values of the magnitude of Super Novae versus z
> values.
> Using that data it is in principle possible to calculate the
> parameters
> of the Friedmann equation.

Indeed, and this has been known for almost 100 years. What has changed
is that good data are not available.

> That is what I have done in the last 3 month.
> The results are at:
> http://users.telenet.be/nicvroom/friedmann's equation.htm
> One of the major difficulties is what is called the Flux Luminosity
> relation.
> This relation in its simplest form looks like: F=L/(4*pi*d*d)
> Using K= 10 Log (F) and m=-2.5*K its is possible to make the link
> between
> magnitude and distance.

What is difficult about it?

> However also different relations are possible.
> In the document 7 different one's are discussed.
> F/L relation #7 gives the smallest error between theory and
> observation.
> The question is that one also the best.

It is possible to get a GOOD fit (i.e. what one would expect given the
error bars) with the standard Friedmann-Lemaître equation, with Omega of
about 0.27 and lambda of about 0.73. Many other independent tests also
indicate these values and there is no reason to doubt them. Of course,
with a more complicated relation one could fit the data better but,
given the error bars, one actually does not want the fit to be TOO good.
Also, any other relation must be physically motivated.
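For reference, here is a minimal sketch (not code from either poster, and
with an assumed H0 of 70 km/s/Mpc) of the standard prediction referred to:
the distance modulus versus z for a spatially flat model with Omega of about
0.27 and lambda of about 0.73, using the usual luminosity-distance integral.

```python
# Sketch of the standard magnitude-redshift prediction for a flat
# FLRW model with Omega_m ~ 0.27 and Omega_Lambda ~ 0.73.
# H0 is an assumed value; all names are illustrative.
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458          # speed of light [km/s]
H0 = 70.0                    # Hubble constant [km/s/Mpc], assumed

def E(z, om, ol):
    """Dimensionless Hubble rate H(z)/H0 for the flat case (om + ol = 1)."""
    return np.sqrt(om * (1 + z)**3 + ol)

def lum_dist_mpc(z, om=0.27, ol=0.73):
    """Luminosity distance in Mpc for a spatially flat model."""
    dc, _ = quad(lambda zp: 1.0 / E(zp, om, ol), 0.0, z)   # comoving integral
    return (1 + z) * (C_KM_S / H0) * dc

def distance_modulus(z, om=0.27, ol=0.73):
    """mu = m - M = 5*log10(d_L / 10 pc)."""
    return 5.0 * np.log10(lum_dist_mpc(z, om, ol) * 1e6 / 10.0)

print(distance_modulus(0.5))   # roughly 42.3 mag for these parameters
```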

As long as the standard cosmology gives a good fit with parameter values
which do not conflict with other observations there is no reason to
doubt that traditional cosmology describes our universe.

Phillip Helbig---undress to reply

Mar 21, 2012, 3:23:15 AM
In article <mt2.0-24203...@hydra.herts.ac.uk>, Phillip
Helbig---undress to reply <hel...@astro.multiCLOTHESvax.de> writes:

> In article <mt2.0-18769...@hydra.herts.ac.uk>, Nicolaas Vroom
> <nicolaa...@pandora.be> writes:
>
> > One of the most interesting observations are what is called SNLS data
> > This is a table of values of the magnitude of Super Novae versus z
> > values.
> > Using that data it is in principle possible to calculate the
> > parameters
> > of the Friedmann equation.
>
> Indeed, and this has been known for almost 100 years. What has changed
> is that good data are not available.

That should read "good data are NOW available". :-|

NotI

Mar 23, 2012, 2:23:54 PM
On Wednesday, 21 March 2012 07:23:15 UTC, Phillip Helbig---undress to reply wrote:
> In article <mt2.0-24203...@hydra.herts.ac.uk>, Phillip
> Helbig---undress to reply <hel...@astro.multiCLOTHESvax.de> writes:
>
> > In article <mt2.0-18769...@hydra.herts.ac.uk>, Nicolaas Vroom
> > <nicolaa...@pandora.be> writes:
> >
> > > One of the most interesting observations are what is called SNLS data
> > > This is a table of values of the magnitude of Super Novae versus z
> > > values.
> > > Using that data it is in principle possible to calculate the
> > > parameters
> > > of the Friedmann equation.

>
> That should read "good data are NOW available". :-|

Yes, but unfortunately I don't think the latest release of SNLS data
qualifies. I got good fits, and about the same cosmological parameters
you cite (Omega=0.29, as has everyone else) using the Union compilation
(Kowalski et al., 2008), which contains supernova data from different
sources, prepared in as uniform a manner as possible, from which 307
Type Ia supernovae pass usability tests. But when I tried to bolt in
the latest SNLS release, I found that many data points are way off the
curve, and that I can't discard them as outliers because they are part
of a broad spread and because the given error margins are tiny ---
completely unrealistic. The papers I read contained a huge amount of
waffle, mostly designed to hide what one actually wanted to know, but
if I understood any of it and recall correctly, I think they got a
value Omega ~= 0.18 --- way off that given by previous studies. Quite
frankly, right now, I think the leaders of this project should be
kicking the butt of the people who prepared the data and did the
analysis into another galaxy, rather than allow any of this into the
public domain.

all the best

CF

Nicolaas Vroom

Mar 26, 2012, 2:31:59 PM
On 20 Mar, 15:41, Phillip Helbig---undress to reply
<hel...@astro.multiCLOTHESvax.de> wrote:
> In article <mt2.0-18769-1331835...@hydra.herts.ac.uk>, Nicolaas Vroom
>
> <nicolaas.vr...@pandora.be> writes:
> > SNIP
> Indeed, and this has been known for almost 100 years.
> What has changed is that good data are now available.

I expect what you mean is that much more accurate data is NOW
available.

[Mod. note: please read the whole thread before replying -- mjh]

> > The results are at:
> > http://users.telenet.be/nicvroom/friedmann's%20equation.htm
> > One of the major difficulties is what is called the Flux Luminosity
> > relation.
> > SNIP
>
> What is difficult about it?
>
One difficulty is to find the correct Flux-Luminosity relation.
The other difficulty is to calculate the optimum values of the
parameters C, Lambda, k (and age) which give the minimum error between
theory (Friedmann equation) and observation (SNLS data).

>
> It is possible to get a GOOD fit (i.e. what one would expect giving the
> error bars) with the standard Friedmann-Lemaître equation, with Omega of
> about 0.27 and lambda of about 0.73.

Following your suggestion I have recalculated error values for lambda
between 0 and 1.1 with C=60 and k=0 for all 7 F/L relations.

The major differences are with F/L 1 (1/d^2) and F/L 3 (1/(d^2*(1+z))).
In both cases I found new minimum error values for roughly Lambda=0.9.
However, both show the same behaviour starting from roughly Lambda=0.7.
That is: the error value is almost flat, implying a large error margin.
For F/L 4 I found a slightly larger Lambda value.

What this means is that the most probable value for Lambda is much
smaller than 0.73.

A different issue is the calculation of omega.
According to Ray d'Inverno:
rho_c = 3*H0^2/(8*pi), with H0 = H at t=0 and Lambda=0.
The parameter omega is not mentioned.
According to http://en.wikipedia.org/wiki/Friedmann_equations:
rho_c = 3*H^2/(8*pi*G)
omega = rho/rho_c
Does that mean that omega = H^2/H0^2 ?
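For concreteness, a small worked example of the definitions quoted from the
Wikipedia page above, with an assumed H0 of 70 km/s/Mpc; the matter density
used for the ratio at the end is purely illustrative.

```python
# Worked example of rho_c = 3*H^2 / (8*pi*G) and Omega = rho / rho_c,
# in SI units, for an assumed H0 of 70 km/s/Mpc. Illustrative only.
import numpy as np

G = 6.674e-11                    # gravitational constant [m^3 kg^-1 s^-2]
H0 = 70.0 * 1000 / 3.0857e22     # 70 km/s/Mpc converted to 1/s

rho_c = 3.0 * H0**2 / (8.0 * np.pi * G)
print(rho_c)                     # ~ 9.2e-27 kg/m^3

rho_matter = 2.5e-27             # hypothetical matter density [kg/m^3]
print(rho_matter / rho_c)        # ~ 0.27
```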

How important is the parameter omega ?
IMO omega is not a parameter of the Friedmann equation

> As long as the standard cosmology gives a good fit with parameter values
> which do not conflict with other observations there is no reason to
> doubt that traditional cosmology describes our universe.

You mention that Lambda = 0.73
My investigations show that Lambda = 0.1 (Roughly)
with a much smaller error value.

Nicolaas Vroom

Phillip Helbig---undress to reply

Mar 28, 2012, 4:42:23 AM
In article <mt2.0-14006...@hydra.herts.ac.uk>, Nicolaas Vroom
<nicolaa...@pandora.be> writes:

> One difficulty is to find the correct Flux Luminosity relation.
> The other difficulty is to calculate the optimum values of the
> parameters C, Lambda k (and age) which gives the minimum error between
> theory (Friedmann equation) and observation (SNLS data)

It sounds like you allow arbitrary relations and try to minimize the
error. What you should do is take the physically motivated equations
and fit for the parameters. This is standard chi-squared fitting. If
you don't get an acceptable fit, then that indicates a problem
somewhere, but that is not the case.
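As a rough illustration of the "standard chi-squared fitting" meant here,
the sketch below fits (Omega_m, Omega_Lambda) to a table of (z, distance
modulus, error) by minimizing chi-squared. The data arrays are placeholders
(not SNLS numbers), the helper repeats the luminosity-distance integral from
the earlier sketch so the block is self-contained, and a near-flat
approximation is used for the curvature term.

```python
# Sketch of standard chi-squared fitting of (Omega_m, Omega_Lambda)
# to distance-modulus data. Placeholder data; assumed H0 = 70 km/s/Mpc.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

C_KM_S, H0 = 299792.458, 70.0

def mu_model(z, om, ol):
    ok = 1.0 - om - ol                       # curvature contribution
    E = lambda zp: np.sqrt(max(om*(1+zp)**3 + ok*(1+zp)**2 + ol, 1e-12))
    dc, _ = quad(lambda zp: 1.0 / E(zp), 0.0, z)
    dl = (1 + z) * (C_KM_S / H0) * dc        # near-flat approximation
    return 5.0 * np.log10(dl * 1e6 / 10.0)

def chi2(params, z, mu, sig):
    om, ol = params
    model = np.array([mu_model(zi, om, ol) for zi in z])
    return np.sum(((mu - model) / sig)**2)

# Placeholder data -- substitute the real table of (z, mu, sigma) here.
z   = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
mu  = np.array([38.3, 41.0, 42.3, 43.1, 43.8])
sig = np.array([0.15, 0.15, 0.15, 0.15, 0.15])

res = minimize(chi2, x0=[0.3, 0.7], args=(z, mu, sig), method="Nelder-Mead")
print(res.x)   # best-fit (Omega_m, Omega_Lambda)
```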

> The major difference are with F/L 1 : (1/d^2) and F/L 3 (1/d^2*(1+z).

You can't just try different forms with no physical motivation.

> rhoc = 3*H0^2/8pi H0=H at t=0 with Lambda=0
> The parameter omega is not mentioned.
> Accordngly http://en.wikipedia.org/wiki/Friedmann_equations
> rhoc = 3*H^2/8pi*G
> omega = rho/rhoc

Right (individually).

> does that mean that omega= H^2/H0^2 ?

No.

In general, the cosmological parameters vary with time. An index "0"
means the current value.

> How important is the parameter omega ?

It is one of the main cosmological parameters.

> IMO omega is not a parameter of the Friedmann equation

Of course it is.

> > As long as the standard cosmology gives a good fit with parameter values
> > which do not conflict with other observations there is no reason to
> > doubt that traditional cosmology describes our universe.
>
> You mention that Lambda = 0.73
> My investigations show that Lambda = 0.1 (Roughly)
> with a much smaller error value.

When literally hundreds of papers are converging on Lambda = 0.73, and
several doing so with the supernovae data, you first need to understand
what they are doing and what you are doing (wrong).

Nicolaas Vroom

Mar 28, 2012, 4:43:07 AM
On 23 Mar, 20:23, NotI <n...@charlesfrancis.wanadoo.co.uk> wrote:
> On Wednesday, 21 March 2012 07:23:15 UTC, Phillip Helbig---undress to reply wrote:
>
> > That should read "good data are NOW available".
>
> Yes, but unfortunately I don't think the latest release of SNLS data qualifies.
> I got good fits, and about the same cosmological parameters you cite
> (Omega=0.29, as has everyone else) using the Union compilation (Kowalski et al., 2008),
> which contains supernova data from different sources, prepared in as uniform
> a manner as possible, from which 307 Type 1A supernovae pass usability tests.

I expect the document you mean is this one:
http://iopscience.iop.org/0004-637X/686/2/749/pdf/0004-637X_686_2_749.pdf
"Improved Cosmological Constraints from New, Old, and Combined
Supernova Data Sets" By M Kowalski et al.
On page 758 it is written:
"The flux of each supernova data point is then rescaled according to the
ratio of luminosity distances obtained from the fitted parameters and
arbitrarily chosen dummy parameters (in this case Omega M = 0.25,
Omega Lambda = 0.75)."

In a previous posting Phillip Helbig wrote:
> It is possible to get a GOOD fit (i.e. what one would expect giving the
> error bars) with the standard Friedmann-Lemaître equation, with Omega of
> about 0.27 and lambda of about 0.73.
That means he meant Omega matter = 0.27 and Omega Lambda = 0.73.
Those parameters are quite different from the parameter Lambda
included in the Friedmann equation (together with C and k).
For a definition of omega see:
http://www.jb.man.ac.uk/~jpl/cosmo/friedman.html

> But when I tried to bolt in the latest SNLS release, I found that many
> data points are way off the curve, and that I can't discard them

Figure 10 on page 766 shows a binned Hubble diagram with a list of
(approx. 10) articles on which the diagram is based.
I expect that this same data is also included in the SNLS data.
If true, then why the above-mentioned discrepancy?

> all the best
>
> CF

Nicolaas Vroom

Nicolaas Vroom

Mar 28, 2012, 5:04:21 PM
On 28 Mar, 10:42, Phillip Helbig---undress to reply
<hel...@astro.multiCLOTHESvax.de> wrote:
>
> When literally hundreds of papers are converging on Lambda = 0.73, and
> several doing so with the supernovae data, you first need to understand
> what they are doing and what you are doing (wrong).

The problem is that I expect you mean omega(Lambda) = 0.73 and
omega(M) = 0.27.

I have already mentioned this in a different posting.

As a result of this miscommunication I have added a special question
which discusses omega.
See: http://users.telenet.be/nicvroom/friedmann's%20equation.htm#Q9.1

In the table you see that for Lambda = 0 and k = -1, omega(Lambda) = 0.76.
You get almost the same value, omega(Lambda) = 0.75, for Lambda = 0.006
and k = 0.
However, this exercise does not use any SNLS data (error value).
The question is why we cannot select a larger Lambda value which gives
a smaller error value.

Nicolaas Vroom

Eric Flesch

Mar 29, 2012, 3:31:16 AM
There is a new preprint out today, astro-ph/1203.6269 "Cosmological
constraints from supernova data set with corrected redshift" by Feoli
et al, which will be of interest to all. Basically they start with
the "Union" SNe dataset and re-analyze it from first precepts. They
find that the cosmological parameters are *very* sensitive to how you
fit the curve, and that in fact the currently-popular values of OmegaM
etc are far overbought.

Note their fascinating Figure 1 Hubble diagram which has a data
artefact not well publicized, not even in this paper: the main data
do not follow the curve in 0.1<z<0.2, but veer stoutly toward the
ordinate. It looks like a point of inversion around z=0.1 which is
totally unmodelled. Food for thought.

Eric

Phillip Helbig---undress to reply

Mar 29, 2012, 8:00:52 AM
In article <mt2.0-18734...@hydra.herts.ac.uk>, Nicolaas Vroom
<nicolaa...@pandora.be> writes:

> The problem is I expect you mean omega(Lamba) = 0.73 and omega(M) =
> 0.27

Yes. There is not a consistent notation. Some people use Omega_matter
and Omega_lambda, some use Omega and lambda, some use Omega_total which
is the some of the other two. Only two of these are independent.
However, when lambda is used, the other parameter a) is usually Omega
and b) almost always means Omega_matter.

Phillip Helbig---undress to reply

Mar 30, 2012, 3:27:01 PM
In article <mt2.0-9255...@hydra.herts.ac.uk>, Phillip
Helbig---undress to reply <hel...@astro.multiCLOTHESvax.de> writes:

> Yes. There is not a consistent notation. Some people use Omega_matter
> and Omega_lambda, some use Omega and lambda, some use Omega_total which
> is the some of the other two.


SUM of the other two, of course!

Nicolaas Vroom

Apr 4, 2012, 9:12:20 AM
On Thursday, 29 March 2012 09:31:16 UTC+2, Eric Flesch wrote the following:
> There is a new preprint out today, astro-ph/1203.6269 "Cosmological
> constraints from supernova data set with corrected redshift" by Feoli
> et al, which will be of interest to all. etc. They
> find that the cosmological parameters are *very* sensitive to how you
> fit the curve, and that in fact the currently-popular values of OmegaM
> etc are far overbought.
>
> Eric

I have studied the same document which is at:
http://arxiv.org/pdf/1203.6269v1.pdf
Their results are an omega(M) of respectively 0.4, 0.7 and 1,
which means an omega(L) of respectively 0.6, 0.3 and 0.
When you study the results in Table 8
(see: http://users.telenet.be/nicvroom/friedmann's%20equation.htm#Q9.1)
you will see that my results are close to the last two values,
which means that they depend very much on the F/L curve selected.
It should be mentioned that my results are based on 208 equally spaced
points along the curve mentioned in the SNLS document and not on the
original measurements, which are highly biased towards certain regions.

Hope this helps

Nicolaas Vroom.

Phillip Helbig---undress to reply

Apr 5, 2012, 2:51:58 AM
In article <mt2.0-9617...@hydra.herts.ac.uk>, Nicolaas Vroom
<nicolaa...@pandora.be> writes:

> > There is a new preprint out today, astro-ph/1203.6269 "Cosmological

> http://arxiv.org/pdf/1203.6269v1.pdf

Just a couple of technical points (I might comment on the article after
I have read it). First, with the new xxxx.xxxx numbering scheme, the
category, e.g. astro-ph, is no longer needed; the number itself is a
unique identifier. Part of the reason for this change is that it allows
the category to be changed without changing the number (and indeed some
papers are in more than one category at the same time, though I believe
that there is always a main category). The standard citation scheme
then produces arXiv:1203.6269 for the reference above. Also, if one
wants a direct link, one should link to the abstract, not to the PDF.
First, perhaps not everyone wants PDF. Second, many people would like
to read the abstract before accessing the PDF or whatever, especially on
a slow connection (the abstract usually mentions how many pages, figures
etc), or perhaps just the abstract, at least initially. Third, linking
to the PDF is documented to not always work (it might work for you now,
but that does not mean it will always work for everyone). Thus, in this
case: http://arxiv.org/abs/1203.6269 .

Nevertheless, having just read the abstract to test the link above, let
me mention a few things:

o Since their analysis results in very non-standard results, it
seems strange that they limit their analysis to a flat universe;
what would be the result of dropping this constraint?

o One interesting thing about the Nobel-Prize--winning supernovae
results is that two teams independently got the same result.

o Other cosmological tests also converge on these values, so one has
to explain what is wrong with the supernovae data or, if the
authors actually believe their result (which the abstract hints
at), what is wrong with essentially all other cosmological tests.

o This is from "Journal of Physics: Conference Series"; although at
http://iopscience.iop.org/1742-6596/354/1/011002 one can read
about the fact that the contributions have been refereed, even
this statement leaves open the question whether the standards for
proceedings are the same as for "proper" journals. While I think
that proceedings shouldn't have the same standards as "proper"
journals, if their result is true then it is important enough to
appear in a "proper" journal and might benefit (positively or
negatively) from more strict refereeing.

o "In particular we are interested in verifying if the Einstein-de
Sitter model of the expanding Universe is really to be ruled out."
This sounds like an axe begging to be ground. The Einstein-de
Sitter model has been ruled out by essentially every cosmological
test which is able to discriminate between it and, say, the
current "standard model". This strengthens my requirement in my
third point above.

Again, more after I have read the paper.

Nicolaas Vroom

Apr 5, 2012, 12:16:24 PM
On Thursday, 5 April 2012 08:51:58 UTC+2, Phillip Helbig---undress to reply wrote the following:
>
> o "In particular we are interested in verifying if the Einstein-de
> Sitter model of the expanding Universe is really to be ruled out."
> This sounds like an axe begging to be ground. The Einstein-de
> Sitter model has been ruled out by essentially every cosmological
> test which is able to discriminate between it and, say, the
> current "standard model". This strengthens my requirement in my
> third point above.
>

In the book by d'Inverno mentioned in previous posts,
on page 335 he makes a distinction between 9 subcases.
In order to classify each subcase he uses the
parameter Lambda (L) > 0, = 0 and < 0
and the parameter k = -1, 0 and +1:
1) L>0 and k=-1   2) L=0 and k=-1   3) L<0 and k=-1
4) L>0 and k=0    5) L=0 and k=0    6) L<0 and k=0
7) L>0 and k=+1   8) L=0 and k=+1   9) L<0 and k=+1
Of those 9 subcases, 5 subcases immediately drop out:
those with k=+1 (#7, #8 and #9) and
those with Lambda < 0 (#3, #6 and #9),
because the errors
involved using SNLS data are rather large.
As such 4 subcases are left over, i.e.
#1, #2, #4 and #5.
Subcase #5 is called Einstein-de Sitter.
Flat space is defined with k=0 (page 331).
The standard model is defined with L=0 (page 341).

In subcase #2, with L=0 and k=-1, the computed value of omega(lambda)
is 0.76.
The same value of omega(lambda) = 0.76 is also computed
in subcase #4 with L=0.06 and k=0.
The error values computed in both cases are rather large.
The smallest error values are computed in #1 and #2
with larger values of Lambda
and smaller values of omega(Lambda);
however, the distinction between those two is not easy.
Those error values depend on which F/L relation is selected.

This means that #5 is almost ruled out.

Nicolaas Vroom
http://users.pandora.be/nicvroom
Select: Friedmann equation.

Eric Flesch

Apr 5, 2012, 4:37:09 PM
On Thu, 05 Apr 12, Phillip Helbig wrote:
> o Since their analysis results in very non-standard results, it
> seems strange that they limit their analysis to a flat universe;
> what would be the result of dropping this constraint?

Hear, hear! Indeed, the flat universe is the Great Turtle on top of
which the entire Standard Model rests. To drop this constraint means
bathwater and babies all out the window, good-bye OmegaM and the
expansion of space, etc. Is that a good idea? Sounds good to me.

> o One interesting thing about the Nobel-Prize--winning supernovae
> results is that two teams independently got the same result.

There was a similar story a while back, I can't quite place it. Two
independent teams got the same wrong answer. Investigation showed
that there had been under-the-table communication between them. After
all, who wants to look foolish? Imagine your team published only to
be immediately refuted by a better result by the other team.

> o Other cosmological tests also converge on these values,

Oh come on, Phil, that also happened pre-1998 during the
"critical-mass" years. Maybe astronomers should not feel so compelled
to echo the latest models in their papers, and breathe the free air
instead. Wouldn't that be good?

Eric

Phillip Helbig---undress to reply

Apr 5, 2012, 4:38:14 PM
> In the in previous posts mentioned book by d'Inverno
> at page 335 he makes a distinction between 9 subcases.
> In order to classify each sub case he uses the
> parameter Lambda (L) >0 , =0 and <0
> and the parameter k = -1, 0 and +1
> 1) L>0 and k=-1 2) L=0 and k=-1 3) L<0 and k=-1
> 4) L>0 and k=0 5) L=0 and k=0 6) L<0 and k=0
> 7) L>0 and k=+1 8) L=0 and k=+1 9) L<0 and k=+1

OK, this is just everything which is physically possible. Standard
cosmological texts on the classification of cosmological models
distinguish between 19 cases by distinguishing between being on one of
the lines (e.g. k=0) or on one side of them, whether the universe will
expand forever, whether it had a big bang, whether it is empty etc.

> Of those 9 subcases immediate 5 subcases drop off:
> Those with k=+1 (#7,#8 and #9) and
> those with Lambda <0 (#3,#6 and #9)
> because the errors
> involved using SNLS data is rather large.

This can't be right if it implies what you write. Any observable
quantity is a smooth function of lambda and Omega. One can't rule out
lambda < 0 but not lambda = 0 since if lambda is slightly less than 0
this will be within the errors.

> As such 4 subcases are left over i.e.
> #1, #2, #4 and #5
> subcase #5 is called Einstein de Sitter.
> Flat space is defined with k=0 (page 331)
> The standard model is defined with L=0 (page 341)

OK.

> In subcase #2 with L=0 and k=-1 the value of omega(lambda)
> computed = 0.76
> The same value of omega(lambda) = 0.76 is also computed
> in sub case#4 with L=0.06 and k=0.

And if we move from k<0 to k=0 things are OK but then on the other side
of the line at k>0 it is ruled out?

> The error values computed in both cases is rather large.
> The smallest error value are computed in #1 and #2
> with larger values of Lambda
> and smaller values of omega(Lambda)

The interesting question is which cosmological models are compatible
with the data.

> This means that #5 is almost ruled out.

That the Einstein-de Sitter model is ruled out has been known for well
over a decade.

Phillip Helbig---undress to reply

Apr 6, 2012, 4:40:51 AM
In article <mt2.0-6134...@hydra.herts.ac.uk>, Eric Flesch
<er...@flesch.org> writes:

> On Thu, 05 Apr 12, Phillip Helbig wrote:
> > o Since their analysis results in very non-standard results, it
> > seems strange that they limit their analysis to a flat universe;
> > what would be the result of dropping this constraint?
>
> Hear, hear! Indeed, the flat universe is the Great Turtle on top of
> which the entire Standard Model rests. To drop this constraint means
> bathwater and babies all out the window, good-bye OmegaM and the
> expansion of space, etc.

??? Actually, a flat universe, or an almost-flat universe (within the
errors) is what the data are telling us. I have no qualms with that. I
find it strange, though, when someone who presents highly unorthodox
results chooses to retain some constraints (which are often effectively
the results of analyses with which he disagrees). The standard model is
not a hypothesis, but rather the result of observations. It is not an
assumption, it is a conclusion. So, in that sense, finding evidence
against flatness would indeed conflict with the standard model, but a)
this can't be found if one assumes it and b) this has NOTHING to do with
saying good-bye to Omegam and the expansion of space.

> > o One interesting thing about the Nobel-Prize--winning supernovae
> > results is that two teams independently got the same result.
>
> There was a similar story a while back, I can't quite place it. Two
> independent teams got the same wrong answer. Investigation showed
> that there had been under-the-table communication between them. After
> all, who wants to look foolish? Imagine your team published only to
> be immediately refuted by a better result by the other team.

This was definitely NOT the case here. Also, note that the result was
UNEXPECTED.

> > o Other cosmological tests also converge on these values,
>
> Oh come on, Phil, that also happened pre-1998 during the
> "critical-mass" years. Maybe astronomers should not feel so compelled
> to echo the latest models in their papers, and breathe the free air
> instead. Wouldn't that be good?

No. Observations never indicated Omega=1. Read the literature which
actually looks at observations, from Gott, Gunn, Schramm & Tinsley up
through Coles and Ellis. No observational evidence in favour of Omega=1
as opposed to, say, 0.3. None. Yes, some results had such large error
bars that they were COMPATIBLE with Omega=1, but also with Omega=2 or
Omega=0.3. Some rather involved schemes determined Omega=1 from a
simulation with Omega=1 but actually they would also have got Omega=1
from a simulation with Omega=0.3, but didn't bother to actually test it.

A few years before the supernovae stuff, COMBINATIONS of cosmological
tests pointed to what is now the standard model. The interesting thing
about the supernovae results is that they in themselves rule out a
universe which is not accelerating. Yes, there were some theorists who
claimed that inflation (which still hasn't been proven to exist; I'm not
claiming it didn't, merely that it is not proven in any meaningful
sense) required OmegaM=1, even when observations indicated something
else. The last I heard, even they are quiet now.

Nicolaas Vroom

Apr 6, 2012, 9:36:15 AM
On Thursday, 5 April 2012 22:38:14 UTC+2, Phillip Helbig---undress to reply wrote the following:

> > Of those 9 subcases immediate 5 subcases drop off:
> > Those with k=+1 (#7,#8 and #9) and
> > those with Lambda <0 (#3,#6 and #9)
> > because the errors
> > involved using SNLS data is rather large.
>
> This can't be right if it implies what you write. Any observable
> quantity is a smooth function of lambda and Omega. One can't rule out
> lambda < 0 but not lambda = 0 since if lambda is slightly less than 0
> this will be within the errors.

The only subcase which should not be ruled out is #7 for "large" values
of Lambda with k=+1

> > In subcase #2 with L=0 and k=-1 the value of omega(lambda)
> > computed = 0.76
> > The same value of omega(lambda) = 0.76 is also computed
> > in sub case#4 with L=0.006 (Modified ! was 0.06) and k=0.
>
> And if we move from k<1 to k=0 things are OK but then on the other side
> of the line at k>0 it is ruled out?

See my remark above.
The error value in subcase #2 is 0.00189 (Table 13).
The error value in subcase #4 is 0.00125 (Table 14).
There is also an omega(Lambda) = 0.766 available
in subcase #7 with L = 0.012 and k=+1.
The error value in subcase #7 is 0.00074233.

The important point is that in all those 3 cases,
for larger values of Lambda, smaller error values are possible.
For example:
the error value for L=0.02 and k=+1 is 0.00013452.
All error values mentioned are calculated with F/L relation 5.

Nicolaas Vroom
http://users.pandora.be/nicvroom

Nicolaas Vroom

Apr 7, 2012, 7:06:25 AM
On Friday, 6 April 2012 10:40:51 UTC+2, Phillip Helbig---undress to reply wrote the following:
> In article <mt2.0-6134...@hydra.herts.ac.uk>, Eric Flesch
> <er...@flesch.org> writes:
>
> > Hear, hear! Indeed, the flat universe is the Great Turtle on top of
> > which the entire Standard Model rests. To drop this constraint means
> > bathwater and babies all out the window, good-bye OmegaM and the
> > expansion of space, etc.
>
> ??? Actually, a flat universe, or an almost-flat universe (within the
> errors) is what the data are telling us. I have no qualms with that. I
> find it strange, though, when someone who presents highly unorthodox
> results chooses to retain some constraints (which are often effectively
> the results of analyses with which he disagrees). The standard model is
> not a hypothesis, but rather the result of observations. It is not an
> assumption, it is a conclusion. So, in that sense, finding evidence
> against flatness would indeed conflict with the standard model, but a)
> this can't be found if one assumes it and b) this has NOTHING to do with
> saying good-bye to Omegam and the expansion of space.

Sorry to say, but I find this text rather difficult to understand.
As I already wrote, in the book by d'Inverno he considers:
1) flat space as k = 0, i.e. Lambda>0, Lambda=0 and Lambda<0
2) the standard model as Lambda = 0, i.e. k=-1, k=0 and k=+1
In fact there is already one combination which is both flat
and in agreement with the standard model, and that is the
combination Lambda=0 and k=0 (Einstein-de Sitter).
On page 341 it is mentioned:
"The three models with Lambda = 0 are called the standard models
and are the ones to which most attention is given today"
The results of my investigations show that the smallest errors
between theory (Friedmann equation) and observations (SNLS data)
are obtained with Lambda>0 and that omega(Lambda)<0.5.

See http://users.telenet.be/nicvroom/friedmann's%20equation.htm
Table 8

Those investigations are inconclusive as to whether space is flat or not,
i.e. whether k = -1, k = 0 or k = +1
(assuming Lambda > 0).

Nicolaas Vroom

Phillip Helbig---undress to reply

Apr 7, 2012, 5:30:26 PM
In article <mt2.0-22298...@hydra.herts.ac.uk>, Nicolaas Vroom
<nicolaa...@pandora.be> writes:

> On Friday, 6 April 2012 10:40:51 UTC+2, Phillip Helbig---undress to reply wrote the following:
> > In article <mt2.0-6134...@hydra.herts.ac.uk>, Eric Flesch
> > <er...@flesch.org> writes:
> >
> > > Hear, hear! Indeed, the flat universe is the Great Turtle on top of
> > > which the entire Standard Model rests. To drop this constraint means
> > > bathwater and babies all out the window, good-bye OmegaM and the
> > > expansion of space, etc.
> >
> > ??? Actually, a flat universe, or an almost-flat universe (within the
> > errors) is what the data are telling us. I have no qualms with that. I
> > find it strange, though, when someone who presents highly unorthodox
> > results chooses to retain some constraints (which are often effectively
> > the results of analyses with which he disagrees). The standard model is
> > not a hypothesis, but rather the result of observations. It is not an
> > assumption, it is a conclusion. So, in that sense, finding evidence
> > against flatness would indeed conflict with the standard model, but a)
> > this can't be found if one assumes it and b) this has NOTHING to do with
> > saying good-bye to Omegam and the expansion of space.
>
> Sorry to say but I find this text rather difficult to understand.
> As I already wrote in the book by d'Inverno he considers:

When was the book published?

> 1) flat space as k = 0 i.e Lambda>0 Lambda= 0 and Lambda<0

Right, k=0 is flat space.

> 2) Standard model as Lambda = 0 i.e. k=-1, k=0 and k=+1

This hasn't been the standard model for at least a dozen years. Today,
when people speak of the standard cosmological model, they almost always
mean the values of the cosmological parameters on which observations
have been converging for the past decade or so. See:

R. A. C. Croft & M. Dailey, MNRAS (submitted), arXiv:1112.3108

> In fact there is already one combination with is both flat
> and is in agreement with the standard model and that is the
> combination Lambda=0 and k=0 (Einstein de Sitter).
> At page 341 is mentioned:

Yes, it "exists" in a mathematical sense but is ruled out by
observations.

> "The three models with Lambda = 0 are called the standard models
> and are the ones to which most attention is given today"

It seems the book is severely out of date.

Jonathan Thornburg [remove -animal to reply]

Apr 7, 2012, 5:31:01 PM
In article <mt2.0-6134...@hydra.herts.ac.uk>, Eric Flesch
<er...@flesch.org> writes:
> Indeed, the flat universe is the Great Turtle on top of
> which the entire Standard Model rests. [[...]]

Phillip Helbig---undress to reply <hel...@astro.multiclothesvax.de> wrote:
> A few years before the supernovae stuff, COMBINATIONS of cosmological
> tests pointed to what is now the standard model. [[...]]

When I first read Eric Flesch's words quoted above, I thought he was
talking about the standard model of elementary particle physics.
[And I was surprised at (what I thought was) the claim
that the flatness or non-flatness of the universe is a
fundamental piece of evidence used to figure out how
high-energy particle physics works. Then again, people do
indeed sometimes consider using cosmological constraints to
infer things about particle physics (e.g., neutrino masses).]

Now that I've read Phillip Helbig's reply, I think it's more likely
that he and Eric Flesch are actually discussing/debating the standard
*cosmological* model.

So... a small request: Given that "standard model" is a term of art
in multiple areas of physics, could we all try to deprecate the unadorned
phrase "standard model" in favor of less ambiguous qualified-phrases like
"standard model of particle physics" or "standard model of cosmology"?

thanks, ciao,

--
-- "Jonathan Thornburg [remove -animal to reply]" <jth...@astro.indiana-zebra.edu>
Dept of Astronomy & IUCSS, Indiana University, Bloomington, Indiana, USA
"Washing one's hands of the conflict between the powerful and the
powerless means to side with the powerful, not to be neutral."
-- quote by Freire / poster by Oxfam

Nicolaas Vroom

Apr 8, 2012, 1:40:20 PM
On Saturday, 7 April 2012 23:31:01 UTC+2, Jonathan Thornburg [remove -animal to reply] wrote the following:
> In article <mt2.0-6134...@hydra.herts.ac.uk>, Eric Flesch
> <er...@flesch.org> writes:
> > Indeed, the flat universe is the Great Turtle on top of
> > which the entire Standard Model rests. [[...]]
>
>
> Now that I've read Phillip Helbig's reply, I think it's more likely
> that he and Eric Flesch are actually discussing/debating the standard
> *cosmological* model.
>
> So... a small request: Given that "standard model" is a term of art
> in multiple areas of physics, could we all try to deprecate the unadorned
> phrase "standard model" in favor of less ambiguous qualified-phrases like
> "standard model of particle physics" or "standard model of cosmology"?
>
> thanks, ciao,

IMO we should either try not to use the word "standard" or clearly
define what we mean.
As such we should speak about cosmological parameters,
the same as is done in the brilliant article:
http://arxiv.org/abs/1112.3108
with the title "On the measurement of cosmological parameters".
That means we should not use the parameter omega but:
omega(k), omega(L), or omega(M).
In that sense we should also speak about cosmological models,
or use the Einstein-de Sitter model with L=0 and k=0,
or use cosmological models with L=0.

The book by Ray d'Inverno is from 1998.

The issue is that as a result of my calculations
the smallest errors are for cosmological models with
Lambda (= Cosmological Constant) > 0

Nicolaas Vroom
http://users.pandora.be/nicvroom/

Phillip Helbig---undress to reply

Apr 8, 2012, 3:13:44 PM
In article <mt2.0-20264...@hydra.herts.ac.uk>, Nicolaas Vroom
<nicolaa...@pandora.be> writes:

> > > Indeed, the flat universe is the Great Turtle on top of
> > > which the entire Standard Model rests. [[...]]
> >
> > Now that I've read Phillip Helbig's reply, I think it's more likely
> > that he and Eric Flesch are actually discussing/debating the standard
> > *cosmological* model.

Indeed. :-)

> > So... a small request: Given that "standard model" is a term of art
> > in multiple areas of physics, could we all try to deprecate the unadorned
> > phrase "standard model" in favor of less ambiguous qualified-phrases like
> > "standard model of particle physics" or "standard model of cosmology"?

Good idea.

> IMO we should try not to use the word "standard" or clearly
> define what we mean.

Indeed. Even "standard cosmological model" means different things to
different people at the same time, and of course what is standard has
changed with time.

> As such we should speak about: cosmological parameters
> the same as is done in the brilliant article:
> http://arxiv.org/abs/1112.3108
> with the title: On the measurement of cosmological parameters

Yes, interesting article.

> That means we should not use the parameter omega but:
> omaga(k), omega(L), or omega(M)

As long as they are defined (usually the case in a paper but often not
in usenet posts), there is no ambiguity. Personally, I prefer lambda
and Omega to Omega_lambda and Omega_matter. One reason is that one
doesn't have to worry about subscripts, which can be missed even in a
properly set text and are awkward in a text-based medium such as usenet.
Also, in some contexts one has a subscript 0 to denote the present
value, so this means that one has two subscripts in such
cases---confusing! (In some cases, one might want to distinguish
between different types of matter, hence Omega_baryon, Omega_darkmatter,
Omega_dynamic depending on what it is or how it is detected.
Theoretically this could lead to even 3 subscripts, but usually when
discussing the various contributions to matter density one doesn't need
to distinguish between the current and past or future values in the
same context, especially since these all change in the same way.)
Also, lambda is fundamentally different than matter. With respect to
spatial curvature they are equivalent in the sense that the sum of
lambda and Omega determines the curvature, but even here lambda can be
negative while Omega can't. With respect to the expansion history they
are quite different since more Omega means more DEceleration and more
(positive) lambda means more ACceleration. As far as the geometry and
expansion history of the universe are concerned, lambda and Omega
(including all contributions from visible matter, baryonic matter, dark
matter etc) are useful parameters, although others have been used in the
literature.
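A tiny numeric check of that sign difference, using the standard
deceleration parameter q0 = Omega/2 - lambda for pressureless matter plus a
cosmological constant (parameter values assumed):

```python
# Matter decelerates, lambda accelerates: q0 = Omega/2 - lambda
# (standard result for dust plus a cosmological constant).
def q0(omega, lam):
    return omega / 2.0 - lam

print(q0(1.0, 0.0))    # Einstein-de Sitter:  +0.5   (decelerating)
print(q0(0.27, 0.73))  # concordance values:  -0.595 (accelerating)
```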

> The book by Ray d'Inverno is from 1998.

Much has changed since then.

> The issue is that as a result of my calculations
> the smallest errors are for cosmological models with
> Lambda (= Cosmological Constant) > 0

This agrees with what most people take to be the best guess of the
values of the cosmological parameters: lambda=0.73 and Omega=0.27 giving
Omega+lambda=1 within the observational errors.

Nicolaas Vroom

Apr 9, 2012, 6:01:15 PM
On Sunday, 8 April 2012 21:13:44 UTC+2, Phillip Helbig---undress to reply wrote the following:
> In article <mt2.0-20264...@hydra.herts.ac.uk>, Nicolaas Vroom
> <nicolaa...@pandora.be> writes:
>
> > That means we should not use the parameter omega but:
> > omaga(k), omega(L), or omega(M)
>
> As long as they are defined (usually the case in a paper but often not
> in usenet posts), there is no ambiguity.

> > The issue is that as a result of my calculations
> > the smallest errors are for cosmological models with
> > Lambda (= Cosmological Constant) > 0
>
> This agrees with what most people take to be the best guess of the
> values of the cosmological parameters: lambda=0.73 and Omega=0.27 giving
> Omega+lambda=1 within the observational errors.

I think what you write is confusing.
I think what you mean is that:
Omega_L + Omega_M + Omega_k = 1
and that Omega_L = 0.73, Omega_M = 0.27.
Also that Omega_k = 0, implying that k=0, but I am
not completely sure about this.
With Lambda > 0 I mean the cosmological constant > 0
(i.e. the parameter of the Friedmann equation)
and not Omega_Lambda.

This document:
http://nicadd.niu.edu/~bterzic/PHYS652/Lecture_06.pdf
uses Omega(M0) + Omega(De0) = 1
and Lambda = Cosmological Constant (Dark Energy)
This document:
http://arxiv.org/pdf/1112.3108v1.pdf
uses omega(Lambda) and omega(m)
This document:
http://www.astro.ucla.edu/~wright/cosmo_constant.html
uses Omega(M) + Lambda = 1

If my assumption is correct that Omega_L = 0.73
and k=0, then my calculations show that Lambda = 0.06
(as I have already mentioned in previous postings).
The issue is that, using those values in a comparison
between theory (Friedmann equation) and observation
(SNLS data), the errors involved are rather large.

Using larger values of Lambda and smaller values
of Omega_L this error can be reduced.

Nicolaas Vroom
http://users.pandora.be/nicvroom/

Phillip Helbig---undress to reply

Apr 10, 2012, 5:21:41 PM
In article <mt2.0-28039...@hydra.herts.ac.uk>, Nicolaas Vroom
<nicolaa...@pandora.be> writes:

> > > That means we should not use the parameter omega but:
> > > omaga(k), omega(L), or omega(M)
> >
> > As long as they are defined (usually the case in a paper but often not
> > in usenet posts), there is no ambiguity.
>
> > > The issue is that as a result of my calculations
> > > the smallest errors are for cosmological models with
> > > Lambda (= Cosmological Constant) > 0
> >
> > This agrees with what most people take to be the best guess of the
> > values of the cosmological parameters: lambda=0.73 and Omega=0.27 giving
> > Omega+lambda=1 within the observational errors.
>
> I think what you write is confusing.

If you read my post then it is clear what is meant.

> I think what you mean is that:
> Omega_L + omega_M + omega_K = 1
> and that Omega_L = 0.73, omega_M = 0.27.
> Also that Omega_k = 0, implying that k=0, but I'am
> not completely sure about this.

Right.

> With Lambda > 0 I mean the cosmological constant > 0
> (i.e. the parameter of the friedmann equation)
> and not omega_Lambda.

THEY ARE ESSENTIALLY THE SAME THING. What in my notation is lower-case
lambda is defined as Lambda/(3H^2), where H is the Hubble constant. The
only reason lower-case lambda is not constant in time is because the
Hubble constant is not constant in time. (These are both called
"constants" for different reasons. The Hubble constant is a constant in
that it gives the slope of a line: as in y=mx, m is constant and x and y
are variables, while Lambda, the cosmological constant, is constant in
time. Omega (Omega_matter) is defined as 8*pi*G*rho/(3H^2) and varies
with time not only because H varies with time but also because rho
is inversely proportional to the cube of the scale factor.)
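A small sketch of that conversion between the dimensionless lower-case
lambda and the dimensionful cosmological constant, with an assumed H0 of
70 km/s/Mpc; the last line only shows the alternative c^2 convention
mentioned above.

```python
# Lower-case lambda = Lambda / (3*H^2), so Lambda = 3 * lambda * H^2.
# Assumed H0 = 70 km/s/Mpc; result in s^-2 (or m^-2 with the c^2 factor).
H0 = 70.0 * 1000 / 3.0857e22     # 70 km/s/Mpc in 1/s
lam = 0.73                       # dimensionless lambda today

Lambda = 3.0 * lam * H0**2
print(Lambda)                    # ~ 1.1e-35 s^-2
c = 2.998e8                      # speed of light [m/s]
print(Lambda / c**2)             # ~ 1.3e-52 m^-2 in the c^2 convention
```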

> This document:
> http://nicadd.niu.edu/~bterzic/PHYS652/Lecture_06.pdf
> uses Omega(M0) + Omega(De0) = 1
> and Lambda = Cosmological Constant (Dark Energy)
> This document:
> http://arxiv.org/pdf/1112.3108v1.pdf
> uses omega(Lambda) and omega(m)
> This document:
> http://www.astro.ucla.edu/~wright/cosmo_constant.html
> uses Omega(M) + Lambda = 1

There are many more. I have seen lambda as one parameter and
lambda+Omega as the other, Omega and lambda+Omega, sigma and q
(sigma=Omega/2 and lambda = sigma - q). There is no hope of a standard
notation emerging any time soon, but this is OK if one defines one's
terms.

> If my assumption is correct that Omega_L = 0.73
> and k=0 than my calculations show that Lambda = 0.06

In what units? (In my notation lambda has dimension time^{-2} but some
people have an extra factor of c^2 in there.)

Nicolaas Vroom

Apr 12, 2012, 6:05:21 PM
On Tuesday, 10 April 2012 23:21:41 UTC+2, Phillip Helbig---undress to reply wrote the following:
> In article <mt2.0-28039...@hydra.herts.ac.uk>, Nicolaas Vroom
> <nicolaa...@pandora.be> writes:
>
>
> > With Lambda > 0 I mean the cosmological constant > 0
> > (i.e. the parameter of the friedmann equation)
> > and not omega_Lambda.
>
> THEY ARE ESSENTIALLY THE SAME THING.

I agree.
However, I have the impression that most recent documents use
omega_lambda and omega_m when they mean rho(Lambda)/rho0
or rho(M)/rho0,
and Lambda for the cosmological constant (dark energy).

> > If my assumption is correct that Omega_L = 0.73
> > and k=0 than my calculations show that Lambda = 0.06
>
> In what units? (In my notation lambda has dimension time^{-2} but some
> people have an extra factor of c^2 in there.)

Again I made a typo. This should be Lambda=0.006
In my document c=1
The distance R is in billion Light years.
Lambda is Unity.

It is interesting to read the document: http://arxiv.org/abs/1105.3470
Specifically Table 8 on page 22. (The text is at the bottom of page 19.)
Bin #3 shows an Omega(Lambda) value of 0.23(0.33) for large values of z.
That means a much smaller value than mentioned above.
This value is much more in agreement with my results.
(Lambda in this case will be larger than 0.006.)

Nicolaas Vroom
http://users.telenet.be/nicvroom/friedmann's%20equation.htm#Q9.1

Phillip Helbig---undress to reply

Apr 13, 2012, 3:44:07 AM
In article <mt2.0-19549...@hydra.herts.ac.uk>, Nicolaas Vroom
<nicolaa...@pandora.be> writes:

> However I have the impression that most recent documents use:
> omega_lamda and omega_m when they mean rho(Lambda)/rho0
> or rho(M)/rho0
> and Lambda for the cosmological constant (Dark energy)

That's not my impression. What is rho0?

> > > If my assumption is correct that Omega_L = 0.73
> > > and k=0 than my calculations show that Lambda = 0.06
> >
> > In what units? (In my notation lambda has dimension time^{-2} but some
> > people have an extra factor of c^2 in there.)
>
> Again I made a typo. This should be Lambda=0.006
> In my document c=1
> The distance R is in billion Light years.
> Lambda is Unity.

Yes, but what units in combination of powers of kg, m and s?

> It is interesting to read the document: http://arxiv.org/abs/1105.3470
> Specific Table 8 at page 22. (The text is at the bottom of page 19)
> Bin #3 shows a Omega(Lambda) value of 0.23(0.33) for large values of z.
> That means a much smaller value as mentioned above.
> This value is much more in agreement which my results.
> (Lambda in this case will be larger than 0.006

Just a general comment (not sure if it applies here): Omega and lambda
in general change with time. While most people speak of determining the
current values Omega_0 and lambda_0---which is sufficient since this
determines the entire history of the universe (i.e. trajectories in the
lambda-Omega parameter space do not cross), one can of course speak of
the value of these parameters at a given redshift, meaning the values
for lambda_0 and Omega_0 one would obtain were the test done at the time
corresponding to that redshift. So (again, not sure if this is what the
above reference is talking about) one could speak of the value of lambda
and Omega at various redshifts, i.e. their evolution. If the
cosmological constant is positive and Omega>0, then at the big bang
lambda is arbitrarily close to 0 and Omega is arbitrarily close to 1 and
in the infinite future it is vice versa (if there is an infinite future;
if the universe collapses, then lambda and Omega evolve from their
initial values to infinity and back).
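A minimal sketch of that evolution (present-day values assumed; matter plus
Lambda plus curvature only):

```python
# How Omega and lambda (in the notation above) change with redshift,
# given present-day values Omega_0 and lambda_0. Illustrative only.
import numpy as np

def omega_lambda_at_z(z, om0=0.27, l0=0.73):
    E2 = om0*(1+z)**3 + l0 + (1 - om0 - l0)*(1+z)**2   # (H/H0)^2
    return om0*(1+z)**3 / E2, l0 / E2                  # Omega(z), lambda(z)

for z in (0.0, 1.0, 10.0, 1000.0):
    om, l = omega_lambda_at_z(z)
    print(f"z={z:7.1f}  Omega={om:.4f}  lambda={l:.6f}")
# Toward the big bang Omega -> 1 and lambda -> 0, as stated above.
```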