Evidence for, and implications of, fine-tuning


Jason Resch

Oct 14, 2020, 10:38:40 PM
to Everything List
I just finished an article on all the science behind fine-tuning, and how the evidence suggests an infinite, and possibly complete reality. I thought others on this list might appreciate it:

I welcome any discussion, feedback, or corrections.

Jason

Lawrence Crowell

Oct 15, 2020, 6:48:47 AM
to Everything List
There is nothing wrong in particular with the idea of fine tuning. It does not, however, logically imply a fine tuner. If there is a fine tuner, then it is reasonable to say there is fine tuning. But the converse does not hold: fine tuning does not logically imply a fine tuner. Therefore, fine tuning is a necessary condition for a fine tuner, but not a sufficient one.

I started reading this, but it is clearly not something I am going to finish over early morning coffee. Yet the article so far covers in layman's terms stuff I am well acquainted with. The multiverse is often cited as a way around this. A vast plurality of cosmologies is a way to argue how the particular observable cosmos is fine tuned. It is similar to the argument with planets; given a large number of them it is not surprising that a few are such that life may emerge. Of course with this multiverse I suspect that many of these are not real cosmologies. 

The cosmological constants Λ for the putative cosmologies in the string landscape, based on D-brane theory with gauge fluxes through branes wrapped on Calabi-Yau spaces, are generically much larger than that of the observable universe. The Hubble parameter H = a'/a, with a the scale factor and a' = da/dt, also equals H = √(Λc^2/3); numerically H ≈ 72 km/s/Mpc from galaxy data and 68 km/s/Mpc from CMB data. This corresponds to a cosmological constant Λ ≃ 10^{-52} m^{-2}. Most putative cosmologies have much larger values, many orders of magnitude larger. Such a de Sitter or FLRW spacetime would expand so rapidly that nothing could form. In fact many have Λ ≃ 10^{66} m^{-2}, with the upper bound Λ ≃ 10^{70} m^{-2}. The difference between this upper bound and what we observe is the 122-order-of-magnitude problem.
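As a quick sanity check on these numbers, one can invert H = √(Λc^2/3) for the two quoted Hubble values. (This is my own back-of-the-envelope sketch, treating the late-time expansion as Λ-dominated, which is only roughly true.)

```python
C = 2.998e8     # speed of light, m/s
MPC = 3.086e22  # one megaparsec, m

def lambda_from_hubble(h_kms_per_mpc):
    """Cosmological constant (1/m^2) implied by H, assuming Lambda domination."""
    h_si = h_kms_per_mpc * 1.0e3 / MPC   # convert km/s/Mpc -> 1/s
    return 3.0 * h_si ** 2 / C ** 2      # invert H = sqrt(Lambda c^2 / 3)

for h in (68.0, 72.0):  # CMB-based and galaxy-based estimates
    print(f"H = {h} km/s/Mpc  ->  Lambda ~ {lambda_from_hubble(h):.2e} m^-2")
```

Both values land at a few times 10^{-52} m^{-2}, consistent with the figure quoted above.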

The observed cosmological constant is a manifestation of the quantum vacuum energy density, or more precisely of that part of the vacuum energy density that plays a role in gravitation. This vacuum energy ρ defines the cosmological constant Λ = 8πGρ/3c^3, and for the observable universe this is quite small, far smaller than the roughly 123-order-of-magnitude-larger figure a naïve summation of QFT modes would suggest. However, there is a difference between the high-energy vacuum, the so-called false vacuum, and the low-energy physical vacuum. Quantum tunneling from the false to the physical vacuum results in a gap of mass-energy density in every volume of space, and this generates matter and radiation. The sort of skewed Ginzburg-Landau potential involved is seen in the figure below.
[Figure: quartic asymmetric potential]
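A minimal numeric sketch of such a tilted double well (my own construction; the parameters λ, v, ε below are illustrative, not fitted to anything physical): the linear term εφ splits the symmetric quartic into a higher false vacuum and a lower true vacuum.

```python
# Skewed quartic (Ginzburg-Landau-type) potential:
#   V(phi) = lam*(phi^2 - v^2)^2 + eps*phi
def V(phi, lam=1.0, v=1.0, eps=0.3):
    return lam * (phi ** 2 - v ** 2) ** 2 + eps * phi

def local_minima(lam=1.0, v=1.0, eps=0.3, lo=-2.0, hi=2.0, n=4001):
    """Crude grid search for interior local minima of V."""
    h = (hi - lo) / (n - 1)
    xs = [lo + i * h for i in range(n)]
    ys = [V(x, lam, v, eps) for x in xs]
    return [(xs[i], ys[i]) for i in range(1, n - 1)
            if ys[i] < ys[i - 1] and ys[i] <= ys[i + 1]]

mins = local_minima()
# With a small tilt there are two minima; the one near +v sits higher
# (false vacuum) than the one near -v (true vacuum).
for phi0, v0 in mins:
    print(f"minimum at phi = {phi0:+.3f}, V = {v0:+.3f}")
```

Tunneling from the higher well to the lower one releases the energy-density gap between them, which is the mechanism described above.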

There is a linear term in the fields that skews this, and this I think is some manifestation of renormalization theory, in which the large majority of these cosmologies are analogous to virtual particles that give a mass renormalization of cosmologies. This would, I think, sweep the vast majority of them out of ontological existence or classicality. I do not know whether this is complete, so that the multiverse is reduced to a single universe, or whether the multiverse is reduced to a much smaller set.

It has to be noted that the tuning for flat, spherical or hyperbolic geometry or topology of a spatial surface is not that hard to understand. The Hamiltonian for the Friedmann-Lemaître-Robertson-Walker (FLRW) spacetime is

ℋ = ½(a'/a)^2 - 4πGρ/3c^2 + k/a^2,

so that with the Hamiltonian constraint Nℋ = 0 of ADM general relativity it is not hard to see this vanishes. The energy density is ρ = ρ_vac + ρ_energy for the vacuum and the mass-energy in the spacetime. The additional term k/a^2 gives flat, spherical and hyperbolic space for k = 0, k = 1 and k = -1. If k = 0 then the vacuum energy density is constant. This is in various ways more reasonable.
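For the flat case the cancellation can be checked mechanically. (A toy sketch of mine, keeping the schematic normalization above with c = 1; the densities are arbitrary.)

```python
import math

G = 6.674e-11  # SI value, used schematically here

def friedmann_H2(rho):
    # Flat-space Friedmann relation: (a'/a)^2 = 8*pi*G*rho/3
    return 8.0 * math.pi * G * rho / 3.0

def hamiltonian(rho, k=0, a=1.0):
    # H_c = (1/2)(a'/a)^2 - 4*pi*G*rho/3 + k/a^2
    return 0.5 * friedmann_H2(rho) - 4.0 * math.pi * G * rho / 3.0 + k / a ** 2

for rho in (1e-26, 1e-10, 1.0):   # arbitrary test densities
    print(rho, hamiltonian(rho))  # vanishes for k = 0 at any density
```

With k = 0 the kinetic term ½(8πGρ/3) exactly cancels 4πGρ/3, so the constraint holds identically; the curvature term k/a^2 is what spoils it for k = ±1.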

Within this renormalization picture the observable universe may somehow have emerged. In ways not entirely clear, this may have selected the world we observe. So there are open questions. Maybe conscious observers in the universe even play a role, through some Wheeler delayed-choice measurement of the early universe, in selecting the observed universe.

LC

Jason Resch

Oct 15, 2020, 10:16:18 AM
to Everything List
Hi Lawrence,

First I want to thank you for your highly detailed reply. I have some further comments and questions below, if you don't mind.

On Thu, Oct 15, 2020 at 5:48 AM Lawrence Crowell <goldenfield...@gmail.com> wrote:
There is nothing wrong in particular with the idea of fine tuning. This does not logically imply a fine tuner. If there is a fine tuner, then it is reasonable to say there is fine tuning. However, the converse or modus tolens does not hold; fine tuning does not logically imply a fine tuner. Therefore, fine tuning is a necessary condition of a fine tuner, but not sufficient.

Towards the end I use fine-tuning and Bayesian inference to decide the trilemma posed by Martin Rees: coincidence, providence, or multiverse.

Given the appearance of fine tuning, we update our priors and effectively rule out coincidence with high confidence. So we cannot decide that there is a fine tuner, but we can be confident in "not coincidence", whose probability equals P(fine-tuner or multiverse). The article concludes that both answers imply the existence of something beyond this universe, and quite plausibly the existence of universes of a higher order and complexity than our own, containing entities superior to ourselves.
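To make the update explicit, here is a toy version of it (the likelihoods are illustrative numbers of my own, e.g. the 10^-6 under coincidence is made up for the example):

```python
# Three hypotheses, equal priors. Evidence: "a fine-tuned, life-permitting
# universe is observed". Under bare coincidence that observation is very
# unlikely; a fine-tuner or anthropic selection in a multiverse each make
# it near-certain.
priors = {"coincidence": 1 / 3, "providence": 1 / 3, "multiverse": 1 / 3}
likelihood = {"coincidence": 1e-6, "providence": 1.0, "multiverse": 1.0}

evidence = sum(priors[h] * likelihood[h] for h in priors)
posterior = {h: priors[h] * likelihood[h] / evidence for h in priors}

for h, p in posterior.items():
    print(f"P({h} | fine-tuning observed) = {p:.6f}")

# "Not coincidence" = P(providence) + P(multiverse): driven toward 1
# without deciding *between* the two surviving hypotheses.
print("P(not coincidence) =", posterior["providence"] + posterior["multiverse"])
```

The update crushes coincidence but leaves providence and multiverse tied, which is exactly the "fine-tuner or multiverse" disjunction described above.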

 

I started reading this, but it is clearly not something I am going to finish over early morning coffee. Yet the article so far covers in layman's terms stuff I am well acquainted with. The multiverse is often cited as a way around this. A vast plurality of cosmologies is a way to argue how the particular observable cosmos is fine tuned. It is similar to the argument with planets; given a large number of them it is not surprising that a few are such that life may emerge. Of course with this multiverse I suspect that many of these are not real cosmologies. 

The cosmological constants Λ for the putative cosmologies in the string landscape, based on D-brane theory with gauge fluxes through branes wrapped on Calabi-Yau spaces, are generically much larger than that of the observable universe. The Hubble parameter H = a'/a, with a the scale factor and a' = da/dt, also equals H = √(Λc^2/3); numerically H ≈ 72 km/s/Mpc from galaxy data and 68 km/s/Mpc from CMB data. This corresponds to a cosmological constant Λ ≃ 10^{-52} m^{-2}. Most putative cosmologies have much larger values, many orders of magnitude larger. Such a de Sitter or FLRW spacetime would expand so rapidly that nothing could form. In fact many have Λ ≃ 10^{66} m^{-2}, with the upper bound Λ ≃ 10^{70} m^{-2}. The difference between this upper bound and what we observe is the 122-order-of-magnitude problem.


Given the uncertainties around the probability distributions for the other constants of nature, the article uses Λ as the chief variable in deriving the improbability of the tuning.

Is there an assumed difference between how Λ emerges in string theory and how it emerges in quantum field theory? Is it, in both cases, a sum of order-one positive and negative numbers?

I have seen some say it is tuned to 60 decimal places, and others that it is tuned to 120 decimal places. What accounts for this difference in estimation? Is it based on the assumption of supersymmetry?
 

The observed cosmological constant is a manifestation of the quantum vacuum energy density, or more precisely of that part of the vacuum energy density that plays a role in gravitation. This vacuum energy ρ defines the cosmological constant Λ = 8πGρ/3c^3, and for the observable universe this is quite small, far smaller than the roughly 123-order-of-magnitude-larger figure a naïve summation of QFT modes would suggest. However, there is a difference between the high-energy vacuum, the so-called false vacuum, and the low-energy physical vacuum. Quantum tunneling from the false to the physical vacuum results in a gap of mass-energy density in every volume of space, and this generates matter and radiation. The sort of skewed Ginzburg-Landau potential involved is seen in the figure below.
[Figure: quartic asymmetric potential]



This is something I wondered about. Is it assumed that a high Λ (or high vacuum energy) is what powered inflation, and that it later decayed to its much smaller value, which drives a doubling in billions of years rather than in 10^-35 seconds? Wouldn't that require one of the quantum fields to disappear, or at least undergo significant change?


 
There is a linear term in the fields that skews this, and this I think is some manifestation of renormalization theory, in which the large majority of these cosmologies are analogous to virtual particles that give a mass renormalization of cosmologies. This would, I think, sweep the vast majority of them out of ontological existence or classicality. I do not know whether this is complete, so that the multiverse is reduced to a single universe, or whether the multiverse is reduced to a much smaller set.

It has to be noted that the tuning for flat, spherical or hyperbolic geometry or topology of a spatial surface is not that hard to understand. The Hamiltonian for the Friedmann-Lemaître-Robertson-Walker (FLRW) spacetime is

ℋ = ½(a'/a)^2 - 4πGρ/3c^2 + k/a^2,

so that with the Hamiltonian constraint Nℋ = 0 of ADM general relativity it is not hard to see this vanishes. The energy density is ρ = ρ_vac + ρ_energy for the vacuum and the mass-energy in the spacetime. The additional term k/a^2 gives flat, spherical and hyperbolic space for k = 0, k = 1 and k = -1. If k = 0 then the vacuum energy density is constant. This is in various ways more reasonable.

Within this renormalization picture the observable universe may somehow have emerged. In ways not entirely clear, this may have selected the world we observe. So there are open questions. Maybe conscious observers in the universe even play a role, through some Wheeler delayed-choice measurement of the early universe, in selecting the observed universe.

I've thought about this with regard to the measurements of the constants. If we imagine measuring constants to more and more decimal places, and get so far along that we reach decimal places no longer significant to fine-tuning or the anthropic principle, do we then reach a point where we are exploring a random variable and getting back random digits for those constants? (In effect, collapsing them from their prior state of being undetermined.)

Jason
 

LC


On Wednesday, October 14, 2020 at 9:38:40 PM UTC-5 Jason wrote:
I just finished an article on all the science behind fine-tuning, and how the evidence suggests an infinite, and possibly complete reality. I thought others on this list might appreciate it:

I welcome any discussion, feedback, or corrections.

Jason

--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to everything-li...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/everything-list/c67a54a2-64bc-4818-b8d5-c9bcf361940en%40googlegroups.com.

Lawrence Crowell

Oct 15, 2020, 1:24:51 PM
to Everything List

This is the second time I have written this. I tend to work mostly offline. That way I do not have an open port, and in some ways this is a big part of my defense against malware and hacking. If I am not online I can't be attacked. However, writing in the group editor was a big mistake: I hit send and my message disappeared. So I am writing this one long-form in Word, again offline.

I read past the point of Hoyle's triple-alpha physics. Too bad he did not get the Nobel for that, even though he was wrong on steady-state theory. I have not gotten to the point about coincidence, providence and multiverse.

My thinking is that what is real is a quantum mechanical issue. Reality is the postulate that a system has some existential content prior to a measurement that is related to the outcome of that measurement. The EPR argument and Bell inequalities show you can't have locality and reality applied as postulates to a system. You can use one or the other, but not both. So what is real, certainly if we appeal to Bohr, is the classical world. The classical state of the universe is a set of quantum states that are stable against quantum noise and decoherence.
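The locality-versus-reality point can be made concrete with the standard CHSH check (my addition, not part of the original post): for the spin singlet, quantum mechanics predicts correlation E(a, b) = -cos(a - b), and at the optimal angles the CHSH combination reaches 2√2, beyond the bound of 2 that any local-realist model must obey.

```python
import math

def E(a, b):
    # Singlet-state correlation between measurements at angles a and b
    return -math.cos(a - b)

a, ap = 0.0, math.pi / 2               # Alice's two settings
b, bp = math.pi / 4, 3 * math.pi / 4   # Bob's two settings

# CHSH combination: bounded by |S| <= 2 for local hidden variables
S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)
print(abs(S))  # 2*sqrt(2) ~ 2.828, exceeding the local-realist bound of 2
```

So keeping both locality and reality as postulates is ruled out by experiment; one of them has to go.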

The upper bound on the cosmological constant is Λ = 1/ℓ_p^2 for ℓ_p the Planck length of 10^{-35} m. Therefore the Planck value of the cosmological constant for a quantum cosmology is Λ = 10^{70} m^{-2}. This is evaluated from

〈0|H|0〉 = Σ_k ½ħω_k ≃ E_Planck (with a cutoff at the Planck energy)

and the corresponding cosmological constant is ~ 1/ℓ_p^2 = 10^{70} m^{-2}, while the observed value is Λ = 10^{-52} m^{-2}. This huge disparity is the source of the conundrum. The Higgs field, which bears some relationship IMO to the quartic potential of inflationary cosmology, has M = 125 GeV, and in a condensate with the weak-interaction bosons it confers mass on them. The Yukawa Lagrangians give the fermions mass. This is very small, far smaller than the Planck energy. This, together with the wide gap in cosmological constants, enforces a classicality. The domain of quantum gravitation is so far removed from ordinary quantum physics that decoherent large masses obey classical physics. Classicality is in some way what is real.
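The arithmetic behind these two numbers is easy to reproduce (my own sketch of the estimate):

```python
import math

hbar = 1.055e-34  # J s
G = 6.674e-11     # m^3 kg^-1 s^-2
c = 2.998e8       # m/s

l_p = math.sqrt(hbar * G / c ** 3)  # Planck length, ~1.6e-35 m
lam_max = 1.0 / l_p ** 2            # upper bound, order 1e70 m^-2
lam_obs = 1e-52                     # observed value, m^-2

gap = math.log10(lam_max / lam_obs)  # ~122 orders of magnitude
print(f"l_p = {l_p:.3e} m, Lambda_max ~ {lam_max:.1e} m^-2, gap ~ {gap:.0f} orders")
```

The ratio comes out around 10^122, the familiar "worst prediction in physics" figure.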

In string/M-theory the cosmological constant emerges from Yang-Mills gauge fluxes through D-branes wrapped on Calabi-Yau compactified spaces. There are 10^{500} or more of these configurations, so this is a huge sample space. This is computed with the Hodge diamond of Eguchi-Hanson 3-forms. This is only really known for a static situation, which is still tough. Then there is the Vafa swampland, where it turns out strings and branes do not work in spacetimes with Λ > 0, and so things are broken here.

Finally, when it comes to observers: if k = 0 there are an infinite number of them, and it might then be that Wheeler delayed choice acts on an ensemble. In the delayed-choice measurement, the slit an electron passed through is determined by a measurement made after the wave has passed the slits. So an IGUS or ET in the universe may fix these values through their measurements.

LC

Brent Meeker

Oct 15, 2020, 2:56:21 PM
to everyth...@googlegroups.com
You should have read Vic Stenger's "The Fallacy of Fine Tuning".  Vic points out how many examples of fine tuning are misconceived...including Hoyle's prediction of an excited state of carbon.  Vic also points out the fallacy of considering just one parameter when the parameter space is high dimensional.

But my general criticism of fine-tuning is twofold.  First, the concept is not well defined.  There is no a priori probability distribution over possible values.  If the possible values are infinite, then any realized value is improbable.  Fine tuning is all in the intuition.  Charts are drawn showing little "we are here" zones to prove the fine tuning.  But the scales are sometimes linear, sometimes logarithmic.  And why those parameters and not the square?...or the square root?  Bayesian inference is not invariant under change of parameters.
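The reparameterization point is easy to demonstrate with a toy simulation (my own, with an arbitrary "zone"): a prior that is uniform in λ and a prior that is uniform in λ² assign different probabilities to the very same "we are here" region.

```python
import math
import random

# Event: the constant lam falls in a narrow "zone" [0, 0.1].
# Prior A: lam itself uniform on [0, 1].
# Prior B: lam^2 uniform on [0, 1] (so lam = sqrt(u), u ~ U[0,1]).
# Both sound like "no information", yet they disagree.
random.seed(0)
N = 200_000

lam_a = [random.random() for _ in range(N)]             # lam ~ U[0,1]
lam_b = [math.sqrt(random.random()) for _ in range(N)]  # lam^2 ~ U[0,1]

p_a = sum(l < 0.1 for l in lam_a) / N
p_b = sum(l < 0.1 for l in lam_b) / N
print(f"P(zone), uniform in lam   ~ {p_a:.3f}")  # near 0.10
print(f"P(zone), uniform in lam^2 ~ {p_b:.3f}")  # near 0.01
```

The same event gets probability ~0.1 under one "uninformative" prior and ~0.01 under the other, so "how improbable is the observed value" is undefined without a choice of measure.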

Second, calling it "fine-tuning" implies some kind of process of "tuning" or "selection".  But that's gratuitous.  Absent supernatural miracles, we must find ourselves in a universe in which we are nomologically possible.  And that is true whether there is one universe or infinitely many.  So it cannot be evidence one way or the other for the number of universes.

Brent

Jason Resch

Oct 15, 2020, 3:46:22 PM
to Everything List
On Thu, Oct 15, 2020 at 1:56 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
You should have read Vic Stenger's "The Fallacy of Fine Tuning".  Vic points out how many examples of fine tuning are misconceived...including Hoyle's prediction of an excited state of carbon.  Vic also points out the fallacy of considering just one parameter when the parameter space is high dimensional.

Hi Brent,

Thanks for the suggestions. I did read Barnes's critique of TFOFT ( https://arxiv.org/abs/1112.4647 ) and I just now read Stenger's reply: https://arxiv.org/pdf/1202.4359.pdf

I think they both make some valid points. It may be that many parameters we believe are fine tuned will turn out to have other explanations. But I also think that in domains where we do have understanding, such as computational models (e.g. algorithmic information theory: what is the shortest program that produces X?), or the set of all possible cellular automata that consider only the states of adjacent cells, the number that are interesting (neither too simple nor too chaotic) is a small fraction of the total. So there is probably fine tuning, but it is, as you mention, extremely hard to quantify.
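As a crude illustration of that fraction (my own construction; the compression-based complexity proxy and the thresholds are arbitrary), one can score all 256 elementary cellular automaton rules by how well their space-time histories compress:

```python
import zlib

W, T = 101, 100  # lattice width and number of steps

def run(rule, width=W, steps=T):
    """Space-time history of an elementary CA from a single live cell."""
    table = [(rule >> i) & 1 for i in range(8)]
    row = [0] * width
    row[width // 2] = 1
    history = bytearray(row)
    for _ in range(steps):
        row = [table[(row[(i - 1) % width] << 2) |
                     (row[i] << 1) |
                     row[(i + 1) % width]]
               for i in range(width)]
        history.extend(row)
    return bytes(history)

def complexity(rule):
    return len(zlib.compress(run(rule)))

sizes = {r: complexity(r) for r in range(256)}
# "Interesting" = neither trivially compressible (simple) nor near the
# size of a noisy history (chaotic). Thresholds are rough and arbitrary.
lo = sizes[0]  # blank history: maximally simple baseline
interesting = [r for r, s in sizes.items() if 5 * lo < s < 0.5 * W * T]
print(f"{len(interesting)} of 256 rules land in the middle band")
```

By any such crude measure, most rules cluster at the trivial end and only a minority occupy the middle band, which is the intuition behind "interesting rules are rare" even though the exact fraction depends entirely on the chosen proxy.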
 

But my general criticism of fine-tuning is twofold.  First, the concept is not well defined.  There is no a priori probability distribution over possible values.  If the possible values are infinite, then any realized value is improbable.  Fine tuning is all in the intuition.  Charts are drawn showing little "we are here" zones to prove the fine tuning.  But the scales are sometimes linear, sometimes logarithmic.  And why those parameters and not the square?...or the square root?  Bayesian inference is not invariant under change of parameters.

At least for the cosmological constant, there seems to be some understanding of its probability distribution, and it is relatively independent of the other parameters in that it is unrelated to nucleosynthesis, chemistry, etc. Therefore it is our best candidate to consider in isolation from the other parameters in the high-dimensional space.
 

Second, calling it "fine-tuning" implies some kind of process of "tuning" or "selection".  But that's gratuitous.  Absent supernatural miracles, we must find ourselves in a universe in which we are nomologically possible.  And that is true whether there is one universe or infinitely many.  So it cannot be evidence one way or the other for the number of universes.

Let's say we did have an understanding of the distribution of possible universes and the fraction of them that support conscious life. If we discover the fraction to be 1 in 1,000,000, would this not motivate a belief that there is more than one universe?

Jason

 

Brent

On 10/14/2020 7:38 PM, Jason Resch wrote:
I just finished an article on all the science behind fine-tuning, and how the evidence suggests an infinite, and possibly complete reality. I thought others on this list might appreciate it:

I welcome any discussion, feedback, or corrections.

Jason

Brent Meeker

Oct 15, 2020, 4:54:00 PM
to everyth...@googlegroups.com


On 10/15/2020 12:46 PM, Jason Resch wrote:


On Thu, Oct 15, 2020 at 1:56 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
You should have read Vic Stenger's "The Fallacy of Fine Tuning".  Vic points out how many examples of fine tuning are misconceived...including Hoyle's prediction of an excited state of carbon.  Vic also points out the fallacy of considering just one parameter when the parameter space is high dimensional.

Hi Brent,

Thanks for the suggestions. I did read Barnes's critique of TFOFT ( https://arxiv.org/abs/1112.4647 ) and I just now read Stenger's reply: https://arxiv.org/pdf/1202.4359.pdf

I think they both make some valid points. It may be that many parameters we believe are fine tuned will turn out to have other explanations. But I also think that in domains where we do have understanding, such as computational models (e.g. algorithmic information theory: what is the shortest program that produces X?), or the set of all possible cellular automata that consider only the states of adjacent cells, the number that are interesting (neither too simple nor too chaotic) is a small fraction of the total. So there is probably fine tuning, but it is, as you mention, extremely hard to quantify.
 

But my general criticism of fine-tuning is twofold.  First, the concept is not well defined.  There is no a priori probability distribution over possible values.  If the possible values are infinite, then any realized value is improbable.  Fine tuning is all in the intuition.  Charts are drawn showing little "we are here" zones to prove the fine tuning.  But the scales are sometimes linear, sometimes logarithmic.  And why those parameters and not the square?...or the square root?  Bayesian inference is not invariant under change of parameters.

At least for the cosmological constant, there seems to be some understanding of its probability distribution, and it is relatively independent of the other parameters in that it is unrelated to nucleosynthesis, chemistry, etc. Therefore it is our best candidate to consider in isolation from the other parameters in the high-dimensional space.
 

Second, calling it "fine-tuning" implies some kind of process of "tuning" or "selection".  But that's gratuitous.  Absent supernatural miracles, we must find ourselves in a universe in which we are nomologically possible.  And that is true whether there is one universe or infinitely many.  So it cannot be evidence one way or the other for the number of universes.

Let's say we did have an understanding of the distribution of possible universes and the fraction of them that support conscious life. If we discover the fraction to be 1 in 1,000,000, would this not motivate a belief that there is more than one universe?

No, because it is equally evidence that one universe (this one) was realized out of the ensemble.  You are relying on an intuition that it is easier to explain why all 1,000,000 exist than to explain why this one exists.  But that's an intuition about explaining things, not about any objective probability.  Every day things happen that are more improbable than a million-to-one.  Until Everett no one thought it necessary to suppose all the counterfactuals happened "somewhere else".

Brent


Jason

 

Brent

On 10/14/2020 7:38 PM, Jason Resch wrote:
I just finished an article on all the science behind fine-tuning, and how the evidence suggests an infinite, and possibly complete reality. I thought others on this list might appreciate it:

I welcome any discussion, feedback, or corrections.

Jason

Bruno Marchal

Oct 16, 2020, 7:44:32 AM
to everyth...@googlegroups.com
Fine-tuning + physical realism implies the many-things (many-worlds or many-histories, or many indexical relative-state).

But fine tuning is a bit like superdeterminism: it is not so much a theory as something in need of explanation.

Fine-tuning + some hypothesis like the existence and unicity of a “universe” might be seen as evidence for a designer, but it makes the designer only consistent or possible, which is far less than necessary. Ontological commitments to God or to a Universe are, like superdeterminism, more tools to abandon the research than to dig into a problem and perhaps discover something new. Fine tuning is close to being tautological, and cannot be used in an explanation, even if true.

With Digital Mechanism there is no choice in the matter: adding anything ontological to any universal machinery brings a contradiction, and the “many-worlds” is reduced to the many computations, which we know to be emulated in the arithmetical reality (indeed the common part of arithmetic already assumed by all scientists, consciously or not). Now, with Digital Mechanism, the fine tuning is organised by the modes of self-reference, and all universal machines have the same modes, and their first-person perspective can be seen as a self-fine-tuner. (Even more so for []p & <>t, and still more so for the same with the “& p”.)

What do you mean by “infinite complete reality”?  Realities or models are complete by definition. Also, “reality” is always ambiguous, as we don’t know if this refers to an arithmetical reality, a physical reality, a psychological reality, etc.

Bruno

Jason Resch

Oct 18, 2020, 1:01:56 PM
to Everything List
On Thu, Oct 15, 2020 at 3:54 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:


On 10/15/2020 12:46 PM, Jason Resch wrote:


On Thu, Oct 15, 2020 at 1:56 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
You should have read Vic Stenger's "The Fallacy of Fine Tuning".  Vic points out how many examples of fine tuning are misconceived...including Hoyle's prediction of an excited state of carbon.  Vic also points out the fallacy of considering just one parameter when the parameter space is high dimensional.

Hi Brent,

Thanks for the suggestions. I did read Barnes's critique of TFOFT ( https://arxiv.org/abs/1112.4647 ) and I just now read Stenger's reply: https://arxiv.org/pdf/1202.4359.pdf

I think they both make some valid points. It may be that many parameters we believe are fine tuned will turn out to have other explanations. But I also think that in domains where we do have understanding, such as computational models (e.g. algorithmic information theory: what is the shortest program that produces X?), or the set of all possible cellular automata that consider only the states of adjacent cells, the number that are interesting (neither too simple nor too chaotic) is a small fraction of the total. So there is probably fine tuning, but it is, as you mention, extremely hard to quantify.
 

But my general criticism of fine-tuning is twofold.  First, the concept is not well defined.  There is no a priori probability distribution over possible values.  If the possible values are infinite, then any realized value is improbable.  Fine tuning is all in the intuition.  Charts are drawn showing little "we are here" zones to prove the fine tuning.  But the scales are sometimes linear, sometimes logarithmic.  And why those parameters and not the square?...or the square root?  Bayesian inference is not invariant under change of parameters.

At least for the cosmological constant, there seems to be some understanding of its probability distribution, and it is relatively independent of the other parameters in that it is unrelated to nucleosynthesis, chemistry, etc. Therefore it is our best candidate to consider in isolation from the other parameters in the high-dimensional space.
 

Second, calling it "fine-tuning" implies some kind of process of "tuning" or "selection".  But that's gratuitous.  Absent supernatural miracles, we must find ourselves in a universe in which we are nomologically possible.  And that is true whether there is one universe or infinitely many.  So it cannot be evidence one way or the other for the number of universes.

Let's say we did have an understanding of the distribution of possible universes and the fraction of them that support conscious life. If we discover the fraction to be 1 in 1,000,000, would this not motivate a belief that there is more than one universe?

No, because it is equally evidence that one universe (this one) was realized out of the ensemble.  You are relying on an intuition that it is easier to explain why all 1,000,000 exist than to explain why this one exists.

I don't see why that follows. If we dispense with all intuition, and erase all prior biases, then the Bayesian priors for the two mutually exclusive and collectively exhaustive theories, "fewer than two universes exist" and "more than one universe exists", should be equal. Do you agree so far?

Then upon discovering that "at least one universe exists and it supports life", we don't necessarily update either posterior probability, but we could refine the former statement to "one universe exists", since we have ruled out "zero universes exist".

Then, should we one day discover that "only 1 out of 1,000,000 possible universes supports life", should we not update our posterior probabilities again, such that "more than one universe exists" has at least twice the probability of "one universe exists"?

Why or why not?
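For concreteness, the update being described can be sketched like this (the likelihoods are illustrative; whether P(observation | many universes) ≈ 1 is legitimate is of course exactly the point in dispute):

```python
# H1: exactly one universe exists; H2: more than one universe exists.
# Equal priors. Evidence E: "we observe a life-supporting universe".
# Illustrative assumption: a lone universe supports life with chance 1e-6,
# while with enough universes anthropic selection makes P(E | H2) ~ 1.
prior_h1, prior_h2 = 0.5, 0.5
like_h1, like_h2 = 1e-6, 1.0

evidence = prior_h1 * like_h1 + prior_h2 * like_h2
post_h1 = prior_h1 * like_h1 / evidence
post_h2 = prior_h2 * like_h2 / evidence

print(f"P(one universe | E)   = {post_h1:.2e}")
print(f"P(many universes | E) = {post_h2:.6f}")
print("posterior ratio:", post_h2 / post_h1)
```

Under these assumptions the posterior ratio is about a million to one, far more than the factor of two asked about; the argument stands or falls on whether the anthropic likelihood assignment is admissible.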

Jason

 
  But that's an intuition about explaining things, not about any objective probability.  Every day things happen that are more improbable than a million-to-one.  Until Everett no one thought it necessary to suppose all the counterfactuals happened "somewhere else".

Brent


Jason

 

Brent

On 10/14/2020 7:38 PM, Jason Resch wrote:
I just finished an article on all the science behind fine-tuning, and how the evidence suggests an infinite, and possibly complete reality. I thought others on this list might appreciate it:

I welcome any discussion, feedback, or corrections.

Jason

Jason Resch

Oct 18, 2020, 1:08:40 PM
to Everything List
On Fri, Oct 16, 2020 at 6:44 AM Bruno Marchal <mar...@ulb.ac.be> wrote:

On 15 Oct 2020, at 04:38, Jason Resch <jason...@gmail.com> wrote:

I just finished an article on all the science behind fine-tuning, and how the evidence suggests an infinite, and possibly complete reality. I thought others on this list might appreciate it:

I welcome any discussion, feedback, or corrections.


Fine-tuning + physical realism implies the many-things (many-worlds or many-histories, or many indexical relative-state).

But fine-tuning is a bit like superdeterminism: it is not so much a theory as something in need of explanation.

I agree: fine-tuning is something that calls for explanation, an apparent mystery.
 

Fine-tuning + some hypothesis like the existence and unicity of a “universe” might be seen as evidence for a designer, but it makes a designer only consistent or possible, which is far less than necessary. Ontological commitments to God or a Universe are, like super-determinism, more tools to abandon the research than to dig into a problem and perhaps discover something new. Fine-tuning is close to being tautological, and cannot be used in an explanation, even if true.

I consider the appearance of fine-tuning as evidence for a reality that is much greater than the universe we can see.
 

With Digital Mechanism, there is no matter of choice: adding anything ontological to any universal machinery brings a contradiction, and the “many-worlds” is reduced to the many-computations, which we know to be emulated in the arithmetical reality (indeed the common part of arithmetic already assumed by all scientists, consciously or not). Now, with Digital Mechanism, the fine-tuning is organised by the modes of self-reference; all universal machines have the same modes, and their first-person perspective can be seen as a self-fine-tuner. (Even more so for []p & <>t, and still more so with the “& p”.)

What do you mean by “infinite complete reality”?

By complete I mean that anything that is possible to exist does. I see your point though that "possible" depends on the model one assumes, which I did leave open.
 
 Realities or models are complete by definition. Also, “reality” is always ambiguous, as we don’t know if this refers to an arithmetical reality, a physical reality, or a psychological reality, etc.


That's a good point, and it does need clarification. I will be sure to make the assumed model clear when I write on the subject of "why does anything exist", for which arithmetic appears to be the simplest model compatible with our current observations.

Jason

Bruno Marchal

Oct 20, 2020, 8:39:17 AM
to everyth...@googlegroups.com
On 15 Oct 2020, at 20:56, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:

You should have read Vic Stenger's "The Fallacy of Fine Tuning".  Vic points out how many examples of  fine tuning are mis-conceived...including Hoyle's prediction of an excited state of carbon.  Vic also points out the fallacy of just considering one parameter when the parameter space is high dimensional.

But my general criticism of fine-tuning is two-fold.  First, the concept is not well defined.  There is no a priori probability distribution over possible values.  If the possible values are infinite, then any realized value is improbable. 


I don’t think so. That is why Kolmogorov defines a measure space by forbidding infinite intersections of events. In the finite case, the space of events is the complete Boolean structure coming from the subsets of the set of possible results. In the infinite domain, the measure space is defined by a strict subset. Perhaps I am missing something, but the axiomatics of Kolmogorov were invented to solve that “infinite number of values” problem. 

But I do agree that fine-tuning is not always well defined and is sometimes misused. Yet the choice is between a fine-tuner (but who is it, and how does it do the selection?) and a multiverse. Even if real, a fine-tuner explains nothing without some explanation of where the fine-tuner comes from. In a multiverse or multi-computations (like the sigma_1 arithmetic), consciousness is the fine-tuner, and that one is already explained by the (Löbian) universal machine.





Fine tuning is all in the intuition.  Charts are drawn showing little "we are here" zones to prove the fine tuning.  But the scales are sometimes linear, sometimes logarithmic.  And why those parameters and not the square?...or the square root?  Bayesian inference is not invariant under change of parameters.

That depends on your OMEGA in the probability space, and the measure you put on the set of events.




Second, calling it "fine-tuning" implies some kind of process of "tuning" or "selection".  But that's gratuitous. 

Yes. Ad hoc, and it hides the problem behind a bigger problem instead of solving it. 



Absent supernatural miracles, we must find ourselves in a universe in which we are nomologically possible.

That will be the relative histories with measure near one. Sort of history-neighbourhoods.



  And that is true whether there is one universe or infinitely many.

… or none.


  So it cannot be evidence one way or the other for the number of universes.

To count the universes, we should be clearer about what such a term means.

Bruno





Brent

On 10/14/2020 7:38 PM, Jason Resch wrote:
I just finished an article on all the science behind fine-tuning, and how the evidence suggests an infinite, and possibly complete reality. I thought others on this list might appreciate it:

I welcome any discussion, feedback, or corrections.

Jason

Bruno Marchal

Oct 20, 2020, 8:44:48 AM
to everyth...@googlegroups.com
On 15 Oct 2020, at 22:53, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:



On 10/15/2020 12:46 PM, Jason Resch wrote:


On Thu, Oct 15, 2020 at 1:56 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
You should have read Vic Stenger's "The Fallacy of Fine Tuning".  Vic points out how many examples of  fine tuning are mis-conceived...including Hoyle's prediction of an excited state of carbon.  Vic also points out the fallacy of just considering one parameter when the parameter space is high dimensional.

Hi Brent,

Thanks for the suggestions. I did read Barnes's critique of TFOFT ( https://arxiv.org/abs/1112.4647 ) and I just now read Stenger's reply: https://arxiv.org/pdf/1202.4359.pdf

I think they both make some valid points. It may be that many parameters we believe are fine-tuned will turn out to have other explanations. But I also think that in domains where we do have understanding, such as computational models (e.g., algorithmic information theory: what is the shortest program that produces X?), or the set of all possible cellular automata that consider only the states of adjacent cells, the number that are interesting (neither too simple nor too chaotic) is a small fraction of the total. So there is probably fine-tuning, but it is, as you mention, extremely hard to quantify.
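(As a minimal sketch of that model space, not anything from the article: the 256 elementary cellular automata are exactly the rules that look only at the left, center, and right cells, and even a crude statistic separates the degenerate rules from the rest. The width, step count, and "distinct rows" proxy below are arbitrary choices for illustration.)

```python
# Elementary cellular automata: 256 possible rules over (left, center, right).
# Crude "interestingness" proxy: count distinct rows produced from a
# single-cell seed. Degenerate rules like rule 0 collapse immediately.
def step(row, rule):
    """One synchronous update of an elementary CA with periodic boundaries."""
    n = len(row)
    return tuple(
        (rule >> ((row[(i - 1) % n] << 2) | (row[i] << 1) | row[(i + 1) % n])) & 1
        for i in range(n)
    )

def distinct_rows(rule, width=31, steps=15):
    """Run `rule` from a single live cell; return how many distinct rows appear."""
    row = tuple(1 if i == width // 2 else 0 for i in range(width))
    seen = {row}
    for _ in range(steps):
        row = step(row, rule)
        seen.add(row)
    return len(seen)
```

Rule 0 produces only two distinct rows (the seed and the all-zero row), while a rule like 110 keeps generating new ones; a real analysis would need a much better statistic than this, which is part of why the interesting fraction is so hard to quantify.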
 

But my general criticism of fine-tuning is two-fold.  First, the concept is not well defined.  There is no a priori probability distribution over possible values.  If the possible values are infinite, then any realized value is improbable.  Fine tuning is all in the intuition.  Charts are drawn showing little "we are here" zones to prove the fine tuning.  But the scales are sometimes linear, sometimes logarithmic.  And why those parameters and not the square?...or the square root?  Bayesian inference is not invariant under change of parameters.

At least for the cosmological constant, there seems to be some understanding of its probability distribution, and it is relatively independent of the other parameters in that it is unrelated to nucleosynthesis, chemistry, etc. Therefore it is our best candidate to consider in isolation from the other parameters in the high-dimensional space.
 

Second, calling it "fine-tuning" implies some kind of process of "tuning" or "selection".  But that's gratuitous.  Absent supernatural miracles, we must find ourselves in a universe in which we are nomologically possible.  And that is true whether there is one universe or infinitely many.  So it cannot be evidence one way or the other for the number of universes.

Let's say we did have an understanding of the distribution of possible universes and the fraction of which supported conscious life. If we discover the fraction to be 1 in 1,000,000 would this not motivate a belief in there being more than one universe?

No, because it is equally evidence that one universe (this one) was realized out of the ensemble.  You are relying on an intuition that it is easier to explain why all 1,000,000 exist than to explain why this one exists.  But that's an intuition about explaining things, not about any objective probability.  Every day things happen that are more improbable than a million-to-one. 


You need to take all the histories, which we know exist in arithmetic; then consciousness will differentiate on those histories which seem to be fine-tuned. As you say, we have to eliminate the selector, except for consciousness. 

 Until Everett no one thought it necessary to suppose all the counterfactuals happened "somewhere else”.


Well, there was Borges of course, and the idea is present in the whole of neoplatonism, arguably. Then, for anyone who believes that 777 is odd independently of him/herself, all computations are run independently of anyone.

Bruno



Brent


Jason

 

Brent

On 10/14/2020 7:38 PM, Jason Resch wrote:
I just finished an article on all the science behind fine-tuning, and how the evidence suggests an infinite, and possibly complete reality. I thought others on this list might appreciate it:

I welcome any discussion, feedback, or corrections.

Jason

Bruno Marchal

Oct 20, 2020, 8:55:45 AM
to everyth...@googlegroups.com
On 18 Oct 2020, at 19:08, Jason Resch <jason...@gmail.com> wrote:



On Fri, Oct 16, 2020 at 6:44 AM Bruno Marchal <mar...@ulb.ac.be> wrote:

On 15 Oct 2020, at 04:38, Jason Resch <jason...@gmail.com> wrote:

I just finished an article on all the science behind fine-tuning, and how the evidence suggests an infinite, and possibly complete reality. I thought others on this list might appreciate it:

I welcome any discussion, feedback, or corrections.


Fine-tuning + physical realism implies the many-things (many-worlds or many-histories, or many indexical relative-state).

But fine-tuning is a bit like superdeterminism: it is not so much a theory as something in need of explanation.

I agree: fine-tuning is something that calls for explanation, an apparent mystery.
 

Fine-tuning + some hypothesis like the existence and unicity of a “universe” might be seen as evidence for a designer, but it makes a designer only consistent or possible, which is far less than necessary. Ontological commitments to God or a Universe are, like super-determinism, more tools to abandon the research than to dig into a problem and perhaps discover something new. Fine-tuning is close to being tautological, and cannot be used in an explanation, even if true.

I consider the appearance of fine-tuning as evidence for a reality that is much greater than the universe we can see.

I agree with you on this. Now, with the Digital Mechanist hypothesis (in cognitive science), we know that we can limit that reality to any model of any elementary theory of arithmetic, which is so big that it is not even definable in any first-order theory, not even in ZF; and higher-order theories are not really theories at all, unless made effective, but then they are representable in first-order theories. The arithmetical reality is 3p-big, but very small compared to a model of ZF. Yet the internal view by the universal machine in arithmetic is bigger than a model of ZF. It is inconceivably bigger than ourselves.



 

With Digital Mechanism, there is no matter of choice: adding anything ontological to any universal machinery brings a contradiction, and the “many-worlds” is reduced to the many-computations, which we know to be emulated in the arithmetical reality (indeed the common part of arithmetic already assumed by all scientists, consciously or not). Now, with Digital Mechanism, the fine-tuning is organised by the modes of self-reference; all universal machines have the same modes, and their first-person perspective can be seen as a self-fine-tuner. (Even more so for []p & <>t, and still more so with the “& p”.)

What do you mean by “infinite complete reality”?

By complete I mean that anything that is possible to exist does. I see your point though that "possible" depends on the model one assumes, which I did leave open.

OK.



 
 Realities or models are complete by definition. Also, “reality” is always ambiguous, as we don’t know if this refers to an arithmetical reality, a physical reality, or a psychological reality, etc.


That's a good point, and it does need clarification. I will be sure to make the assumed model clear when I write on the subject of "why does anything exist", for which arithmetic appears to be the simplest model compatible with our current observations.

It is not that simple, as the arithmetical reality, in its entirety, is not axiomatizable. That realm is essentially undecidable, and things get “worse” when viewed from inside. That is why it is the *conceptually* simplest choice. There cannot be any other (except those Turing-equivalent), because if you add something, it would need to be non-Turing-emulable, and not first-person recoverable, so as not to be part of the arithmetical machine phenomenology, which would raise a serious doubt before saying yes to a mechanist doctor.

Bruno





Jason


Brent Meeker

Oct 20, 2020, 2:23:04 PM
to everyth...@googlegroups.com


On 10/20/2020 5:39 AM, Bruno Marchal wrote:

On 15 Oct 2020, at 20:56, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:

You should have read Vic Stenger's "The Fallacy of Fine Tuning".  Vic points out how many examples of  fine tuning are mis-conceived...including Hoyle's prediction of an excited state of carbon.  Vic also points out the fallacy of just considering one parameter when the parameter space is high dimensional.

But my general criticism of fine-tuning is two-fold.  First, the concept is not well defined.  There is no a priori probability distribution over possible values.  If the possible values are infinite, then any realized value is improbable. 


I don’t think so. That is why Kolmogorov defines a measure space by forbidding infinite intersections of events. In the finite case, the space of events is the complete Boolean structure coming from the subsets of the set of possible results. In the infinite domain, the measure space is defined by a strict subset. Perhaps I am missing something, but the axiomatics of Kolmogorov were invented to solve that “infinite number of values” problem.

That's a non-answer.  I was just using infinite (as physicists do) to mean bigger than anything we're thinking of.  Kolmogorov just shaped his definition to make the mathematics simpler.  There's nothing in Jason's analyses that defines the variables as finite.  Jason just helps himself to an intuition that a value between 7.5 and 7.7 is "fine-tuned".  He didn't first justify the finite interval. 


But I do agree that fine-tuning is not always well defined and is sometimes misused. Yet the choice is between a fine-tuner (but who is it, and how does it do the selection?) and a multiverse. Even if real, a fine-tuner explains nothing without some explanation of where the fine-tuner comes from. In a multiverse or multi-computations (like the sigma_1 arithmetic), consciousness is the fine-tuner, and that one is already explained by the (Löbian) universal machine.





Fine tuning is all in the intuition.  Charts are drawn showing little "we are here" zones to prove the fine tuning.  But the scales are sometimes linear, sometimes logarithmic.  And why those parameters and not the square?...or the square root?  Bayesian inference is not invariant under change of parameters.

That depends on your OMEGA in the probability space, and the measure you put on the set of events.

Exactly so.






Second, calling it "fine-tuning" implies some kind of process of "tuning" or "selection".  But that's gratuitous. 

Yes. Ad hoc, and it hides the problem behind a bigger problem instead of solving it. 



Absent supernatural miracles, we must find ourselves in a universe in which we are nomologically possible.

That will be the relative histories with measure near one. Sort of history-neighbourhoods.

By what measure?




  And that is true whether there is one universe or infinitely many.

… or none.


  So it cannot be evidence one way or the other for the number of universes.

To count the universes, we should be clearer about what such a term means.

Bruno





Brent

On 10/14/2020 7:38 PM, Jason Resch wrote:
I just finished an article on all the science behind fine-tuning, and how the evidence suggests an infinite, and possibly complete reality. I thought others on this list might appreciate it:

I welcome any discussion, feedback, or corrections.

Jason

Brent Meeker

Oct 20, 2020, 2:26:42 PM
to everyth...@googlegroups.com


On 10/20/2020 5:44 AM, Bruno Marchal wrote:

On 15 Oct 2020, at 22:53, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:



On 10/15/2020 12:46 PM, Jason Resch wrote:


On Thu, Oct 15, 2020 at 1:56 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
You should have read Vic Stenger's "The Fallacy of Fine Tuning".  Vic points out how many examples of  fine tuning are mis-conceived...including Hoyle's prediction of an excited state of carbon.  Vic also points out the fallacy of just considering one parameter when the parameter space is high dimensional.

Hi Brent,

Thanks for the suggestions. I did read Barnes's critique of TFOFT ( https://arxiv.org/abs/1112.4647 ) and I just now read Stenger's reply: https://arxiv.org/pdf/1202.4359.pdf

I think they both make some valid points. It may be that many parameters we believe are fine-tuned will turn out to have other explanations. But I also think that in domains where we do have understanding, such as computational models (e.g., algorithmic information theory: what is the shortest program that produces X?), or the set of all possible cellular automata that consider only the states of adjacent cells, the number that are interesting (neither too simple nor too chaotic) is a small fraction of the total. So there is probably fine-tuning, but it is, as you mention, extremely hard to quantify.
 

But my general criticism of fine-tuning is two-fold.  First, the concept is not well defined.  There is no a priori probability distribution over possible values.  If the possible values are infinite, then any realized value is improbable.  Fine tuning is all in the intuition.  Charts are drawn showing little "we are here" zones to prove the fine tuning.  But the scales are sometimes linear, sometimes logarithmic.  And why those parameters and not the square?...or the square root?  Bayesian inference is not invariant under change of parameters.

At least for the cosmological constant, there seems to be some understanding of its probability distribution, and it is relatively independent of the other parameters in that it is unrelated to nucleosynthesis, chemistry, etc. Therefore it is our best candidate to consider in isolation from the other parameters in the high-dimensional space.
 

Second, calling it "fine-tuning" implies some kind of process of "tuning" or "selection".  But that's gratuitous.  Absent supernatural miracles, we must find ourselves in a universe in which we are nomologically possible.  And that is true whether there is one universe or infinitely many.  So it cannot be evidence one way or the other for the number of universes.

Let's say we did have an understanding of the distribution of possible universes and the fraction of which supported conscious life. If we discover the fraction to be 1 in 1,000,000 would this not motivate a belief in there being more than one universe?

No, because it is equally evidence that one universe (this one) was realized out of the ensemble.  You are relying on an intuition that it is easier to explain why all 1,000,000 exist than to explain why this one exists.  But that's an intuition about explaining things, not about any objective probability.  Every day things happen that are more improbable than a million-to-one. 


You need to take all the histories, which we know exist in arithmetic,

I don't know what "exists in arithmetic" has to do with existence.


then consciousness will differentiate on those histories which seem to be fine-tuned. As you say, we have to eliminate the selector, except for consciousness. 

 Until Everett no one thought it necessary to suppose all the counterfactuals happened "somewhere else”.


Well, there was Borges of course, and the idea is present in the whole of neoplatonism, arguably. Then, for anyone who believes that 777 is odd independently of him/herself, all computations are run independently of anyone.

That's a non sequitur.  One can try dividing 777 by 2.  One can't verify whether all computations are run independently of anyone or not.

Brent

Jason Resch

Oct 20, 2020, 4:20:30 PM
to Everything List
On Tue, Oct 20, 2020 at 1:23 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:


On 10/20/2020 5:39 AM, Bruno Marchal wrote:

On 15 Oct 2020, at 20:56, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:

You should have read Vic Stenger's "The Fallacy of Fine Tuning".  Vic points out how many examples of  fine tuning are mis-conceived...including Hoyle's prediction of an excited state of carbon.  Vic also points out the fallacy of just considering one parameter when the parameter space is high dimensional.

But my general criticism of fine-tuning is two-fold.  First, the concept is not well defined.  There is no a priori probability distribution over possible values.  If the possible values are infinite, then any realized value is improbable. 


I don’t think so. That is why Kolmogorov defines a measure space by forbidding infinite intersections of events. In the finite case, the space of events is the complete Boolean structure coming from the subsets of the set of possible results. In the infinite domain, the measure space is defined by a strict subset. Perhaps I am missing something, but the axiomatics of Kolmogorov were invented to solve that “infinite number of values” problem.

That's a non-answer.  I was just using infinite (as physicists do) to mean bigger than anything we're thinking of.  Kolmogorov just shaped his definition to make the mathematics simpler.  There's nothing in Jason's analyses that defines the variables as finite.  Jason just helps himself to an intuition that a value between 7.5 and 7.7 is "fine-tuned".  He didn't first justify the finite interval. 

I admit as much in the article. For most parameters, we don't understand the range or probability distribution for the constants. However, see my explanation for the cosmological constant, a value for which the theory can account for the expected range and probability distribution.

Jason

Jason Resch

Oct 20, 2020, 4:28:02 PM
to Everything List
If you accept the independent truth of the equation "Y = 2X+1" in the case of  "Y=777" and an integer X, then you should likewise also accept the existence of all computations, as a consequence of the equation defined here: ftp://ftp.math.ethz.ch/hg/EMIS/journals/AMI/2003/jones.pdf 
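(A minimal illustration of the first half of that claim; the substantive second half is supplied by the MRDP theorem, under which "this Diophantine equation has a solution" is expressive enough to encode any computation.)

```python
# The arithmetical fact in question: Y = 2X + 1 with Y = 777 has an
# integer solution, whether or not anyone checks it.
X = (777 - 1) // 2
assert 2 * X + 1 == 777   # X = 388, i.e. 777 is odd
# By the MRDP theorem, solvability of a single (much larger) Diophantine
# equation can encode the halting of any program on any input, which is
# the sense in which all computations are present in arithmetic.
```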

Jason

Jason Resch

Oct 20, 2020, 4:36:11 PM
to Everything List
Sorry, that was the incorrect link, it should have been: 



 
Jason

Brent Meeker

Oct 20, 2020, 5:37:20 PM
to everyth...@googlegroups.com
Then how can you assert there is fine-tuning?  Does a value of 20±1 qualify?  Does it matter whether the possible range was (0,100) or (19,21)?
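(A quick numerical sketch of why those choices matter; the uniform priors and the x → x² reparameterization are illustrative assumptions, not anything proposed in the article.)

```python
# How "fine-tuned" an interval looks depends both on the assumed range
# and on the assumed parameterization; neither is handed to us by physics.
def p_uniform(lo, hi, a, b):
    """P(x in (lo, hi)) for x uniform on (a, b), clipped to the support."""
    lo, hi = max(lo, a), min(hi, b)
    return max(hi - lo, 0.0) / (b - a)

p_wide = p_uniform(19, 21, 0, 100)      # range (0, 100): 0.02
p_narrow = p_uniform(19, 21, 19, 21)    # range (19, 21): 1.0
# Same interval, but with the prior uniform in x**2 rather than in x:
p_square = p_uniform(19**2, 21**2, 0, 100**2)   # 80/10000 = 0.008
```

The same "life-permitting" interval comes out 100% likely, 2% likely, or 0.8% likely depending on those choices, which is the non-invariance under change of parameters mentioned earlier in the thread.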


However, see my explanation for the cosmological constant, a value for which the theory can account for the expected range and probability distribution.

That's right, there is a theory that tells us something about a range and probability distribution.  But it's far from an accepted theory, and might well be wrong.

Brent

Brent Meeker

Oct 20, 2020, 5:46:03 PM
to everyth...@googlegroups.com
How can you be so casual about leaping from "This statement is true." to "The relation it expresses entails that the relata exist."?  "True" and "exist" are even different words.  "Watson is the companion of Holmes" is true in many logics (just note that its negation is false), yet nobody thinks it makes Sherlock Holmes into a person who existed.  In mathematics, "exists" means having a value that satisfies (makes true) an expression.  It says nothing about whether you can kick it and whether it kicks back.

Brent

as a consequence of the equation defined here: ftp://ftp.math.ethz.ch/hg/EMIS/journals/AMI/2003/jones.pdf 

Jason

PGC

Oct 23, 2020, 4:47:59 AM
to Everything List
That's the pleasant feature of platonism: abstract relations are descriptively effective, therefore realism concerning their premisses, axioms, and objects is justified. And that's a large, platonically casual leap, from propping up abstractions that hold descriptively to claiming reality for them; it's more a linguistic mirage than an arithmetical one, which imho counts as evidence that mathematics is something language performs rather than the opposite. As the comics posted here occasionally point out for fun, who has ever met the number 5? PGC    

Bruno Marchal

Oct 23, 2020, 6:49:04 AM
to everyth...@googlegroups.com
On 20 Oct 2020, at 20:22, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:



On 10/20/2020 5:39 AM, Bruno Marchal wrote:

On 15 Oct 2020, at 20:56, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:

You should have read Vic Stenger's "The Fallacy of Fine Tuning".  Vic points out how many examples of  fine tuning are mis-conceived...including Hoyle's prediction of an excited state of carbon.  Vic also points out the fallacy of just considering one parameter when the parameter space is high dimensional.

But my general criticism of fine-tuning is two-fold.  First, the concept is not well defined.  There is no a priori probability distribution over possible values.  If the possible values are infinite, then any realized value is improbable. 


I don’t think so. That is why Kolmogorov defines a measure space by forbidding infinite intersections of events. In the finite case, the space of events is the complete Boolean structure coming from the subsets of the set of possible results. In the infinite domain, the measure space is defined by a strict subset. Perhaps I am missing something, but the axiomatics of Kolmogorov were invented to solve that “infinite number of values” problem.

That's a non-answer.  I was just using infinite (as physicists do) to mean bigger than anything we're thinking of.  Kolmogorov just shaped his definition to make the mathematics simpler.  There's nothing in Jason's analyses that defines the variables as finite.  Jason just helps himself to an intuition that a value between 7.5 and 7.7 is "fine-tuned".  He didn't first justify the finite interval. 

That certainly deserves more analysis. The idea is to run the laws in theory, or in a computer simulation, and if we see that, say, life cannot happen when the value of some constant is out of a small interval, like (7, 8), say, and that the known constant is 7.3, we can say that there is some “fine tuning” (being neutral on whether that comes from a designer or from a many-worlds). Now, “small” is not a precise term, but it can be made more precise in diverse ways. We can talk of r-fine-tuning, with r measuring some degree of “smallness”. What I mean is that although “fine tuning” is informal and not precise, the idea can have some merit.
According to some astrophysicists, without Jupiter, it is unclear if life could have evolved. It is unlikely also, for some, that life would have evolved without the previous extinctions, including one due to a supernova and those due to asteroid impacts, and there is a sense in which (from our first-person plural perspective) the solar system, and its history, was “fine tuned” for life to happen and evolve to human consciousness. That might be a beginning of a solution to the Fermi paradox, but of course, it only means that our “history” is a rare sequence of events… 
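Bruno's “r-fine-tuning” idea can be sketched in a few lines of Python. Everything here is an illustrative assumption — the prior range (0, 100), the life-permitting interval (7, 8), and the uniform measure are exactly the choices the rest of the thread disputes:

```python
# Illustrative sketch of an "r-fine-tuning" degree: the fraction of an
# assumed prior range occupied by a life-permitting interval.
# All numbers are hypothetical, and the uniform prior is itself an assumption.

def fine_tuning_degree(permitting, prior_range):
    """Width of the life-permitting interval divided by the prior width."""
    lo, hi = permitting
    a, b = prior_range
    return (hi - lo) / (b - a)

r = fine_tuning_degree(permitting=(7, 8), prior_range=(0, 100))
print(r)  # 0.01 — "fine-tuned" only relative to the assumed range
```

The same interval looks dramatically tuned under the range (0, 100) but hardly tuned at all under (6, 9), which is essentially the point Brent presses about justifying the interval in the first place.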





But I do agree that fine-tuning is not always well defined and sometimes misused. Yet I agree that the choice is between a fine tuner (but who is it, and how does it do the selection?) and something else. Even if real, a fine tuner explains nothing without some explanation of where the fine tuner comes from. In a multiverse or multi-computations (like the sigma_1 arithmetic), consciousness is the fine tuner, and that one is already explained by the (Löbian) universal machine.





Fine tuning is all in the intuition.  Charts are drawn showing little "we are here" zones to prove the fine tuning.  But the scales are sometimes linear, sometimes logarithmic.  And why those parameters and not the square?...or the square root?  Bayesian inference is not invariant under change of parameters.

That depends on your OMEGA in the probability space, and the measure you put on the set of events.

Exactly so.

OK. So I think we mainly agree on all this. I think Jason agrees implicitly that finding the “OMEGA” for the physical constants (assuming that exists) is a difficult task. The proof that this is impossible is rather difficult too.
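Brent's remark that Bayesian inference is not invariant under change of parameters can be made concrete with a toy computation: a prior uniform in a constant x and a prior uniform in log x assign different probabilities to the very same “we are here” zone. The ranges and interval below are hypothetical:

```python
import math

# Probability that x lands in the zone (7, 8) given the prior range (0.1, 100),
# under two choices of measure: uniform in x, and uniform in log x.
# Same event, same range, different answers — so "the life-permitting zone is
# tiny" depends on the parameterization, not just on the physics.

lo, hi = 0.1, 100.0
zone = (7.0, 8.0)

p_linear = (zone[1] - zone[0]) / (hi - lo)
p_log = (math.log(zone[1]) - math.log(zone[0])) / (math.log(hi) - math.log(lo))

print(p_linear)  # ≈ 0.0100
print(p_log)     # ≈ 0.0193
```

Nearly a factor of two between the two answers, from nothing but a change of variables.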







Second, calling it "fine-tuning" implies some kind of process of "tuning" or "selection".  But that's gratuitous. 

Yes. Ad hoc, and it hides the problem behind a bigger problem, instead of solving it. 



Absent supernatural miracles, we must find ourselves in a universe in which we are nomologically possible.

That will be the relative histories with measure near one. Sort of history-neighbourhoods.

By what measure?

The measure is given by the relevant semantics of the (first person) material modes, and for the case of ([]p & p) and ([]p & <>t & p), I have discovered recently that the measure is given by a Lebesgue measure on the set sigma_1 = the union over all a of sigma_1(a) (the union of what is sigma_1-computable in the oracle a, for all oracles a). That gives nice Borel sets with a sigma-additive measure. That comes directly from the first person indeterminacy on all programs + all oracles.
Such a measure can be shown to exist in some reasonable extension of ZFC (ZFC + projective determinacy).

Of course, we know empirically that such a measure exists (very plausibly), and with Mechanism, the fact that we get a quantum-like Lebesgue measure is a confirmation (not a proof, of course) of digital mechanism. 

Bruno


Bruno Marchal

Oct 23, 2020, 6:49:05 AM10/23/20
to everyth...@googlegroups.com
On 20 Oct 2020, at 20:26, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:



On 10/20/2020 5:44 AM, Bruno Marchal wrote:

On 15 Oct 2020, at 22:53, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:



On 10/15/2020 12:46 PM, Jason Resch wrote:


On Thu, Oct 15, 2020 at 1:56 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
You should have read Vic Stenger's "The Fallacy of Fine Tuning".  Vic points out how many examples of  fine tuning are mis-conceived...including Hoyle's prediction of an excited state of carbon.  Vic also points out the fallacy of just considering one parameter when the parameter space is high dimensional.

Hi Brent,

Thanks for the suggestions. I did read Barnes's critique of TFOFT ( https://arxiv.org/abs/1112.4647 ) and I just now read Stenger's reply: https://arxiv.org/pdf/1202.4359.pdf

I think they both make some valid points. It may be that many parameters we believe are fine tuned will turn out to have other explanations. But I also think in domains where we do have understanding, such as in computational models (e.g., algorithmic information theory: what is the shortest program that produces X), or in the set of all possible cellular automata that only consider the states of adjacent cells, the number that are interesting (neither too simple nor too chaotic) is a small fraction of the total. So there is probably fine tuning, but it is, as you mention, extremely hard to quantify.
 

But my general criticism of fine-tuning is two-fold.  First, the concept is not well defined.  There is no a priori probability distribution over possible values.  If the possible values are infinite, then any realized value is improbable.  Fine tuning is all in the intuition.  Charts are drawn showing little "we are here" zones to prove the fine tuning.  But the scales are sometimes linear, sometimes logarithmic.  And why those parameters and not the square?...or the square root?  Bayesian inference is not invariant under change of parameters.

At least for the cosmological constant, there seems to be some understanding of its probability distribution, and it is relatively independent of the other parameters in that it is unrelated to nucleosynthesis, chemistry, etc. Therefore it is our best candidate to consider in isolation from the other parameters in the high-dimensional space.
 

Second, calling it "fine-tuning" implies some kind of process of "tuning" or "selection".  But that's gratuitous.  Absent supernatural miracles, we must find ourselves in a universe in which we are nomologically possible.  And that is true whether there is one universe or infinitely many.  So it cannot be evidence one way or the other for the number of universes.

Let's say we did have an understanding of the distribution of possible universes and the fraction of which supported conscious life. If we discover the fraction to be 1 in 1,000,000 would this not motivate a belief in there being more than one universe?

No, because it is equally evidence that one universe (this one) was realized out of the ensemble.  You are relying on an intuition that it is easier to explain why all 1,000,000 exist than to explain why this one exists.  But that's an intuition about explaining things, not about any objective probability.  Every day things happen that are more improbable than a million-to-one. 
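A toy Bayes calculation makes the disagreement explicit. Suppose, hypothetically, a fraction p = 1e-6 of possible universes permit observers, and compare the chance that some observer-permitting universe exists under one draw versus 10^6 independent draws. The sketch assumes independence and a known p, and deliberately leaves open Brent's question of whether “at least one exists” is even the right event to condition on under anthropic selection:

```python
# Toy comparison: probability that at least one observer-permitting universe
# exists, given a hypothetical per-universe chance p, for N universes.
# Under anthropic selection we always *observe* a permitting one, which is
# Brent's objection: the observation may not discriminate between the cases.

p = 1e-6

def p_at_least_one(n, p=p):
    return 1 - (1 - p) ** n

single = p_at_least_one(1)      # = p = 1e-6
multi = p_at_least_one(10**6)   # ≈ 1 - 1/e ≈ 0.632

print(single, multi)
```

The likelihood ratio looks enormous, but only if "at least one permitting universe exists" is the datum; if the datum is "we observe a permitting universe", both hypotheses predict it with probability 1, which is Brent's point.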


You need to take all the histories, which we know exists in arithmetic,

I don't know what "exists in arithmetic" has to do with existence.

We cannot prove the existence of any universal machinery without assuming at least one of them.

Any of them will do, so I can assume at the start the one most people are already familiar with: very elementary arithmetic (Robinson Arithmetic). So I assume 0, s(0), s(s(0)), etc. 

By exist (fundamentally, or ontologically, or “really”) I mean the existence of 0, 1, 2, 3, … Only that exists ontologically.

By an observer I mean a number n such that phi_n is a Turing universal and Löbian function (or n is a Turing Löbian machine), and I note “[]” its provability predicate. Incompleteness imposes on that machine/number the distinction of the 8 modes (that I have described many times, OK?). 
For each mode [i] we have a corresponding notion of phenomenological existence. With [0] = Gödel’s beweisbar ([]), and [i] = one of the seven remaining modes, for example [1]p = [0]p & p, [2]p = [0]p & <0>t & p, etc. (p always sigma_1).

Psychological existence can then be defined by [1](Ex [1]P(x)), physical existence is something like [2]<2>(Ex [2]<2>P(x)), etc.

There is only one reality (the “standard model of arithmetic”), but with many internal modes, which are the modal variants of “[]” imposed by incompleteness. Both psychology and physics are given by different modes of view on the same (arithmetical) reality.

Again, I could use any universal machinery; they all give the same 8 modes. 

Bruno

then consciousness will differentiate on those histories which seem to be fine tuned. Like you say, we have to eliminate the selector, except for consciousness. 

 Until Everett no one thought it necessary to suppose all the counterfactuals happened "somewhere else”.


Well, there was Borges of course, and the idea is present in the whole of neoplatonism, arguably. Then, for anyone who believes that 777 is odd independently of him/herself, all computations are run independently of anyone.

That's a non-sequitur.  One can try dividing 777 by 2.  One can't verify that all computations run independently, or dependently, of anyone.

Brent

--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to everything-li...@googlegroups.com.

Jason Resch

Oct 23, 2020, 11:15:56 AM10/23/20
to Everything List
It comes out of QFT, perhaps our most strongly tested theory in science, or at least the one that offers the most accurate verified prediction in physics. It might well be wrong, but that would be more surprising to me than the idea of an anthropic selection process operating in a multiverse.

Jason

Jason Resch

Oct 23, 2020, 11:22:05 AM10/23/20
to Everything List
On Tue, Oct 20, 2020 at 4:46 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:



 Until Everett no one thought it necessary to suppose all the counterfactuals happened "somewhere else”.


Well, there was Borges of course, and the idea is present in the whole of neoplatonism, arguably. Then, for anyone who believes that 777 is odd independently of him/herself, all computations are run independently of anyone.

That's a non-sequitur.  One can try dividing 777 by 2.  One can't verify that all computations run independently, or dependently, of anyone.

If you accept the independent truth of the equation "Y = 2X+1" in the case of  "Y=777" and an integer X, then you should likewise also accept the existence of all computations,

How can you be so casual about leaping from "This statement is true."  to "The relation it expresses entails that the relata exist."  "True" and "exist" are even different words. 

We've argued this countless times before, so I don't want to repeat it again. The truth that 777 is odd implies the existence of an integer X such that 777 = 2X + 1 (namely X = 388). Truth has ontological implications and consequences when it relates to the existence or non-existence of other entities.

 
"Watson is the companion of Holmes" is true in many logics (just note that its negation is false) yet nobody thinks it makes Sherlock Holmes into a person who existed.

What reality are you applying the word "exists" within? You never specified it, which makes any answer regarding the existence or non-existence of Watson ambiguous.
 
In mathematics, "exists" means there is a value that satisfies (makes true) an expression.  It says nothing about whether you can kick it and whether it kicks back.


Kicking back occurs from the perspective of entities who live within long computational histories, which occur in platonically existing computational threads, which exist if you assume arithmetical truth.

Jason

 
Brent

as a consequence of the equation defined here: ftp://ftp.math.ethz.ch/hg/EMIS/journals/AMI/2003/jones.pdf 

Jason

Brent Meeker

Oct 23, 2020, 5:20:10 PM10/23/20
to everyth...@googlegroups.com


On 10/23/2020 3:48 AM, Bruno Marchal wrote:


You need to take all the histories, which we know exists in arithmetic,

I don't know what "exists in arithmetic" has to do with existence.

We cannot prove the existence of any universal machinery without assuming at least one of them.

Which is the same as saying the "proof" means nothing since it is the same assumption.




Again, I could use any universal machinery; they all give the same 8 modes.

It's all very well to hypothesize a theory.  But then you must show that it agrees with experience and tells us something we didn't know.

Brent

Brent Meeker

Oct 23, 2020, 5:54:03 PM10/23/20
to everyth...@googlegroups.com
That "comes out of" is very misleading, since it's applying QFT to general relativity which is not even a quantum theory.  The first application of QFT to the problem gave the wrong answer by 120 orders of magnitude.  I don't know what prediction you're referring to, there have been several.  Can you cite the paper?

Brent

It might well be wrong, but that would be more surprising to me than the idea of an anthropic selection process operating in a multiverse.

Jason

Brent Meeker

Oct 23, 2020, 6:00:52 PM10/23/20
to everyth...@googlegroups.com
Exactly the problem.  If I say "In the world of Conan Doyle's novels" then it's true.  In what world is 777 odd?  In the world of arithmetic.  In this world...it depends.  In most interpretations it's true in this world.  But it doesn't follow that all the other, infinitely many, inferences true in arithmetic are likely true in this world.

Brent


 
In mathematics, "exists" means there is a value that satisfies (makes true) an expression.  It says nothing about whether you can kick it and whether it kicks back.


Kicking back occurs from the perspective of entities who live within long computational histories, which occur in platonically existing computational threads, which exist if you assume arithmetical truth.

But that's assuming the thing you're trying to argue, that the world is nothing but computations in arithmetic and is all such computations. 

Brent

Jason Resch

Oct 23, 2020, 6:52:50 PM10/23/20
to Everything List
On Fri, Oct 23, 2020 at 4:54 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:



That's a non-answer.  I was just using infinite (as physicists do) to mean bigger than anything we're thinking of.  Kolmogorov just shaped his definition to make the mathematics simpler.  There's nothing in Jason's analyses that defines the variables as finite.  Jason just helps himself to an intuition that a value between 7.5 and 7.7 is "fine-tuned".  He didn't first justify the finite interval. 

I admit as much in the article. For most parameters, we don't understand the range or probability distribution for the constants.
 
Then how can you assert there is fine tuning?  Does a value of 20±1 qualify?  Does it matter whether the possible range was (0,100) or (19,21)?

However, see my explanation for the cosmological constant, a value for which the theory can account for the expected range and probability distribution.

That's right, there is a theory that tells us something about a range and probability distribution.  But it's far from an accepted theory, and might well be wrong.

It comes out of QFT, perhaps our most strongly tested theory in science, or at least the one that offers the most accurate verified prediction in physics.

That "comes out of" is very misleading, since it's applying QFT to general relativity which is not even a quantum theory. 

But the quantum fields (vacuum) are known to gravitate.
 
The first application of QFT to the problem gave the wrong answer by 120 orders of magnitude. 

Wrong is the wrong word here. The answer was unexpectedly small by that many orders of magnitude, but it is still within the range of possibility.
 
I don't know what prediction you're referring to, there have been several.  Can you cite the paper?

The prediction that the vacuum state contains energy, and that this energy under QFT is the sum of each of the field energies, some of which may be positive or negative, and when they are summed, they come out to be 120 orders of magnitude smaller than the Planck energy (which is the expected energy level of each field). I don't know of a reference to the paper, but I've read it was first calculated by Feynman and Wheeler. I also found this derivation: https://i.imgur.com/m0QhWOv.png
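For what it's worth, the ~120-order figure can be checked numerically from standard constants: convert the observed Λ ≈ 10^-52 m^-2 (quoted earlier in the thread) into a vacuum mass density and compare it with the Planck density. Treating the Planck scale as the “expected” value is the usual heuristic, not a derivation:

```python
import math

# Rough check of the cosmological-constant discrepancy.
# Constants in SI units (CODATA values, rounded).
G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m/s
hbar = 1.055e-34   # J s
Lam = 1.1e-52      # m^-2, observed cosmological constant

rho_vac = Lam * c**2 / (8 * math.pi * G)  # observed vacuum mass density, kg/m^3
rho_planck = c**5 / (hbar * G**2)         # Planck mass density, kg/m^3

orders = math.log10(rho_vac / rho_planck)
print(rho_vac)  # ~6e-27 kg/m^3
print(orders)   # ≈ -123: the famous ~120-order-of-magnitude gap
```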


This paper gives three citations [6-8] to accompany this statement, which might also be useful to you:

"Nature contains two relative mass scales: the vacuum energy density V ∼ (10^−30 M_Pl)^4 and the weak scale v^2 ∼ (10^−17 M_Pl)^2, where v is the Higgs vacuum expectation value. Their smallness with respect to the Planck scale M_Pl = 1.2 × 10^19 GeV is not understood and is considered as ‘unnatural’ in relativistic quantum field theory, because it seems to require precise cancellations among much larger contributions. If these cancellations happen for no fundamental reason, they are ‘unlikely’, in the sense that summing random order one numbers gives 10^−120 with a ‘probability’ of about 10^−120."

"No natural theoretical alternatives are known (for example, supergravity does not select V = 0 as a special point [1]), and anthropic selection of the cosmological constant seems possible in theories with some tens of scalars such that their potential has more than 10^120 different vacua, which get ‘populated’ forming a ‘multiverse’ through eternal inflation. String theory could realise this scenario [6–8]."
 
[6] R. Bousso, J. Polchinski, “Quantization of four form fluxes and dynamical neutralization of the cosmological constant”, JHEP 0006 (2000) 006 [arXiv:hep-th/0004134].
[7] L. Susskind, “The Anthropic landscape of string theory” [arXiv:hep-th/0302219].
[8] S. Kachru, R. Kallosh, A.D. Linde, S.P. Trivedi, “De Sitter vacua in string theory”, Phys. Rev. D68 (2003) 046005 [arXiv:hep-th/0301240].
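The quoted “probability of about 10^-120” rests on the claim that for a sum of random order-one terms, P(|sum| < ε) scales linearly with ε. That scaling can be verified by Monte Carlo at reachable thresholds (10^-120 itself is of course out of reach); the number of terms and the thresholds below are arbitrary choices for illustration:

```python
import random

# P(|sum of n order-one random terms| < eps) scales like eps for small eps,
# which is why a cancellation down to 1e-120 is assigned "probability" ~1e-120.
# We check the linear scaling at achievable thresholds (0.1 vs 0.01).

random.seed(1)
n_terms, trials = 10, 100_000

def hit_rate(eps):
    hits = 0
    for _ in range(trials):
        s = sum(random.uniform(-1, 1) for _ in range(n_terms))
        if abs(s) < eps:
            hits += 1
    return hits / trials

r1 = hit_rate(0.1)
r2 = hit_rate(0.01)
print(r1, r2)  # the two rates should differ by roughly a factor of 10
```

This only shows the scaling; whether the contributions really behave like independent order-one random numbers is exactly what Brent questions below, and the sketch takes no position on that.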


Jason


Brent

It might well be wrong, but that would be more surprising to me than the idea of an anthropic selection process operating in a multiverse.

Jason

Brent Meeker

Oct 23, 2020, 10:24:39 PM10/23/20
to everyth...@googlegroups.com


On 10/23/2020 3:52 PM, Jason Resch wrote:



But the quantum fields (vacuum) are known to gravitate.

"Known" how?  You can write down a calculation...which gives infinity as an answer.  Having arrived at an obviously wrong answer, you can introduce a cutoff that you guess at based on some dimensional analysis and get an answer that's wrong by 120 orders of magnitude, instead of infinitely.  And you then say this shows we know something like this must be right???


 
The first application of QFT to the problem gave the wrong answer by 120 orders of magnitude. 

Wrong is the wrong word here. The answer was unexpectedly small by that many orders of magnitude, but it is still within the range of possibility.

Which is exactly what's wrong with the idea of "fine-tuning".  The "range of possibility" is just pulled out of thin air.  Suppose life were possible for 1e-60 eV/m^3 to 1e-20 eV/m^3.  Would that be "fine-tuning" because (1e-20 - 1e-60) << 1, or because 30 orders of magnitude is small compared to infinity?


 
I don't know what prediction you're referring to, there have been several.  Can you cite the paper?

The prediction that the vacuum state contains energy, and that this energy under QFT is the sum of each of the field energies, some of which may be positive or negative, and when they are summed, they come out to be 120 orders of magnitude smaller than the Planck energy (which is the expected energy level of each field). I don't know of a reference to the paper, but I've read it was first calculated by Feynman and Wheeler. I also found this derivation: https://i.imgur.com/m0QhWOv.png
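For concreteness, here is the textbook version of that estimate. This is a sketch only: it assumes a single bosonic field, a sharp cutoff at the Planck scale, and ignores the negative fermionic contributions, so the exact exponent depends on conventions.

```python
import math

# Physical constants (SI units)
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s
G = 6.67430e-11          # Newton's constant, m^3 kg^-1 s^-2

# Cut the mode sum off at the Planck scale: k_max ~ 1 / l_Planck
l_planck = math.sqrt(hbar * G / c**3)   # ~1.6e-35 m
k_max = 1.0 / l_planck

# Zero-point energy density of one bosonic field, summing hbar*c*k/2
# over modes up to the cutoff:
#   rho = (hbar*c / (4*pi^2)) * Integral_0^{k_max} k^3 dk
#       = hbar * c * k_max^4 / (16 * pi^2)
rho_qft = hbar * c * k_max**4 / (16 * math.pi**2)   # J/m^3

# The observed dark-energy density is roughly 6e-10 J/m^3
rho_obs = 6e-10
orders = math.log10(rho_qft / rho_obs)

print(f"naive QFT estimate: {rho_qft:.2e} J/m^3")
print(f"gap vs. observation: ~{orders:.0f} orders of magnitude")
```

The discrepancy comes out near 120 orders of magnitude, which is the number being argued about in this thread.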


This paper gives three citations [6-8] to accompany this statement, which might also be useful to you:

"Nature contains two relative mass scales: the vacuum energy density V ∼ (10^−30 M_Pl)^4 and the weak scale v^2 ∼ (10^−17 M_Pl)^2, where v is the Higgs vacuum expectation value. Their smallness with respect to the Planck scale M_Pl = 1.2 × 10^19 GeV is not understood and is considered as ‘unnatural’ in relativistic quantum field theory, because it seems to require precise cancellations among much larger contributions. If these cancellations happen for no fundamental reason, they are ‘unlikely’, in the sense that summing random order-one numbers gives 10^−120 with a ‘probability’ of about 10^−120."
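The "probability of about 10^−120" claim can't be simulated at that scale, but the scaling behind it can: for a sum of random order-one numbers, the chance of landing within ε of zero shrinks in proportion to ε. A rough Monte Carlo sketch (the 10-term sum and the uniform distribution are illustrative assumptions, not anything from the paper):

```python
import random

random.seed(1)

def cancellation_probs(eps_list, n_terms=10, trials=200_000):
    """Estimate P(|sum of n_terms random order-one numbers| < eps)."""
    sums = [sum(random.uniform(-1.0, 1.0) for _ in range(n_terms))
            for _ in range(trials)]
    return {eps: sum(abs(s) < eps for s in sums) / trials
            for eps in eps_list}

probs = cancellation_probs([0.1, 0.01, 0.001])
for eps, p in probs.items():
    print(f"eps = {eps}: P ~ {p:.2e}")
# Each factor of 10 in eps costs roughly a factor of 10 in probability;
# extrapolated to eps = 1e-120 that gives a probability of ~1e-120.
```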

But who says the random numbers are order 1?

It's all just fantasizing.

Brent


"No natural theoretical alternatives are known (for example, supergravity does not select V = 0 as a special point [1]), and anthropic selection of the cosmological constant seems possible in theories with some tens of scalars such that their potential has more than 10^120 different vacua, which get ‘populated’ forming a ‘multiverse’ through eternal inflation. String theory could realise this scenario [6–8]."
 
[6] R. Bousso, J. Polchinski, “Quantization of four form fluxes and dynamical neutralization of the cosmological constant”, JHEP 0006 (2000) 006 [arXiv:hep-th/0004134].
[7] L. Susskind, “The Anthropic landscape of string theory” [arXiv:hep-th/0302219].
[8] S. Kachru, R. Kallosh, A.D. Linde, S.P. Trivedi, “De Sitter vacua in string theory”, Phys. Rev. D68 (2003) 046005 [arXiv:hep-th/0301240].


Jason


Brent

It might well be wrong, but that would be more surprising to me than the idea of an anthropic selection process operating in a multiverse.

Jason
--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to everything-li...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/everything-list/CA%2BBCJUiJ5c3kakLGC73zuRGX-6gafk0W7NGhuJJn9VOQEtzriA%40mail.gmail.com.


spudb...@aol.com

unread,
Oct 23, 2020, 11:21:40 PM10/23/20
to jason...@gmail.com, everyth...@googlegroups.com
The only issue I ever had (and I love good sci fi) is Everett's position (and Bryce DeWitt's, John Wheeler's) that universes spawn on the pop of a decision. Perhaps, I was wondering, it was something more cosmological, such as a supernova or a black hole, being both the initiator and modifier of such events? A conscious observer (defined as sensorially self-aware) could be the only thing that matters, so a machine intel, which has part of its network imitating the spindle cells found in the brains of higher mammals, may suffice? Despite this, biologists have for some years been surprised at the intelligence of birds such as crows. In this case a crow might suffice for cosmos splitting, and we could thus conclude that, indeed, bird is the word.


Lawrence Crowell

unread,
Oct 24, 2020, 6:52:22 AM10/24/20
to Everything List
The Bousso-Polchinski paper is the main development in computing YM gauge fluxes through branes wrapped on Calabi-Yau compactified spaces. It has been a long time since I have read this paper, to be honest. It looks, though, at cases of generic topologies of CY manifolds and these gauge fluxes. It makes for curious reading, for a lot of it involves Gauss-law type calculations learned as an undergraduate in EM classes.

The problem with all of this string- and M-theoretic work is that it really works on AdS vacua, and not well on dS (de Sitter) vacua. AdS has a negative vacuum energy or cosmological constant, which means it can accommodate the type-0 bosonic string's negative vacuum energy. In dS-based superstring theory, even if one sets the theory up with positive vacuum energy, there is then an obstruction to STU transformations between string types, or at least to the bosonic string. This is one aspect of the Vafa swampland hypothesis, where the positive energy of the dS vacuum is a sort of breaking mechanism on string/M-theory.

I have worked on looking at the problem with the quantum vacuum in black hole coalescence. The quantum hair on the black holes is in a Casimir configuration that can excite the production of particle fields and gravitons. The spacetime is AdS-like, where string/M-theoretic structure might re-emerge. My paper  https://www.mdpi.com/657568 and https://arxiv.org/abs/2007.01106 then suggests that in a deformed AdS_4 spacetime or quantum vacuum, equivalently to CFT_3, the N = 2 SUSY and supergravity may hold. Signatures of this may exist in BMS symmetries or charges induced on a gravitational wave interferometric detector. 

How this fits into the STU structure of AdS_5 and the “broken world” of dS swampland is a region to study. The dS_4 spacetime of the observable universe may be a holographic screen on AdS_5, either the conformal boundary or a junction condition between causal wedges. It is tempting to think this is CFT_4 in the Maldacena AdS/CFT correspondence, but it is hard to know how to build a graviton from this. The Weinberg-Witten theorem puts some no-go limits on building gravitons from lower spin fields. 

LC

Jason Resch

unread,
Oct 24, 2020, 7:49:08 AM10/24/20
to Everything List
You implicitly assume a certain reality too. We all must.

What's the problem so long as the assumption is stated up front?

If we can show that an assumed theory is compatible with our observations, and if we can achieve that for a theory with fewer total assumptions, that is progress.

Jason

Jason Resch

unread,
Oct 24, 2020, 8:30:07 AM10/24/20
to Everything List
On Fri, Oct 23, 2020 at 9:24 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:


On 10/23/2020 3:52 PM, Jason Resch wrote:


On Fri, Oct 23, 2020 at 4:54 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:


On 10/23/2020 8:15 AM, Jason Resch wrote:


On Tue, Oct 20, 2020 at 4:37 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:


On 10/20/2020 1:20 PM, Jason Resch wrote:


On Tue, Oct 20, 2020 at 1:23 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:


On 10/20/2020 5:39 AM, Bruno Marchal wrote:

On 15 Oct 2020, at 20:56, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:

You should have read Vic Stenger's "The Fallacy of Fine Tuning".  Vic points out how many examples of  fine tuning are mis-conceived...including Hoyle's prediction of an excited state of carbon.  Vic also points out the fallacy of just considering one parameter when the parameter space is high dimensional.

But my general criticism of fine-tuning is two-fold.  First, the concept is not well defined.  There is no a priori probability distribution over possible values.  If the possible values are infinite, then any realized value is improbable.


I don’t think so. That is why Kolmogorov defines a measure space by forbidding infinite intersections of events. In the finite case the space of events is the complete Boolean structure coming from the subsets of the set of possible results. In the infinite domain, the measure space is defined by a strict subset. Perhaps I am missing something, but Kolmogorov's axiomatization was invented to solve that “infinite number of values” problem.

That's a non-answer.  I was just using infinite (as physicists do) to mean bigger than anything we're thinking of.  Kolmogorov just shaped his definition to make the mathematics simpler.  There's nothing in Jason's analyses that defines the variables as finite.  Jason just helps himself to an intuition that a value between 7.5 and 7.7 is "fine-tuned".  He didn't first justify the finite interval.

I admit as much in the article. For most parameters, we don't understand the range or probability distribution for the constants.
 
Then how can you assert there is fine tuning?  Does a value of 20±1 qualify?  Does it matter whether the possible range was (0,100) or (19,21)?

However, see my explanation for the cosmological constant, a value for which the theory can account for the expected range and probability distribution.

That's right, there is a theory that tells us something about a range and probability distribution.  But it's far from an accepted theory, and might well be wrong.

It comes out of QFT, perhaps our most strongly tested theory in science, or at least the one that offers the most accurately verified prediction in physics.

That "comes out of" is very misleading, since it's applying QFT to general relativity which is not even a quantum theory. 

But the quantum fields (vacuum) are known to gravitate.

"Known" how?  You can write down a calculation...which gives infinity as an answer. 

The Lamb shift, for instance, is an artifact of vacuum energy. The Lamb shift changes the energy of the electron, which alters the mass of atoms, thereby affecting gravity.
 
  Having arrived at an obviously wrong answer, you can introduce a cutoff that you guess at based on some dimensional analysis

There is a notion of absolute hot, which implies that momentum cannot grow unboundedly. 
 
and get an answer that's wrong by 120 orders of magnitude,

It's not wrong by 120 orders of magnitude; it's unexpectedly small by 120 orders of magnitude. Say you had a wheel marked with every number from 0 to 2π on a continuous range, and upon spinning it you get 10^-120. This result is not "wrong" or "impossible"; it's as likely as any other result. But a priori, you would not expect to get such a small number.
 
instead of infinitely.  And you then say this shows we know something like this must be right???


I never said it must be right. Only that no known alternative explanation exists for the cosmological constant problem, and that according to QFT, the vacuum energy shouldn't be zero, and is known to not be zero (e.g. the Casimir effect, Lamb shift, and accelerated expansion of the universe all count as evidence that it is non-zero).
 
 
The first application of QFT to the problem gave the wrong answer by 120 orders of magnitude. 

Wrong is the wrong word here. The answer was unexpectedly small by that many orders of magnitude, but it is still within the range of possibility.

Which is exactly what's wrong with the idea of "fine-tuning".  The "range of possibility" is just pulled out of thin air.  Suppose life were possible for 1e-60 eV/m^3 to 1e-20 eV/m^3.  Would that be "fine-tuning" because (1e-20 - 1e-60) << 1, or because 30 orders of magnitude is small compared to infinity?

It depends on the probability distribution of the variable.

I think a more objective way to measure fine-tuning is to weight universes and physical laws by their Kolmogorov complexity -- what's the shortest possible description that produces them?

The longer the length of the description, the more "tuning" was required to get there, and the rarer such universes are. In our case, Lambda would add ~120 digits to the cost of our universe in terms of additional information required to describe it.

If the multiverse is real, we should expect that the Kolmogorov complexity of our universe is not much greater than the minimum for universes that produce conscious life. (Perhaps further weighted in terms of the number of observers each such universe produces).
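A back-of-envelope version of the "additional information" point, assuming the only cost of tuning a constant to one part in 10^d is writing down d extra decimal digits of it:

```python
import math

def tuning_bits(decimal_digits):
    """Bits needed to pin a constant down to one part in 10^d."""
    return decimal_digits * math.log2(10)

# Lambda's apparent tuning to one part in 10^120 adds roughly
# 120 * log2(10) ~ 399 bits to the universe's description length.
print(f"{tuning_bits(120):.0f} bits")
```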

 

 
I don't know what prediction you're referring to, there have been several.  Can you cite the paper?

The prediction that the vacuum state contains energy, and that this energy under QFT is the sum of each of the field energies, some of which may be positive or negative, and when they are summed, they come out to be 120 orders of magnitude smaller than the Planck energy (which is the expected energy level of each field). I don't know of a reference to the paper, but I've read it was first calculated by Feynman and Wheeler. I also found this derivation: https://i.imgur.com/m0QhWOv.png


This paper gives three citations [6-8] to accompany this statement, which might also be useful to you:

"Nature contains two relative mass scales: the vacuum energy density V ∼ (10^−30 M_Pl)^4 and the weak scale v^2 ∼ (10^−17 M_Pl)^2, where v is the Higgs vacuum expectation value. Their smallness with respect to the Planck scale M_Pl = 1.2 × 10^19 GeV is not understood and is considered as ‘unnatural’ in relativistic quantum field theory, because it seems to require precise cancellations among much larger contributions. If these cancellations happen for no fundamental reason, they are ‘unlikely’, in the sense that summing random order-one numbers gives 10^−120 with a ‘probability’ of about 10^−120."

But who says the random numbers are order 1?

It's all just fantasizing.

It's using the Planck scale as the upper bound.

Jason

Jason Resch

unread,
Oct 24, 2020, 8:35:58 AM10/24/20
to Everything List
On Fri, Oct 23, 2020 at 10:21 PM <spudb...@aol.com> wrote:
The only issue I ever had (and I love good sci fi) is Everett's position (and Bryce DeWitt's, John Wheeler) that universes spawn on the pop of a decision.

I prefer the view that they're all already there. We just differentiate ourselves when our brains take in new information from the environment (which is the infinite set of all the possible states that all exist in the wave function).

This is expressed well in this tech talk: https://www.youtube.com/watch?v=dEaecUuEqfc

He calls it the zero universe interpretation which is similar to Zeh's Many-Minds and Bruno's many-dreams/dreamers (Everett in arithmetic).

Jason

 
Perhaps, I was wondering, it was something more cosmological, such as a supernova or a black hole, being both the initiator and modifier of such events? A conscious observer (defined as sensorially self-aware) could be the only thing that matters, so a machine intel, which has part of its network imitating the spindle cells found in the brains of higher mammals, may suffice? Despite this, biologists have for some years been surprised at the intelligence of birds such as crows. In this case a crow might suffice for cosmos splitting, and we could thus conclude that, indeed, bird is the word.


LOL

Brent Meeker

unread,
Oct 24, 2020, 4:16:31 PM10/24/20
to everyth...@googlegroups.com
There's also a sense in which the temperature scale wraps around to negative values.  What does this have to do with anything?


 
and get an answer that's wrong by 120 orders of magnitude,

It's not wrong by 120 orders of magnitude,

The calculation is wrong because it purports to compute the vacuum energy density.


it's unexpectedly small by 120 orders of magnitude.

You mean the measured value is small...but not unexpectedly.  Most people expected it to be zero.


Say you had a wheel marked with every number from 0 to 2π on a continuous range, and upon spinning it you get 10^-120. This result is not "wrong" or "impossible"; it's as likely as any other result. But a priori, you would not expect to get such a small number.

A priori you wouldn't expect to get 1.0 either.  "Such a small number" just reflects our convenient naming conventions.  Notice that if you had labelled your wheel 0 to 360 degrees, then "You wouldn't expect to get such a small number as 1.0".
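The relabeling point can be made quantitative: the probability assigned to a given arc of the wheel doesn't depend on how the dial is labeled, so "getting a small number" is a statement about the labels, not the physics. A minimal sketch:

```python
import math

def p_below(x, full_scale):
    """P(a uniform spin lands below label x) on a dial labeled 0..full_scale."""
    return x / full_scale

# The same physical arc under two labelings of the same wheel:
p_radians = p_below(1e-120, 2 * math.pi)            # dial labeled 0..2*pi
p_degrees = p_below(1e-120 * 180 / math.pi, 360.0)  # dial labeled 0..360

# The probability is labeling-invariant; only the label "looks" small.
print(math.isclose(p_radians, p_degrees, rel_tol=1e-9))
```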


 
instead of infinitely.  And you then say this shows we know something like this must be right???


I never said it must be right. Only that no known alternative explanation exists for the cosmological constant problem, and that according to QFT, the vacuum energy shouldn't be zero, and is known to not be zero (e.g. the Casimir effect, Lamb shift, and accelerated expansion of the universe all count as evidence that it is non-zero).
 
 
The first application of QFT to the problem gave the wrong answer by 120 orders of magnitude. 

Wrong is the wrong word here. The answer was unexpectedly small by that many orders of magnitude, but it is still within the range of possibility.

Which is exactly what's wrong with the idea of "fine-tuning".  The "range of possibility" is just pulled out of thin air.  Suppose life were possible for 1e-60 eV/m^3 to 1e-20 eV/m^3.  Would that be "fine-tuning" because (1e-20 - 1e-60) << 1, or because 30 orders of magnitude is small compared to infinity?

It depends on the probability distribution of the variable.

I think a more objective way to measure fine-tuning is to weight universes and physical laws by their Kolmogorov complexity -- what's the shortest possible description that produces them?

Any finite law can be described in one word, like "Newton's".  Kolmogorov complexity measure only makes sense for infinite strings.



The longer the length of the description, the more "tuning" was required to get there, and the rarer such universes are.

Which is essentially assuming what you're trying to argue, i.e. that there is an infinite ensemble of "everythingism" and "fine-tuning" that is evidence for it.  The trouble is you keep needing to slip in assumptions equivalent to your conclusions.


In our case, Lambda would add ~120 digits to the cost of our universe in terms of additional information required to describe it.

If the multiverse is real, we should expect that the Kolmogorov complexity of our universe is not much greater than the minimum for universes that produce conscious life. (Perhaps further weighted in terms of the number of observers each such universe produces).

 

 
I don't know what prediction you're referring to, there have been several.  Can you cite the paper?

The prediction that the vacuum state contains energy, and that this energy under QFT is the sum of each of the field energies, some of which may be positive or negative, and when they are summed, they come out to be 120 orders of magnitude smaller than the Planck energy (which is the expected energy level of each field). I don't know of a reference to the paper, but I've read it was first calculated by Feynman and Wheeler. I also found this derivation: https://i.imgur.com/m0QhWOv.png


This paper gives three citations [6-8] to accompany this statement, which might also be useful to you:

"Nature contains two relative mass scales: the vacuum energy density V ∼ (10^−30 M_Pl)^4 and the weak scale v^2 ∼ (10^−17 M_Pl)^2, where v is the Higgs vacuum expectation value. Their smallness with respect to the Planck scale M_Pl = 1.2 × 10^19 GeV is not understood and is considered as ‘unnatural’ in relativistic quantum field theory, because it seems to require precise cancellations among much larger contributions. If these cancellations happen for no fundamental reason, they are ‘unlikely’, in the sense that summing random order-one numbers gives 10^−120 with a ‘probability’ of about 10^−120."

But who says the random numbers are order 1?

It's all just fantasizing.

It's using the Planck scale as the upper bound.

So what?  That's assuming the Planck scale means something, but it's already rejected as 'unnatural'.  You can't have it both ways.

Brent

Lawrence Crowell

unread,
Oct 24, 2020, 6:02:02 PM10/24/20
to Everything List
The Planck length is the scale at which the Compton wavelength of a particle equals the circumference of a black hole. It is not hard to calculate. This is then the smallest scale at which information can be accessed; it is the smallest region in which a qubit can be isolated. With the accelerated expansion of the universe there are vacuum modes passing across the cosmic horizon. At the same time, trans-Planckian modes are stretched across this scale. It is also nature's way of providing a natural renormalization cutoff scale.
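That calculation, up to order-one factors that depend on whether one uses the reduced Compton wavelength and the radius versus the circumference, goes as follows:

```python
import math

hbar = 1.054571817e-34   # J*s
c = 2.99792458e8         # m/s
G = 6.67430e-11          # m^3 kg^-1 s^-2

# Set the reduced Compton wavelength hbar/(m*c) equal to the
# Schwarzschild radius 2*G*m/c^2 and solve for the mass m:
m = math.sqrt(hbar * c / (2 * G))      # ~ Planck mass, up to O(1)
r = 2 * G * m / c**2                   # the corresponding length scale

l_planck = math.sqrt(hbar * G / c**3)  # standard Planck length, ~1.6e-35 m
print(f"m = {m:.2e} kg, r = {r:.2e} m, l_Planck = {l_planck:.2e} m")
```

The length scale r comes out within an order-one factor of the standard Planck length, which is the point of the estimate.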

LC

Brent Meeker

unread,
Oct 24, 2020, 6:22:32 PM10/24/20
to everyth...@googlegroups.com
But the context was calculation of the vacuum energy density.  Absent even a theory of quantum gravity I see no reason to take seriously the idea that the vacuum energy density consists of the ground state energy of the various quantum fields, much less that there is some "fine tuned" cancelation of the fermion and boson components.

Brent

Lawrence Crowell

unread,
Oct 24, 2020, 6:56:05 PM10/24/20
to Everything List
It is something to take seriously. We can consider it important in an interacting field theoretic sense, where disconnected virtual fields are removed by normal ordering. However, gravitation does rear its head and even the vacuum disconnected from other QFTs has gravitational content. 

LC 

Brent Meeker

unread,
Oct 24, 2020, 7:20:45 PM10/24/20
to everyth...@googlegroups.com
I'm not saying we shouldn't consider the vacuum energy density of the fields, but the idea that there is an ensemble of universes with densities such that we find ourselves in the one where they almost, but not quite, cancel out seems no better than the Walter Cronkite explanation, "And that's the way it is."  For an ensemble theory to have any weight there would need to be a theory that provided some probability distribution for the various fields.

Brent

spudb...@aol.com

unread,
Oct 24, 2020, 7:55:15 PM10/24/20
to jason...@gmail.com, everyth...@googlegroups.com
Roger that, Jason. Dieter Zeh (Dr. Zaius from POTA) is kind of a light from the past, and Bruno's dreamers may indeed be true, and seem to be trivially true, along with Hawking's many minds & Bostrom's many sims. Which was backed up recently by a paper displayed on Discover-

For me, whatever we can prove or merely surmise, we all seem to be, like Captain Ahab, strapped to the back of the white whale, Moby, and if reality is just the Standard Model or a revised flavor of the Standard Model, here we are-for a wee bit. For the psychological-physics McGuffin, as in your Afterlife essay, you could perhaps add the above paper to your sim section, because going at 10^16 bits per restored mind would, according to the paper, provide enough computing space/power for everyone who's come a cropper to be rebuilt. This was my thought on it, rather than the 'brain-gut in a jar' cartoon(s) that Bostrom and his friends claim could be now. Which begs the question: if this is just a sim of many sims, why not make me taller, or make me somebody important, like an actor?

Lawrence Crowell

unread,
Oct 25, 2020, 7:42:14 AM10/25/20
to Everything List
There are now thousands of physicists, ranging from research graduate students to tenured professors, who are poring over Feynman diagram calculations. They are pulling their hair out and are in a sort of mental agony. They are searching for the holy grail of finding how these diagrams, with Ward identities etc., cancel terms and result in a finite answer. With the multiverse, and computing generalized YM gauge 4-forms through D-branes, the problem is far more vast. If there is anything this is telling us, it is that this is not the right method.

If Sabine Hossenfelder has said one thing right, it is that real progress in physics comes when a new theoretical construct solves a conflict or paradox in our understanding. My sense with these problems is entirely the same. Computing lots of perturbation terms or series is not going to give you the final answer. There is some simple statement about the nature of things that will allow us to bypass all of that. That is, of course, if there is some resolution to this problem at all. My thought on this is that there is some dualism between local YM fields and nonlocal entanglements, where they are formally the same. In a nonlocal setting there is a topological order, while in the more local setting there are (super)symmetry-protected topological fields.

LC 