
There is nothing particularly wrong with the idea of fine tuning, but it does not logically imply a fine tuner. If there is a fine tuner, then it is reasonable to say there is fine tuning. The converse, however, does not hold: fine tuning does not logically imply a fine tuner. Fine tuning is therefore a necessary condition for a fine tuner, but not a sufficient one.
I started reading this, but it is clearly not something I am going to finish over early-morning coffee. So far, though, the article covers in layman's terms material I am well acquainted with. The multiverse is often cited as a way around this: a vast plurality of cosmologies is a way to argue how the particular observable cosmos came to be fine tuned. It is similar to the argument with planets; given a large number of them, it is not surprising that a few are such that life may emerge. Of course, I suspect that many of the cosmologies in this multiverse are not real. The putative cosmologies in the string landscape, based on D-brane theory with gauge fluxes through branes wrapped on Calabi-Yau spaces, have cosmological constants Λ much larger than that of the observable universe. The Hubble constant H = a′/a, with a the scale factor and a′ = da/dt, also equals H = √(Λc²/3); numerically H ≈ 72 km/s/Mpc from galaxy data and 68 km/s/Mpc from CMB data. This corresponds to a cosmological constant Λ ≃ 10^{-52} m^{-2}. Most putative cosmologies have much larger values, many orders of magnitude larger. Such a de Sitter or FLRW spacetime would expand so rapidly that nothing could form. In fact many have Λ ≃ 10^{66} m^{-2}, with the upper bound Λ ≃ 10^{70} m^{-2}. The difference between this and what we observe is the famous 122 order of magnitude issue.
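As a sanity check on those numbers, one can invert H = √(Λc²/3) to recover Λ from the Hubble constant. A minimal sketch, assuming H₀ ≈ 70 km/s/Mpc (a value between the two quoted above):

```python
# Sketch: recover Lambda ~ 1e-52 m^-2 from the Hubble constant,
# using H = sqrt(Lambda * c^2 / 3) as in the text (vacuum-dominated case).
# H0 = 70 km/s/Mpc is an assumed round number between the quoted 68 and 72.

c = 2.998e8            # speed of light, m/s
Mpc = 3.086e22         # metres per megaparsec
H0 = 70e3 / Mpc        # 70 km/s/Mpc converted to 1/s

Lam = 3 * H0**2 / c**2 # invert H = sqrt(Lambda * c^2 / 3)
print(f"Lambda ~ {Lam:.2e} m^-2")   # ~1.7e-52 m^-2
```

The result lands on the quoted order of magnitude, 10^{-52} m^{-2}.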
The observed cosmological constant is a manifestation of the quantum vacuum energy density, or in particular that part of the vacuum energy density that plays a role in gravitation. This vacuum energy ρ defines the cosmological constant Λ = 8πGρ/c^4, and for the observable universe this is quite small, far smaller than the 123 orders of magnitude larger figure a naïve summation of QFT modes would suggest. However, there is a difference between the high-energy vacuum, also called the false vacuum, and the low-energy physical vacuum. A quantum tunneling from the false to the physical vacuum results in a gap of mass-energy density in every volume of space, and this generates matter and radiation. The sort of skewed Ginzburg-Landau potential involved is seen in the figure below.
There is a linear term in the fields that skews this, and this I think is some manifestation of renormalization theory, where the large majority of these cosmologies are analogous to virtual particles that give a mass renormalization. This would, I think, sweep the vast majority of them out of ontological existence or classicality. I do not know whether this is complete, so that the multiverse reduces to a single universe, or whether it merely reduces the multiverse to a much smaller set.

It has to be noted that the tuning for flat, spherical or hyperbolic geometry or topology of a spatial surface is not that hard to understand. The Hamiltonian for the Friedmann-Lemaître-Robertson-Walker (FLRW) spacetime is

ℋ = ½(a′/a)² − 4πGρ/3c² + k/a²,

and the Hamiltonian constraint Nℋ = 0 in ADM general relativity forces this to vanish. The energy density is ρ = ρ_vac + ρ_energy for the vacuum and the mass-energy in the spacetime. The additional term k/a² gives flat, spherical and hyperbolic space for k = 0, k = 1 and k = −1. If k = 0 then the vacuum energy density is constant, which is in various ways more reasonable.

In this renormalization picture the observable universe may somehow have emerged. In ways not entirely clear, this may have selected the world we observe. So there are open questions. Maybe the conscious observers in the universe even play some Wheeler delayed-choice role, measuring the early universe so as to select for the observed universe.
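For the k = 0 case the constraint reduces to the flat Friedmann equation H² = 8πGρ/3 (with ρ as a mass density), which fixes the critical density. A quick numerical sketch, again assuming H₀ ≈ 70 km/s/Mpc:

```python
# Sketch: for k = 0 the Hamiltonian constraint above reduces to the flat
# Friedmann equation H^2 = 8*pi*G*rho/3, which fixes the critical
# (mass) density. H0 = 70 km/s/Mpc is an assumed illustrative value.
import math

G = 6.674e-11          # Newton's constant, m^3 kg^-1 s^-2
Mpc = 3.086e22         # metres per megaparsec
H0 = 70e3 / Mpc        # 70 km/s/Mpc in 1/s

rho_c = 3 * H0**2 / (8 * math.pi * G)
print(f"critical density ~ {rho_c:.1e} kg/m^3")  # ~9e-27 kg/m^3
```

That is roughly a few hydrogen atoms per cubic metre, the usual statement of how finely balanced the flat case is.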
LC

On Wednesday, October 14, 2020 at 9:38:40 PM UTC-5 Jason wrote:
I just finished an article on all the science behind fine-tuning, and how the evidence suggests an infinite, and possibly complete reality. I thought others on this list might appreciate it. I welcome any discussion, feedback, or corrections.
Jason
--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to everything-li...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/everything-list/c67a54a2-64bc-4818-b8d5-c9bcf361940en%40googlegroups.com.
This is the second time. I tend to work mostly offline. That way I do not have an open port, and in some ways this is a big part of my defense against malware and hacking: if I am not online I can't be attacked. However, writing in the group editor was a big mistake; I hit send and my message disappeared. So I am writing this one long piece in Word, again offline.
I read past the point of Hoyle's triple-alpha physics. Too bad he did not get the Nobel for that, even though he was wrong on steady-state theory. I have not gotten to the point about coincidence, providence and the multiverse.
My thinking is that what is real is a quantum mechanical issue. Reality is the postulate that a system has some existential content, prior to a measurement, that is related to the outcome of that measurement. The EPR argument and the Bell inequalities show you cannot have both locality and reality applied as postulates to a system. You can use one or the other, but not both. So what is real, certainly if we appeal to Bohr, is the classical world. The classical state of the universe is a set of quantum states that are stable against quantum noise and decoherence.
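The Bell-inequality point can be made concrete with the CHSH quantity. A minimal sketch, using the singlet-state correlation E(a,b) = −cos(a−b) and the standard optimal angle choices (both are textbook values, not anything specific to this thread):

```python
# Sketch of the Bell/CHSH point: singlet correlations E(a,b) = -cos(a-b)
# reach |S| = 2*sqrt(2), beyond the local-realist bound |S| <= 2, which
# is why locality and realism cannot both be postulated.
import math

E = lambda a, b: -math.cos(a - b)          # quantum singlet correlation
a, ap, b, bp = 0, math.pi / 2, math.pi / 4, 3 * math.pi / 4

S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)
print(f"|S| = {abs(S):.3f}  (local realism requires <= 2)")  # 2.828
```

Any local-realist assignment of pre-existing values keeps |S| ≤ 2, so the 2√2 value forces giving up one of the two postulates.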
The upper bound on the cosmological constant is Λ = 1/ℓ_p^2, for ℓ_p the Planck length of 10^{-35} m. Therefore the Planck value of the cosmological constant for a quantum cosmology is Λ = 10^{70} m^{-2}. This is evaluated from
〈0|H|0〉 = Σ_k ½ħω_k ≈ E_Planck (with a cutoff at the Planck energy)
and with this cutoff the corresponding cosmological constant is Λ ~ 1/ℓ_p^2 = 10^{70} m^{-2}, while what we observe is Λ = 10^{-52} m^{-2}. This huge disparity is the source of the conundrum. The Higgs field, which bears some relationship IMO to the quartic potential of inflationary cosmology, has mass M = 125 GeV, and in a condensate with the weak-interaction bosons it confers mass on them; the Yukawa Lagrangians give the fermions mass. This is very small, far smaller than the Planck energy. This, together with the wide gap in cosmological constants, enforces a classicality. The domain of quantum gravitation is so far removed from ordinary quantum physics that decoherent large masses obey classical physics. Classicality, in some way, is what is real.
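The size of that gap can be checked directly from the Planck length. A short sketch (ℓ_p is the CODATA value; the observed Λ is the round figure quoted above):

```python
# Sketch: the cosmological-constant gap. Planck-scale estimate
# Lambda_P = 1/l_p^2 versus the observed value ~1e-52 m^-2.
import math

l_p = 1.616e-35               # Planck length, m
Lam_planck = 1 / l_p**2       # ~4e69 m^-2, i.e. of order 1e70
Lam_obs = 1e-52               # observed value, m^-2

gap = math.log10(Lam_planck / Lam_obs)
print(f"discrepancy: ~10^{gap:.0f}")   # ~10^122
```

This is the 122-123 order of magnitude mismatch discussed earlier in the thread.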
In string/M-theory the cosmological constant emerges from Yang-Mills gauge fluxes through D-branes wrapped on compactified Calabi-Yau spaces. There are 10^{500} or more of these configurations, so this is a huge sample space. This is computed with the Hodge diamond of Eguchi-Hanson 3-forms, and is only really known for the static situation, which is still tough. Then there is the Vafa swampland, where it turns out strings and branes do not work in spacetimes with Λ > 0, and so things are broken here.
Finally, when it comes to observers: if k = 0 there are an infinite number of them, and it might then be that the Wheeler delayed choice acts on an ensemble. A delayed-choice measurement is one where the slit an electron passed through is determined by a measurement made after the wave has passed the slits. So an IGUS or ET in the universe may fix these values through their measurements.
You should have read Vic Stenger's "The Fallacy of Fine Tuning". Vic points out how many examples of fine tuning are mis-conceived...including Hoyle's prediction of an excited state of carbon. Vic also points out the fallacy of just considering one parameter when the parameter space is high dimensional.
But my general criticism of fine-tuning is two-fold. First, the concept is not well defined. There is no a priori probability distribution over possible values. If the possible values are infinite, then any realized value is improbable. Fine tuning is all in the intuition. Charts are drawn showing little "we are here" zones to prove the fine tuning. But the scales are sometimes linear, sometimes logarithmic. And why those parameters and not the square?...or the square root? Bayesian inference is not invariant under change of parameters.
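That non-invariance is easy to demonstrate numerically. A small sketch (the parameter x, the transformation y = x², and the event x < 0.5 are all arbitrary illustrative choices):

```python
# Sketch of the reparametrization point: a "uniform" prior is not
# invariant. Uniform on a parameter x in [0,1] versus uniform on
# y = x^2 assign different probabilities to the same physical event
# x < 0.5, even though both priors look equally "uninformative".
import random

random.seed(0)
N = 100_000
# flat prior on x itself
p_x = sum(random.random() < 0.5 for _ in range(N)) / N
# flat prior on y = x^2, i.e. x = sqrt(y)
p_y = sum(random.random() ** 0.5 < 0.5 for _ in range(N)) / N

print(f"P(x < 0.5): {p_x:.2f} under flat-x, {p_y:.2f} under flat-x^2")
```

The two "uniform" priors give roughly 0.50 versus 0.25 for the same event, so any claimed probability of a fine-tuned zone depends on an arbitrary parametrization choice.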
Second, calling it "fine-tuning" implies some kind of process of "tuning" or "selection". But that's gratuitous. Absent supernatural miracles, we must find ourselves in a universe in which we are nomologically possible. And that is true whether there is one universe or infinitely many. So it cannot be evidence one way or the other for the number of universes.
--
Brent
On 10/14/2020 7:38 PM, Jason Resch wrote:
I just finished an article on all the science behind fine-tuning, and how the evidence suggests an infinite, and possibly complete reality. I thought others on this list might appreciate it:
I welcome any discussion, feedback, or corrections.
Jason
On Thu, Oct 15, 2020 at 1:56 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
You should have read Vic Stenger's "The Fallacy of Fine Tuning". Vic points out how many examples of fine tuning are mis-conceived...including Hoyle's prediction of an excited state of carbon. Vic also points out the fallacy of just considering one parameter when the parameter space is high dimensional.
Hi Brent,
Thanks for the suggestions. I did read Barnes's critique of TFOFT ( https://arxiv.org/abs/1112.4647 ) and I just now read Stenger's reply: https://arxiv.org/pdf/1202.4359.pdf
I think they both make some valid points. It may be that many parameters we believe are fine tuned will turn out to have other explanations. But I also think that in domains where we do have understanding, such as computational models (e.g. algorithmic information theory: what is the shortest program that produces X), or the set of all possible cellular automata that consider only the states of adjacent cells, the number that are interesting (neither too simple nor too chaotic) is a small fraction of the total. So there is probably fine tuning, but it is, as you mention, extremely hard to quantify.
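The cellular-automaton version of this intuition can be sketched with a crude complexity proxy: compress an elementary CA's space-time history with zlib, so that too-ordered histories compress to almost nothing and random-looking ones compress least. The proxy and the example rules are my own illustrative choices, not a rigorous classification:

```python
# Crude sketch: zlib-compressed size of an elementary cellular automaton's
# space-time history as a complexity proxy. Ordered rules (like rule 0)
# compress far more than chaotic ones (like rule 30); "interesting" rules
# such as 110 sit in between. This is an illustration, not a rigorous
# measure of Wolfram's classes.
import zlib

def history(rule, width=101, steps=100):
    """Run one of the 256 elementary CA rules from a single live cell."""
    row = [0] * width
    row[width // 2] = 1
    rows = [bytes(row)]
    for _ in range(steps):
        row = [(rule >> (4 * row[(i - 1) % width] +
                         2 * row[i] +
                         row[(i + 1) % width])) & 1
               for i in range(width)]
        rows.append(bytes(row))
    return b"".join(rows)

def complexity(rule):
    h = history(rule)
    return len(zlib.compress(h)) / len(h)   # smaller = more ordered

# Rule 0 dies out (trivial), rule 30 looks random, rule 110 is "complex".
for r in (0, 30, 110):
    print(f"rule {r:3d}: compressed ratio {complexity(r):.3f}")
```

Sweeping all 256 rules and keeping only those whose ratio is neither near-minimal nor near-maximal does pick out a minority, in line with the claim, though the count depends on where one draws the (arbitrary) window.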
But my general criticism of fine-tuning is two-fold. First, the concept is not well defined. There is no a priori probability distribution over possible values. If the possible values are infinite, then any realized value is improbable. Fine tuning is all in the intuition. Charts are drawn showing little "we are here" zones to prove the fine tuning. But the scales are sometimes linear, sometimes logarithmic. And why those parameters and not the square?...or the square root? Bayesian inference is not invariant under change of parameters.
At least for the cosmological constant, there seems to be some understanding of its probability distribution, and it is relatively independent of the other parameters in that it is unrelated to nucleosynthesis, chemistry, etc. Therefore it is our best candidate to consider in isolation from the other parameters in the high-dimensional space.
Second, calling it "fine-tuning" implies some kind of process of "tuning" or "selection". But that's gratuitous. Absent supernatural miracles, we must find ourselves in a universe in which we are nomologically possible. And that is true whether there is one universe or infinitely many. So it cannot be evidence one way or the other for the number of universes.
Let's say we did have an understanding of the distribution of possible universes and the fraction of them that supported conscious life. If we discovered the fraction to be 1 in 1,000,000, would this not motivate a belief in there being more than one universe?
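As a toy version of this question, here is a naive likelihood comparison. The numbers p and N are made-up illustrative values, the datum is taken to be "at least one life-supporting universe exists", and observer-selection effects are deliberately ignored:

```python
# Naive sketch: how much would the existence of a life-supporting
# universe favour "N universes" over "one universe", if one in a million
# parameter choices supports life? p and N are illustrative assumptions;
# anthropic/observer-selection effects are ignored entirely.
p = 1e-6                          # fraction of universes supporting life
N = 10_000_000                    # hypothetical ensemble size

like_one  = p                     # P(datum | one universe)
like_many = 1 - (1 - p) ** N      # P(datum | N independent universes)
print(f"likelihood ratio ~ {like_many / like_one:.0f}")
```

The ratio is enormous on this naive reading; whether the datum should instead be conditioned on our own existence, which collapses the ratio to 1, is exactly the point at issue in this exchange.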
--
Jason
On 10/15/2020 12:46 PM, Jason Resch wrote:
Let's say we did have an understanding of the distribution of possible universes and the fraction of them that supported conscious life. If we discovered the fraction to be 1 in 1,000,000, would this not motivate a belief in there being more than one universe?
No, because it is equally evidence that one universe (this one) was realized out of the ensemble. You are relying on an intuition that it is easier to explain why all 1,000,000 exist than to explain why this one exists.
But that's an intuition about explaining things, not about any objective probability. Every day things happen that are more improbable than a million-to-one. Until Everett no one thought it necessary to suppose all the counterfactuals happened "somewhere else".
Brent
On 15 Oct 2020, at 04:38, Jason Resch <jason...@gmail.com> wrote:
I just finished an article on all the science behind fine-tuning, and how the evidence suggests an infinite, and possibly complete reality. I thought others on this list might appreciate it. I welcome any discussion, feedback, or corrections.

Fine-tuning + physical realism implies the many-things (many-worlds, many-histories, or many indexical relative-states). But fine tuning is a bit like superdeterminism: it is not so much a theory as something in need of explanation.
Fine-tuning + some hypothesis like the existence and unicity of a "universe" might be seen as evidence for a designer, but it makes the designer only consistent or possible, which is far less than necessary. Ontological commitments to God or to a Universe are, like superdeterminism, more tools to abandon the research than to dig into the problem and perhaps discover something new. Fine tuning is close to being tautological, and cannot be used in an explanation, even if true.
With Digital Mechanism there is no choice in the matter: adding anything ontological to any universal machinery brings a contradiction, and the "many-worlds" is reduced to the many-computations, which we know to be emulated in the arithmetical reality (indeed the common part of arithmetic already assumed by all scientists, consciously or not). Now, with digital mechanism, the fine tuning is organised by the modes of self-reference; all universal machines have the same modes, and their first-person perspective can be seen as a self-fine-tuner. (Even more so for []p & <>t, and still more so with the "& p".)

What do you mean by "infinite complete reality"?
Realities or models are complete by definition. Also, "reality" is always ambiguous, as we don't know if this refers to an arithmetical reality, a physical reality, a psychological reality, etc.
On 18 Oct 2020, at 19:08, Jason Resch <jason...@gmail.com> wrote:

On Fri, Oct 16, 2020 at 6:44 AM Bruno Marchal <mar...@ulb.ac.be> wrote:
But fine tuning is a bit like superdeterminism: it is not so much a theory as something in need of explanation.

I agree: fine-tuning is something that calls for explanation, an apparent mystery.

Fine tuning is close to being tautological, and cannot be used in an explanation, even if true.

I consider the appearance of fine-tuning as evidence for a reality that is much greater than the universe we can see.

What do you mean by "infinite complete reality"?

By complete I mean that anything that is possible to exist does. I see your point, though, that "possible" depends on the model one assumes, which I did leave open.

Realities or models are complete by definition. Also, "reality" is always ambiguous, as we don't know if this refers to an arithmetical reality, a physical reality, a psychological reality, etc.

That's a good point, and it does need clarification. I will be sure to make the assumed model clear when I write on the subject of "why does anything exist", for which arithmetic appears to be the simplest model compatible with our current observations.

Jason
On 15 Oct 2020, at 20:56, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
You should have read Vic Stenger's "The Fallacy of Fine Tuning". Vic points out how many examples of fine tuning are mis-conceived...including Hoyle's prediction of an excited state of carbon. Vic also points out the fallacy of just considering one parameter when the parameter space is high dimensional.
But my general criticism of fine-tuning is two-fold. First, the concept is not well defined. There is no a priori probability distribution over possible values. If the possible values are infinite, then any realized value is improbable.
I don’t think so. That is why Kolmogorov defines a measure space by forbidding infinite intersections of events. In the finite case the space of events is the complete boolean structure coming from the subsets of the set of possible results. In the infinite domain, the measure space is defined by a strict subset. Perhaps I am missing something, but the axiomatic of Kolmogorov was invented to solve that “infinite number of values” problem.
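A minimal sketch of the measure-theoretic point (using a standard normal distribution purely for illustration): in a continuous probability space every exact value has probability zero, while intervals of values keep positive measure, which is precisely how Kolmogorov's axiomatics copes with an infinite set of possible values.

```python
import math

def normal_cdf(x: float) -> float:
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# The probability mass of any single exact value is zero...
p_point = normal_cdf(1.0) - normal_cdf(1.0)

# ...but an interval around that value keeps positive measure,
# so probabilities are well defined over events (sets), not points.
p_interval = normal_cdf(1.1) - normal_cdf(0.9)

print(p_point)     # 0.0
print(p_interval)  # about 0.048
```

So "any realized value is improbable" is true of points but harmless: the axioms only ever assign probabilities to measurable sets of values.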
But I do agree that fine-tuning is not always well defined and sometimes misused. Yet I agree that the question is whether there is a fine tuner (but who is it, and how does it make the selection?). Even if real, a fine tuner explains nothing without some explanation of where the fine tuner comes from. In a multiverse or multi-computations (i.e. the sigma_1 arithmetic) consciousness is the fine tuner, and that one is explained already by the (Löbian) universal machine.
Fine tuning is all in the intuition. Charts are drawn showing little "we are here" zones to prove the fine tuning. But the scales are sometimes linear, sometimes logarithmic. And why those parameters and not the square?...or the square root? Bayesian inference is not invariant under change of parameters.
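Brent's reparameterization point can be checked numerically. The sketch below uses made-up numbers (a hypothetical parameter allowed to range over 0.1–1000, and a hypothetical "we are here" zone of 7.5–7.7): a prior uniform on a linear scale and a prior uniform on a log scale assign quite different probabilities to the same zone.

```python
import math
import random

random.seed(1)
N = 200_000

LO, HI = 0.1, 1000.0   # hypothetical allowed range for a parameter
ZONE = (7.5, 7.7)      # hypothetical "we are here" zone

# Prior A: uniform on the linear scale.
linear = (random.uniform(LO, HI) for _ in range(N))
# Prior B: uniform on the logarithmic scale, over the same range.
loggy = (math.exp(random.uniform(math.log(LO), math.log(HI))) for _ in range(N))

def zone_fraction(samples):
    """Fraction of samples falling inside the zone."""
    hits = total = 0
    for x in samples:
        total += 1
        hits += ZONE[0] <= x <= ZONE[1]
    return hits / total

p_lin = zone_fraction(linear)   # analytically (7.7-7.5)/(1000-0.1), about 2.0e-4
p_log = zone_fraction(loggy)    # analytically ln(7.7/7.5)/ln(1000/0.1), about 2.9e-3
```

The same interval looks roughly fourteen times more "finely tuned" on one scale than on the other, which is the sense in which Bayesian inference is not invariant under a change of parameters.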
That depends on your OMEGA in the probability space, and the measure you put on the set of events.
Second, calling it "fine-tuning" implies some kind of process of "tuning" or "selection". But that's gratuitous.
Yes. Ad Hoc, and it hides the problem by a bigger problem, instead of solving it.
Absent supernatural miracles, we must find ourselves in a universe in which we are nomologically possible.
That will be the relative histories with measure near one. Sort of history-neighbourhoods.
And that is true whether there is one universe or infinitely many.
… or none.
So it cannot be evidence one way or the other for the number of universes.
To count the universes, we should be able to be clearer on what such term means.
Bruno
Brent
On 10/14/2020 7:38 PM, Jason Resch wrote:
I just finished an article on all the science behind fine-tuning, and how the evidence suggests an infinite, and possibly complete reality. I thought others on this list might appreciate it:
I welcome any discussion, feedback, or corrections.
Jason
On 15 Oct 2020, at 22:53, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 10/15/2020 12:46 PM, Jason Resch wrote:
On Thu, Oct 15, 2020 at 1:56 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
You should have read Vic Stenger's "The Fallacy of Fine Tuning". Vic points out how many examples of fine tuning are mis-conceived...including Hoyle's prediction of an excited state of carbon. Vic also points out the fallacy of just considering one parameter when the parameter space is high dimensional.
Hi Brent,
Thanks for the suggestions. I did read Barnes's critique of TFOFT ( https://arxiv.org/abs/1112.4647 ) and I just now read Stenger's reply: https://arxiv.org/pdf/1202.4359.pdf
I think they both make some valid points. It may be that many parameters we believe are fine tuned will turn out to have other explanations. But I also think in domains where we do have understandings, such as in computational models (such as algorithmic information theory: what is the shortest program that produces X), or in the set of all possible cellular automata that consider only the states of adjacent cells, the number that are interesting (neither too simple nor too chaotic) is a small fraction of the total. So there is probably fine tuning, but it is, as you mention, extremely hard to quantify.
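The cellular-automaton version of this claim can be made concrete. The sketch below (a rough illustration, not taken from the article) runs all 256 elementary CA rules, the simplest automata in which each cell depends only on itself and its two adjacent cells, and uses the compressed size of each space-time diagram as a crude proxy for interestingness: very compressible diagrams are too simple, nearly incompressible ones are noise-like, and the middle band is one rough way to look for the "neither too simple nor too chaotic" region.

```python
import zlib

def step(cells, rule):
    """One synchronous update of an elementary CA with wrap-around edges."""
    n = len(cells)
    return [(rule >> ((cells[i - 1] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
            for i in range(n)]

def spacetime(rule, width=64, steps=128):
    """Run `rule` from a single seeded cell; return the diagram as bytes."""
    cells = [0] * width
    cells[width // 2] = 1
    rows = []
    for _ in range(steps):
        rows.append(bytes(cells))
        cells = step(cells, rule)
    return b"".join(rows)

# Compressed size as a crude complexity score for each of the 256 rules.
scores = {rule: len(zlib.compress(spacetime(rule))) for rule in range(256)}

lo, hi = min(scores.values()), max(scores.values())
band = [r for r, s in scores.items()
        if lo + (hi - lo) // 3 < s < hi - (hi - lo) // 3]
print(f"rule 0: {scores[0]}, rule 30: {scores[30]}, middle band: {len(band)} rules")
```

Compressed size is only a stand-in for a real complexity measure, but it gives a quantitative handle on "interesting rules are a small fraction" without presupposing any probability distribution over rules.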
But my general criticism of fine-tuning is two-fold. First, the concept is not well defined. There is no apriori probability distribution over possible values. If the possible values are infinite, then any realized value is improbable. Fine tuning is all in the intuition. Charts are drawn showing little "we are here" zones to prove the fine tuning. But the scales are sometimes linear, sometimes logarithmic. And why those parameters and not the square?...or the square root? Bayesian inference is not invariant under change of parameters.
At least for the cosmological constant, there seems to be some understanding of its probability distribution, and it is relatively independent of the other parameters in that it is unrelated to nucleosynthesis, chemistry, etc. Therefore it is our best candidate to consider in isolation from the other parameters in the high-dimensional space.
Second, calling it "fine-tuning" implies some kind of process of "tuning" or "selection". But that's gratuitous. Absent supernatural miracles, we must find ourselves in a universe in which we are nomologically possible. And that is true whether there is one universe or infinitely many. So it cannot be evidence one way or the other for the number of universes.
Let's say we did have an understanding of the distribution of possible universes and the fraction of which supported conscious life. If we discover the fraction to be 1 in 1,000,000 would this not motivate a belief in there being more than one universe?
No, because it is equally evidence that one universe (this one) was realized out of the ensemble. You are relying on an intuition that it is easier to explain why all 1,000,000 exist than to explain why this one exists. But that's an intuition about explaining things, not about any objective probability. Every day things happen that are more improbable than a million-to-one.
You need to take all the histories, which we know exist in arithmetic,
then consciousness will differentiate on those histories which seem to be fine tuned. Like you say, we have to eliminate the selector, except for consciousness.
Until Everett no one thought it necessary to suppose all the counterfactuals happened "somewhere else”.
Well, there was Borges of course, and the idea is present in the whole of neoplatonism, arguably. Then, for anyone who believes that 777 is odd independently of him/herself, all computations are run independently of anyone.
On 10/20/2020 5:39 AM, Bruno Marchal wrote:
I don’t think so. That is why Kolmogorov defines a measure space by forbidding infinite intersections of events. In the finite case the space of events is the complete boolean structure coming from the subsets of the set of possible results. In the infinite domain, the measure space is defined by a strict subset. Perhaps I am missing something, but the axiomatic of Kolmogorov was invented to solve that “infinite number of values” problem.
That's a non-answer. I was just using infinite (as physicists do) to mean bigger than anything we're thinking of. Kolmogorov just shaped his definition to make the mathematics simpler. There's nothing in Jason's analyses that defines the variables as finite. Jason just helps himself to an intuition that a value between 7.5 and 7.7 is "fine-tuned". He didn't first justify the finite interval.
Jason
However, see my explanation for the cosmological constant, a value for which the theory can account for the expected range and probability distribution.
as a consequence of the equation defined here: ftp://ftp.math.ethz.ch/hg/EMIS/journals/AMI/2003/jones.pdf
Jason
On 20 Oct 2020, at 20:22, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
But I do agree that fine-tuning is not always well defined and sometimes misused. Yet I agree that the question is whether there is a fine tuner (but who is it, and how does it make the selection?). Even if real, a fine tuner explains nothing without some explanation of where the fine tuner comes from. In a multiverse or multi-computations (i.e. the sigma_1 arithmetic) consciousness is the fine tuner, and that one is explained already by the (Löbian) universal machine.
Fine tuning is all in the intuition. Charts are drawn showing little "we are here" zones to prove the fine tuning. But the scales are sometimes linear, sometimes logarithmic. And why those parameters and not the square?...or the square root? Bayesian inference is not invariant under change of parameters.
That depends on your OMEGA in the probability space, and the measure you put on the set of events.
Exactly so.
That will be the relative histories with measure near one. Sort of history-neighbourhoods.
By what measure?
On 20 Oct 2020, at 20:26, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 10/20/2020 5:44 AM, Bruno Marchal wrote:
You need to take all the histories, which we know exist in arithmetic,
I don't know what "exists in arithmetic" has to do with existence.
Well, there was Borges of course, and the idea is present in the whole of neoplatonism, arguably. Then, for anyone who believes that 777 is odd independently of him/herself, all computations are run independently of anyone.
That's a non-sequitur. One can try dividing 777 by 2. One can't verify that all computations are run, independently of anyone or otherwise.
Brent
On 10/20/2020 1:27 PM, Jason Resch wrote:
On Tue, Oct 20, 2020 at 1:26 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
That's a non-sequitur. One can try dividing 777 by 2. One can't verify that all computations are run, independently of anyone or otherwise.
If you accept the independent truth of the equation "Y = 2X+1" in the case of "Y=777" and an integer X, then you should likewise also accept the existence of all computations,
How can you be so casual about leaping from "This statement is true." to "The relation it expresses entails that the relata exist." "True" and "exist" are even different words.
"Watson is the companion of Holmes" is true in many logics (just note that its negation is false), yet nobody thinks it makes Sherlock Holmes into a person who existed.
In mathematics, "exists" means there is a value that satisfies (makes true) an expression. It says nothing about whether you can kick it and whether it kicks back.
Brent
as a consequence of the equation defined here: ftp://ftp.math.ethz.ch/hg/EMIS/journals/AMI/2003/jones.pdf
Jason
On 20 Oct 2020, at 20:26, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
You need to take all the histories, which we know exist in arithmetic,
I don't know what "exists in arithmetic" has to do with existence.
We cannot prove the existence of any universal machinery without assuming at least one of them.
Any of them will do, so I can assume at the start the one most people are already familiar with: very elementary arithmetic (Robinson Arithmetic). So I assume 0, s(0), s(s(0)), etc.
By exist (fundamentally, or ontologically, or “really”) I mean the existence of 0, 1, 2, 3, … Only that exists ontologically.
By an observer I mean a number n such that phi_n is a Turing universal and Löbian function (or n is a Turing Löbian machine), and I note “[]” its provability predicate. Incompleteness forces that machine/number to distinguish the 8 modes (that I have described many times, OK?). For each mode [i] we have a corresponding notion of phenomenological existence, with [0] = Gödel’s beweisbar ([]), and [i] = one of the seven remaining modes, for example [1]p = [0]p & p, [2]p = [0]p & <0>t & p, etc. (p always sigma_1).
Psychological existence can then be defined by [1](Ex [1]P(x)), physical existence is something like [2]<2>(Ex [2]<2>P(x)), etc.
There is only one reality (the “standard model of arithmetic”), but with many internal modes, which are the modal variants of “[]” imposed by incompleteness. Both psychology and physics are given by different modes of view on the same (arithmetical) reality.
Again, I could use any universal machinery; they all give the same 8 modes.
It might well be wrong, but that would be more surprising to me than the idea of an anthropic selection process operating in a multiverse.
Jason
In mathematics, "exists" means there is a value that satisfies (makes true) an expression. It says nothing about whether you can kick it and whether it kicks back.
Kicking back occurs from the perspective of entities existing who live within long computational histories which occur in platonically existing computational threads, which exist if you assume arithmetical truth.
On 10/23/2020 8:15 AM, Jason Resch wrote:
On Tue, Oct 20, 2020 at 4:37 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 10/20/2020 1:20 PM, Jason Resch wrote:
On Tue, Oct 20, 2020 at 1:23 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
That's a non-answer. I was just using infinite (as physicists do) to mean bigger than anything we're thinking of. Kolmogorov just shaped his definition to make the mathematics simpler. There's nothing in Jason's analyses that defines the variables as finite. Jason just helps himself to an intuition that a value between 7.5 and 7.7 is "fine-tuned". He didn't first justify the finite interval.
I admit as much in the article. For most parameters, we don't understand the range or probability distribution for the constants.
Then how can you assert there is fine tuning? Does a value of 20±1 qualify? Does it matter whether the possible range was (0,100) or (19,21)?
However, see my explanation for the cosmological constant, a value for which the theory can account for the expected range and probability distribution.
That's right, there is a theory that tells us something about a range and probability distribution. But it's far from an accepted theory, and might well be wrong.
It comes out of QFT, perhaps our most strongly tested theory in science, or at least the one that offers the most accurate verified prediction in physics.
That "comes out of" is very misleading, since it's applying QFT to general relativity which is not even a quantum theory.
The first application of QFT to the problem gave an answer wrong by 120 orders of magnitude.
I don't know what prediction you're referring to; there have been several. Can you cite the paper?
"Nature contains two relative mass scales: the vacuum energy density V ∼ (10^−30 M_Pl)^4 and the weak scale v^2 ∼ (10^−17 M_Pl)^2, where v is the Higgs vacuum expectation value. Their smallness with respect to the Planck scale M_Pl = 1.2 × 10^19 GeV is not understood and is considered as ‘unnatural’ in relativistic quantum field theory, because it seems to require precise cancellations among much larger contributions. If these cancellations happen for no fundamental reason, they are ‘unlikely’, in the sense that summing random order-one numbers gives 10^−120 with a ‘probability’ of about 10^−120."
"No natural theoretical alternatives are known (for example, supergravity does not select V = 0 as a special point [1]), and anthropic selection of the cosmological constant seems possible in theories with some tens of scalars such that their potential has more than 10^120 different vacua, which get ‘populated’ forming a ‘multiverse’ through eternal inflation. String theory could realise this scenario [6–8]."
--
Brent
--It might well be wrong, but that would be more surprising to me than the idea of an anthropic selection process operating in a multiverse.
Jason
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to everything-li...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/everything-list/CA%2BBCJUiJ5c3kakLGC73zuRGX-6gafk0W7NGhuJJn9VOQEtzriA%40mail.gmail.com.
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to everything-li...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/everything-list/a3039f54-142f-9558-984d-cfa5f65d56a0%40verizon.net.
On Fri, Oct 23, 2020 at 4:54 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 10/23/2020 8:15 AM, Jason Resch wrote:
On Tue, Oct 20, 2020 at 4:37 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 10/20/2020 1:20 PM, Jason Resch wrote:
On Tue, Oct 20, 2020 at 1:23 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 10/20/2020 5:39 AM, Bruno Marchal wrote:
On 15 Oct 2020, at 20:56, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
You should have read Vic Stenger's "The Fallacy of Fine Tuning". Vic points out how many examples of fine-tuning are misconceived, including Hoyle's prediction of an excited state of carbon. Vic also points out the fallacy of considering just one parameter when the parameter space is high-dimensional.
But my general criticism of fine-tuning is two-fold. First, the concept is not well defined. There is no a priori probability distribution over possible values. If the possible values are infinite, then any realized value is improbable.
I don’t think so. That is why Kolmogorov defines a measure space by forbidding infinite intersection of events. In the finite case the space of events is the complete boolean structure coming from the subsets of the set of possible results. In the infinite domain, the measure space is defined by a strict subset. Perhaps I am missing something, but Kolmogorov's axiomatization was invented to solve that “infinite number of values” problem.
That's a non-answer. I was just using infinite (as physicists do) to mean bigger than anything we're thinking of. Kolmogorov just shaped his definition to make the mathematics simpler. There's nothing in Jason's analyses that defines the variables as finite. Jason just helps himself to an intuition that a value between 7.5 and 7.7 is "fine-tuned". He didn't first justify the finite interval.
I admit as much in the article. For most parameters, we don't understand the range or probability distribution for the constants.
Then how can you assert there is fine-tuning? Does a value of 20±1 qualify? Does it matter whether the possible range was (0,100) or (19,21)?
However, see my explanation for the cosmological constant, a value for which the theory can account for the expected range and probability distribution.
That's right, there is a theory that tells us something about a range and probability distribution. But it's far from an accepted theory, and might well be wrong.
It comes out of QFT, perhaps our most strongly tested theory in science, at least the one that offers the most accurately verified predictions in physics.
That "comes out of" is very misleading, since it's applying QFT to general relativity which is not even a quantum theory.
But the quantum fields (vacuum) are known to gravitate.
The first application of QFT to the problem gave the wrong answer by 120 orders of magnitude.
Wrong is the wrong word here. The answer was unexpectedly small by that many orders of magnitude, but it is still within the range of possibility.
I don't know what prediction you're referring to, there have been several. Can you cite the paper?
The prediction that the vacuum state contains energy, and that this energy under QFT is the sum of each of the field energies, some of which may be positive or negative, and when they are summed, they come out to be 120 orders of magnitude smaller than the Planck energy (which is the expected energy level of each field). I don't know of a reference to the paper, but I've read it was first calculated by Feynman and Wheeler. I also found this derivation: https://i.imgur.com/m0QhWOv.png
This paper gives three citations [6-8] to accompany this statement, which might also be useful to you:
"Nature contains two relative mass scales: the vacuum energy density V ∼ (10^−30 M_Pl)^4 and the weak scale v^2 ∼ (10^−17 M_Pl)^2, where v is the Higgs vacuum expectation value. Their smallness with respect to the Planck scale M_Pl = 1.2 × 10^19 GeV is not understood and is considered as ‘unnatural’ in relativistic quantum field theory, because it seems to require precise cancellations among much larger contributions. If these cancellations happen for no fundamental reason, they are ‘unlikely’, in the sense that summing random order one numbers gives 10^−120 with a ‘probability’ of about 10^−120."
--
"No natural theoretical alternatives are known (for example, supergravity does not select V = 0 as a special point [1]), and anthropic selection of the cosmological constant seems possible in theories with some tens of scalars such that their potential has more than 10^120 different vacua, which get ‘populated’ forming a ‘multiverse’ through eternal inflation. String theory could realise this scenario [6–8]."
[6] R. Bousso, J. Polchinski, "Quantization of four-form fluxes and dynamical neutralization of the cosmological constant", JHEP 0006 (2000) 006 [arXiv:hep-th/0004134].
[7] L. Susskind, "The Anthropic Landscape of String Theory" [arXiv:hep-th/0302219].
[8] S. Kachru, R. Kallosh, A.D. Linde, S.P. Trivedi, "De Sitter vacua in string theory", Phys. Rev. D68 (2003) 046005 [arXiv:hep-th/0301240].
Jason
--
Brent
It might well be wrong, but that would be more surprising to me than the idea of an anthropic selection process operating in a multiverse.
Jason
On 10/23/2020 3:52 PM, Jason Resch wrote:
On Fri, Oct 23, 2020 at 4:54 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 10/23/2020 8:15 AM, Jason Resch wrote:
On Tue, Oct 20, 2020 at 4:37 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 10/20/2020 1:20 PM, Jason Resch wrote:
On Tue, Oct 20, 2020 at 1:23 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 10/20/2020 5:39 AM, Bruno Marchal wrote:
On 15 Oct 2020, at 20:56, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
You should have read Vic Stenger's "The Fallacy of Fine Tuning". Vic points out how many examples of fine-tuning are misconceived, including Hoyle's prediction of an excited state of carbon. Vic also points out the fallacy of considering just one parameter when the parameter space is high-dimensional.
But my general criticism of fine-tuning is two-fold. First, the concept is not well defined. There is no a priori probability distribution over possible values. If the possible values are infinite, then any realized value is improbable.
I don’t think so. That is why Kolmogorov defines a measure space by forbidding infinite intersection of events. In the finite case the space of events is the complete boolean structure coming from the subsets of the set of possible results. In the infinite domain, the measure space is defined by a strict subset. Perhaps I am missing something, but Kolmogorov's axiomatization was invented to solve that “infinite number of values” problem.
That's a non-answer. I was just using infinite (as physicists do) to mean bigger than anything we're thinking of. Kolmogorov just shaped his definition to make the mathematics simpler. There's nothing in Jason's analyses that defines the variables as finite. Jason just helps himself to an intuition that a value between 7.5 and 7.7 is "fine-tuned". He didn't first justify the finite interval.
I admit as much in the article. For most parameters, we don't understand the range or probability distribution for the constants.
Then how can you assert there is fine-tuning? Does a value of 20±1 qualify? Does it matter whether the possible range was (0,100) or (19,21)?
However, see my explanation for the cosmological constant, a value for which the theory can account for the expected range and probability distribution.
That's right, there is a theory that tells us something about a range and probability distribution. But it's far from an accepted theory, and might well be wrong.
It comes out of QFT, perhaps our most strongly tested theory in science, at least the one that offers the most accurately verified predictions in physics.
That "comes out of" is very misleading, since it's applying QFT to general relativity which is not even a quantum theory.
But the quantum fields (vacuum) are known to gravitate.
"Known" how? You can write down a calculation...which gives infinity as an answer. Having arrived at an obviously wrong answer, you can introduce a cutoff that you guess at based on some dimensional analysis, and get an answer that's wrong by 120 orders of magnitude instead of infinitely. And you then say this shows we know something like this must be right???
The first application of QFT to the problem gave the wrong answer by 120 orders of magnitude.
Wrong is the wrong word here. The answer was unexpectedly small by that many orders of magnitude, but it is still within the range of possibility.
Which is exactly what's wrong with the idea of "fine-tuning". The "range of possibility" is just pulled out of thin air. Suppose life were possible from 1e-60 eV/m^3 to 1e-20 eV/m^3. Would that be "fine-tuning" because (1e-20 − 1e-60) ≪ 1, or because 40 orders of magnitude is small compared to infinity?
I don't know what prediction you're referring to, there have been several. Can you cite the paper?
The prediction that the vacuum state contains energy, and that this energy under QFT is the sum of each of the field energies, some of which may be positive or negative, and when they are summed, they come out to be 120 orders of magnitude smaller than the Planck energy (which is the expected energy level of each field). I don't know of a reference to the paper, but I've read it was first calculated by Feynman and Wheeler. I also found this derivation: https://i.imgur.com/m0QhWOv.png
This paper gives three citations [6-8] to accompany this statement, which might also be useful to you:
"Nature contains two relative mass scales: the vacuum energy density V ∼ (10^−30 M_Pl)^4 and the weak scale v^2 ∼ (10^−17 M_Pl)^2, where v is the Higgs vacuum expectation value. Their smallness with respect to the Planck scale M_Pl = 1.2 × 10^19 GeV is not understood and is considered as ‘unnatural’ in relativistic quantum field theory, because it seems to require precise cancellations among much larger contributions. If these cancellations happen for no fundamental reason, they are ‘unlikely’, in the sense that summing random order one numbers gives 10^−120 with a ‘probability’ of about 10^−120."
But who says the random numbers are order 1?
It's all just fantasizing.
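The "summing random order one numbers" argument in the quote above can be sketched at a scale we can actually simulate. This is a toy Monte Carlo in Python (my illustration, not anything from the cited paper): the chance that a sum of order-one random terms cancels to within ε scales linearly with ε, which is what extrapolates to the quoted probability of ~10^−120 for cancellation to one part in 10^120.

```python
import random

random.seed(0)  # reproducible toy experiment

def cancellation_probability(n_terms, epsilon, trials=100_000):
    """Estimate P(|sum of n_terms uniform(-1,1) numbers| < epsilon)."""
    hits = 0
    for _ in range(trials):
        s = sum(random.uniform(-1.0, 1.0) for _ in range(n_terms))
        if abs(s) < epsilon:
            hits += 1
    return hits / trials

# The sum has width ~sqrt(n_terms), so the probability of cancelling to
# within epsilon is roughly proportional to epsilon: halving epsilon
# should roughly halve the estimate.
p1 = cancellation_probability(10, 0.10)   # roughly 0.04
p2 = cancellation_probability(10, 0.05)   # roughly half of p1
print(p1, p2)
```

Whether the terms really are "order one" is, of course, exactly what Brent is questioning; the sketch only illustrates the scaling, not the physics.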
The only issue I ever had (and I love good sci-fi) is Everett's position (and Bryce DeWitt's and John Wheeler's) that universes spawn on the pop of a decision.
Perhaps it is something more cosmological, such as a supernova or a black hole, acting as both the initiator and modifier of such events? Or a conscious observer (defined as sensorially self-aware) could be the only thing that matters, in which case a machine intelligence with part of its network imitating the spindle cells found in the brains of higher mammals might suffice. Then again, biologists have for some years been surprised by the intelligence of birds such as crows. In that case a crow might suffice for cosmos-splitting, and we could thus conclude that, indeed, bird is the word.
and get an answer that's wrong by 120 orders of magnitude,
It's not wrong by 120 orders of magnitude; it's unexpectedly small by 120 orders of magnitude.
Say you had a wheel marked with every number from 0 to 2π on a continuous range. And upon rolling it, you get 10^-120. This result is not "wrong" or "impossible"; it's as likely as any other result. But a priori, you would not expect to get such a small number.
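The wheel analogy is easy to make quantitative (plain Python; the interval endpoints are just the numbers from the analogy):

```python
import math

# A uniform draw on (0, 2*pi): every individual outcome is equally
# "likely" (probability density 1/(2*pi)), but the probability of landing
# in an interval is its length divided by the total circumference.
total = 2 * math.pi

p_tiny = 1e-120 / total        # landing below 10^-120: possible, but tiny
p_mid  = (3.2 - 3.0) / total   # landing in an unremarkable window (3.0, 3.2)

print(f"P(result < 1e-120) = {p_tiny:.2e}")    # about 1.6e-121
print(f"P(3.0 < result < 3.2) = {p_mid:.4f}")  # about 0.032
```

So "as likely as any other single result" and "far less likely than any ordinary-sized interval" are both true, which is the sense in which the small value is surprising.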
instead of infinitely. And you then say this shows we know something like this must be right???
I never said it must be right. Only that no known alternative explanation exists for the cosmological constant problem, and that according to QFT the vacuum energy shouldn't be zero, and it is known to be nonzero (the Casimir effect, the Lamb shift, and the accelerated expansion of the universe all count as evidence that it is nonzero).
The first application of QFT to the problem gave the wrong answer by 120 orders of magnitude.
Wrong is the wrong word here. The answer was unexpectedly small by that many orders of magnitude, but it is still within the range of possibility.
Which is exactly what's wrong with the idea of "fine-tuning". The "range of possibility" is just pulled out of thin air. Suppose life were possible from 1e-60 eV/m^3 to 1e-20 eV/m^3. Would that be "fine-tuning" because (1e-20 − 1e-60) ≪ 1, or because 40 orders of magnitude is small compared to infinity?
It depends on the probability distribution of the variable.
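That dependence is easy to see numerically. A sketch in Python, using Brent's hypothetical window of 1e-60 to 1e-20 eV/m^3; the full range of 1e-120 to 1 is my assumption purely for illustration:

```python
import math

# Hypothetical life-permitting window (Brent's example) and an assumed
# full range -- the choice of full range is itself part of the problem,
# which is rather the point.
win_lo, win_hi = 1e-60, 1e-20
full_lo, full_hi = 1e-120, 1.0

# Prior uniform in the value: probability mass = ratio of lengths.
p_uniform = (win_hi - win_lo) / (full_hi - full_lo)

# Prior uniform in the logarithm: probability mass = ratio of decades.
decades = lambda lo, hi: math.log10(hi) - math.log10(lo)
p_log = decades(win_lo, win_hi) / decades(full_lo, full_hi)

print(f"uniform prior:     {p_uniform:.1e}")  # ~1e-20 -> looks fine-tuned
print(f"log-uniform prior: {p_log:.3f}")      # 40/120 = 0.333 -> unremarkable
```

The same window looks extraordinarily fine-tuned under one prior and entirely unremarkable under the other, which is why the probability distribution has to be argued for, not assumed.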
I think a more objective way to measure fine-tuning is to weigh universes and physical laws by their Kolmogorov complexity -- what's the shortest possible description that produces them?
The longer the description, the more "tuning" was required to get there, and the rarer such universes are.
In our case, Lambda would add ~120 digits to the cost of our universe in terms of additional information required to describe it.
If the multiverse is real, we should expect that the Kolmogorov complexity of our universe is not much greater than the minimum for universes that produce conscious life. (Perhaps further weighted in terms of the number of observers each such universe produces).
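As a back-of-envelope check on this, here is the cost in bits of the ~120 digits attributed to Λ, weighted universal-prior style as 2^−K (the 2^−K weighting is my gloss on the proposal, not an established measure over universes):

```python
import math

digits = 120                      # decimal digits Lambda appears to add
bits = digits * math.log2(10)     # information cost: ~398.6 bits
weight = 2.0 ** -bits             # 2^-K style weight for those extra bits

# Note that 2^-398.6 is just 10^-120 again: a 120-digit specification is
# penalized by exactly the factor the cosmological-constant gap suggests.
print(f"{bits:.1f} bits, weight = 10^{math.log10(weight):.1f}")
```

So the description-length picture and the "probability ~10^−120" picture are the same statement in different units.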
I don't know what prediction you're referring to, there have been several. Can you cite the paper?
The prediction that the vacuum state contains energy, and that this energy under QFT is the sum of each of the field energies, some of which may be positive or negative, and when they are summed, they come out to be 120 orders of magnitude smaller than the Planck energy (which is the expected energy level of each field). I don't know of a reference to the paper, but I've read it was first calculated by Feynman and Wheeler. I also found this derivation: https://i.imgur.com/m0QhWOv.png
This paper gives three citations [6-8] to accompany this statement, which might also be useful to you:
"Nature contains two relative mass scales: the vacuum energy density V ∼ (10^−30 M_Pl)^4 and the weak scale v^2 ∼ (10^−17 M_Pl)^2, where v is the Higgs vacuum expectation value. Their smallness with respect to the Planck scale M_Pl = 1.2 × 10^19 GeV is not understood and is considered as ‘unnatural’ in relativistic quantum field theory, because it seems to require precise cancellations among much larger contributions. If these cancellations happen for no fundamental reason, they are ‘unlikely’, in the sense that summing random order one numbers gives 10^−120 with a ‘probability’ of about 10^−120."
But who says the random numbers are order 1?
It's all just fantasizing.
It's using the Planck scale as the upper bound.
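For concreteness, the size of the gap with a Planck-scale cutoff can be checked directly. A sketch in Python, using CODATA-style SI constants and the standard ρ_Λ = Λc^4/(8πG) with the observed Λ ≈ 1.1×10^−52 m^−2:

```python
import math

c    = 2.998e8      # speed of light, m/s
G    = 6.674e-11    # Newton's constant, m^3 kg^-1 s^-2
hbar = 1.055e-34    # reduced Planck constant, J s
Lam  = 1.1e-52      # observed cosmological constant, m^-2

# Naive QFT estimate with a Planck-scale cutoff: roughly one Planck
# energy per Planck volume, i.e. the Planck energy density.
rho_planck = c**7 / (hbar * G**2)         # ≈ 4.6e113 J/m^3

# Vacuum energy density implied by the observed Lambda.
rho_obs = Lam * c**4 / (8 * math.pi * G)  # ≈ 5.3e-10 J/m^3

gap = math.log10(rho_planck / rho_obs)
print(f"discrepancy: ~10^{gap:.0f}")      # about 123 orders of magnitude
```

The "120 orders of magnitude" figure quoted in the thread is this same ratio to within rounding conventions; different cutoff choices shift it by a few orders either way.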